A Glance at NaNoGenMo 2024

It’s already been a full month since the most recent National Novel Generation Month (NaNoGenMo 2024). I surely should have written up some thoughts sooner. Other computer-generated texts have kept me busy, though!

The anthology I edited with Lillian-Yvonne Bertram, Output: An Anthology of Computer-Generated Text, 1953–2023, was published on November 5, and we’ve been going to events to discuss it, read from it, and hear people’s reactions. More information about Output can be found in my previous blog post, which I’m updating to reflect upcoming events.

Many of the selections in the book are excerpts from NaNoGenMo projects — not only ones in the Novels section, because this activity inspires people to do all sorts of more-than-50,000-word projects.

I do notice that while everyone seems to be in a rapt fervor about generative AI, and there is an overabundance of POD books produced using commercial LLM-based systems, this is what’s happening with NaNoGenMo:

2020: 56 completed projects.
2021: 56 completed projects.
2022: 33 completed projects.
2023: 23 completed projects.
2024: 22 completed projects.

I haven’t yet retooled as a data scientist, but it seems that fewer and fewer projects are being completed in recent years. I also feel that more of the projects are not in the spirit of the original NaNoGenMo, which called for a sample “novel” to be shared along with code. Some participants employ proprietary commercial LLMs, so sharing code is not (in my understanding of the requirement) a possibility. Of course, opinions vary. Hugovk and the community have been accepting of projects of this sort, so I won’t clamor to kick them off GitHub.

Worth noting, however: Not all LLM-based NaNoGenMo projects are unfree. If open/free LLMs are part of one’s project, they can be shared along with the code used to invoke them. That’s how Barney Livingston restaged his novel generator based on frames of the movie A.I. He even ran the model locally, a great alternative to Bitcoin mining for heating your house during the bleak November. While he found the results more coherent, he notes: “I think repeating this with future tools will result in even blander results, AI was much more amusing back when it was shonkier.”

There’s another way that even a commercial LLM-based system can be used with sharable code as the result: Have the system generate the code from very high-level instructions, as Chris Pressey did with The Resistance. The result has its compelling moments and charms — police_station is always written in snake case, for instance, and consider this paragraph: “Maria Smith responded, ‘We need to act on the situation.’ while outlining the situation. antagonist replied, ‘What’s our first step?’” With 9483 lines of code in 77 source files, it’s no wonder that Pressey considers the generated system to have a “glorious trainwreck-y quality.”
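To make the first of these approaches concrete, here is a minimal sketch of running an openly licensed model locally and accumulating enough text to cross NaNoGenMo’s 50,000-word line. It is not Livingston’s actual pipeline; the model name, the per-scene prompts, and the generation settings are all placeholders.

    # Hypothetical sketch only: draft a 50,000-plus-word "novel" with an openly
    # licensed model run locally. Not Livingston's pipeline; the model name,
    # the per-scene prompts, and the settings are placeholders.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # any open model would do

    prompts = [f"Scene {n}. " for n in range(1, 1001)]  # stand-ins for frame descriptions
    chunks = []
    words = 0

    for prompt in prompts:
        result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.9)
        text = result[0]["generated_text"]
        chunks.append(text)
        words += len(text.split())
        if words >= 50000:  # NaNoGenMo's finish line
            break

    with open("novel.txt", "w", encoding="utf-8") as f:
        f.write("\n\n".join(chunks))

Because the model weights are openly licensed and the script is about a page, both can live in the project repository, which is the point.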

Whatever’s going on with NaNoGenMo trends in the 20s, my own enthusiasm for this online event is undimmed. I contributed an offhand project this year, The Fall. I’m more keen on the code (a single page) than the output, if that makes any sense. The composition technique my generator uses is perhaps less sophisticated than the similar one employed by Vera Chellgren’s Algorithm Pretending to Be AI. I learned that Thomas E. Kurtz, co-creator of BASIC, had died during the month, and decided to do something as a tribute to him: My project is implemented in a modern-day BASIC. The BASIC programming language, which became the lingua franca of home computing, prompted many of us to explore the creative potential of the computer, and to use it as a language machine, in fun and literary ways. So perhaps there would have been no NaNoGenMo without BASIC?

Somewhat related, I was pleased to see that Charles Mangin started his project on an Apple II (although in 6502 assembly, not BASIC) and wound up with a fine mash-up of Frankenstein and Jane Eyre produced by a one-line bash script. And speaking of scripts, I was intrigued by the beginnings of a generated primer for Shavian, an alternate alphabet for English. Another innovative project, based on craft tradition and making connections between number, color, and verbal art, was Lee Tusman’s quilt poems.
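I haven’t dug into Mangin’s one-liner, but a mash-up in this spirit takes only a few lines. The sketch below is a hypothetical stand-in rather than his actual script: it interleaves sentences from local plain-text copies of the two novels, with the file names and the crude sentence splitting as assumptions.

    # Hypothetical sketch, not Mangin's actual one-liner: interleave sentences
    # from plain-text copies of the two public-domain novels. The file names
    # and the crude sentence splitting are assumptions.
    import re

    def sentences(path):
        with open(path, encoding="utf-8") as f:
            text = " ".join(f.read().split())  # collapse line breaks and runs of spaces
        return re.split(r"(?<=[.!?])\s+", text)  # naive sentence boundaries

    frankenstein = sentences("frankenstein.txt")  # e.g., from Project Gutenberg
    jane_eyre = sentences("jane_eyre.txt")

    # Alternate sentence by sentence until the shorter novel runs out.
    mashup = [s for pair in zip(frankenstein, jane_eyre) for s in pair]

    with open("mashup.txt", "w", encoding="utf-8") as f:
        f.write(" ".join(mashup))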

One of the first people to start work in 2024, James Burt, posted some notes about his work. “Working with the LLM fills me with awe,” he wrote, at first, of these systems’ prodigious ability to generate text. At the end of the process, though, he found that “It was an interesting experiment, although the book produced was not particularly engaging. There’s a flatness to LLM-generated prose which I didn’t overcome.” I wonder if such deflation was shared by other NaNoGenMo participants trying out LLMs, whether they have been around for a while or are new to the game. I did like some projects using Transformer-architecture models based on massive text corpora, but they were the ones that were conceptually clever and extreme: Barney Livingston’s re-creation of A.I. A.I. by an A.I. and Chris Pressey’s taking AI assistance with coding way over the top. And Livingston did end up saying that he probably wouldn’t redo his experiment, and that he “strongly suspect[s] we’re well into the diminishing returns stage of large language models.”

But we need not use LLMs, or even more concise statistical models. Plenty of other directions are being explored. I look forward to there being several dozen generated book projects in years to come, using models large, small, existing, and … novel.
