{"id":5247,"date":"2025-01-02T14:27:19","date_gmt":"2025-01-02T19:27:19","guid":{"rendered":"https:\/\/nickm.com\/post\/?p=5247"},"modified":"2025-01-02T14:40:46","modified_gmt":"2025-01-02T19:40:46","slug":"a-glance-at-nanogenmo-2024","status":"publish","type":"post","link":"https:\/\/nickm.com\/post\/2025\/01\/a-glance-at-nanogenmo-2024\/","title":{"rendered":"A Glance at NaNoGenMo 2024"},"content":{"rendered":"<p>It\u2019s already been a full month since the most recent<\/p>\n\n<p><a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\">National Novel Generation Month (NaNoGenMo 2024).<\/a> I surely should have written up some thoughts sooner. Other computer-generated texts have kept me busy, though!<\/p>\n\n<p>The anthology I edited with Lillian-Yvonne Bertram, <a href=\"https:\/\/mitpress.mit.edu\/9780262549813\/output\/\"><em>Output: An Anthology of Computer-Generated Text, 1953\u20132023,<\/em><\/a> was published on November 5 and we\u2019ve been going to discuss it, read from it, and hear people\u2019s reactions. 
More information about <em>Output<\/em> can be found in <a href=\"https:\/\/nickm.com\/post\/2024\/11\/output_out\/\">my previous blog post,<\/a> which I\u2019m updating to reflect upcoming events.<\/p>\n\n<p>Many of the selections in the book are excerpts from NaNoGenMo projects \u2014 not only ones in the Novels section, because this activity inspires people to do all sorts of more-than-50,000-word projects.<\/p>\n\n<p>I do notice that while everyone seems to be in a rapt fervor about generative AI, and there is an overabundance of POD books produced using commercial LLM-based systems, this is what\u2019s happening with NaNoGenMo:<\/p>\n\n<p>2020: <strong>56<\/strong> completed projects.<br \/>\n2021: <strong>56<\/strong> completed projects.<br \/>\n2022: <strong>33<\/strong> completed projects.<br \/>\n2023: <strong>23<\/strong> completed projects.<br \/>\n2024: <strong>22<\/strong> completed projects.<\/p>\n\n<p>I haven\u2019t yet retooled as a data scientist, but it seems that fewer and fewer projects are being completed in recent years. I also feel that more of the projects are not in the spirit of the original NaNoGenMo, which called for a sample \u201cnovel\u201d to be shared <em>along with code.<\/em> Some participants employ proprietary commercial LLMs, so sharing code is not (in my understanding of the requirement) a possibility. Of course, opinions vary. Hugovk and the community have been accepting of projects of this sort, so I won\u2019t clamor to kick them off GitHub.<\/p>\n\n<p>Worth noting, however: Not all LLM-based NaNoGenMo projects are unfree. If open\/free LLMs are part of one\u2019s project, they can be shared along with the code used to invoke them. That\u2019s how <a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\/14\">Barney Livingston restaged his novel generator based on frames of the movie <em>A.I.<\/em><\/a> He even ran the model locally, a great alternative to Bitcoin mining for heating your house during the bleak November. 
While he found the results more coherent, he notes: \u201cI think repeating this with future tools will result in even blander results, AI was much more amusing back when it was shonkier.\u201d There\u2019s another way that even a commercial LLM-based system can be used with sharable code as the result: Have the system generate the code <a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\/6\">from <em>very high-level instructions,<\/em> as Chris Pressey did with <em>The Resistance.<\/em><\/a> The result has its compelling moments and charms \u2014 police_station is always written in snake case, for instance, and consider the paragraph: <em>Maria Smith responded, &#8220;We need to act on the situation.&#8221; while outlining the situation. antagonist replied, &#8220;What&#8217;s our first step?&#8221;<\/em> With 9483 lines of code in 77 source files, it\u2019s no wonder that Pressey considers the generated system to have a \u201cglorious trainwreck-y quality.\u201d<\/p>\n\n<p>Whatever\u2019s going on with NaNoGenMo trends in the 20s, my own enthusiasm for this online event is undimmed. I contributed an offhand project this year, <a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\/24\"><em>The Fall.<\/em><\/a> I\u2019m more keen on the code (a single page) than the output, if that makes any sense. The composition technique my generator uses is perhaps less sophisticated than the similar one employed by <a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\/19\">Vera Chellgren\u2019s <em>Algorithm Pretending to Be AI.<\/em><\/a> I learned that co-creator of BASIC Thomas E. Kurtz had died during the month, and decided to do something as a tribute to him: My project is implemented in a modern-day BASIC. The BASIC programming language, which became the lingua franca of home computing, prompted many of us to explore the creative potential of the computer, and to use it as a language machine, in fun and literary ways. 
So perhaps there would have been no NaNoGenMo without BASIC?<\/p>\n\n<p>Somewhat related, I was pleased to see that Charles Mangin started his project on an Apple II (although in 6502 assembly, not BASIC) and wound up with <a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\/4\">a fine mash-up of <em>Frankenstein<\/em> and <em>Jane Eyre<\/em> produced by a one-line bash script.<\/a> And speaking of scripts, I was intrigued by the <a href=\"https:\/\/cnoocy.dreamwidth.org\/103400.html\">beginnings of a generated primer for Shavian,<\/a> an alternate alphabet for English. Another innovative project, based on craft tradition and making connections between number, color, and verbal art, was <a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\/22\">Lee Tusman\u2019s quilt poems.<\/a><\/p>\n\n<p>One of the <a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\/9\">first people to start work in 2024, James Burt,<\/a> posted updates about his work. <a href=\"https:\/\/jamesburt.me.uk\/2024\/11\/nanogenmo-updates\/\">\u201cWorking with the LLM fills me with awe,\u201d<\/a> he wrote, at first, of these systems\u2019 prodigious ability to generate text. At the end of the process, though, he found that <a href=\"https:\/\/jamesburt.me.uk\/2024\/12\/thoughts-on-nanogenmo-2024\/\">\u201cIt was an interesting experiment, although the book produced was not particularly engaging. There\u2019s a flatness to LLM-generated prose which I didn\u2019t overcome.\u201d<\/a> I wonder if such deflation was shared by other NaNoGenMo participants trying out LLMs, whether they have been around for a while or are new to the game. I did like some projects using Transformer-architecture models based on massive text corpora, but they were the ones that were conceptually clever and extreme: <a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\/14\">Barney Livingston\u2019s re-creation of <em>A.I. A.I. 
by an A.I.<\/em><\/a> and <a href=\"https:\/\/github.com\/NaNoGenMo\/2024\/issues\/6\">Chris Pressey\u2019s taking AI assistance with coding way over the top.<\/a> And Livingston did end up saying that he probably wouldn\u2019t redo his experiment, and that he <a href=\"https:\/\/github.com\/barnoid\/AIAI2\">\u201cstrongly suspect[s] we&#8217;re well into the diminishing returns stage of large language models.\u201d<\/a><\/p>\n\n<p>But we need not use LLMs, or even more concise statistical models. Plenty of other directions are being explored. I look forward to there being <em>several<\/em> dozen generated book projects in years to come, using models large, small, existing, and &#8230; novel.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>It\u2019s already been a full month since the most recent National Novel Generation Month (NaNoGenMo 2024). I surely should have written up some thoughts sooner. Other computer-generated texts have kept me busy, though! The anthology I edited with Lillian-Yvonne Bertram, Output: An Anthology of Computer-Generated Text, 1953\u20132023, was published on November 5 and we\u2019ve been &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/nickm.com\/post\/2025\/01\/a-glance-at-nanogenmo-2024\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;A Glance at NaNoGenMo 
2024&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[138,168,30],"class_list":["post-5247","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-generation","tag-nanogenmo","tag-story-generation"],"_links":{"self":[{"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/posts\/5247","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/comments?post=5247"}],"version-history":[{"count":5,"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/posts\/5247\/revisions"}],"predecessor-version":[{"id":5253,"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/posts\/5247\/revisions\/5253"}],"wp:attachment":[{"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/media?parent=5247"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/categories?post=5247"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nickm.com\/post\/wp-json\/wp\/v2\/tags?post=5247"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}