The OUTPUT Anthology is Out!

OUTPUT: An Anthology of Computer-Generated Text 1953–2023 in a hand model’s hands (or an AI facsimile thereof?)

I’m delighted that after more than four years of work by Lillian-Yvonne Bertram and myself — we’re co-editors of this book — the MIT Press and Counterpath have jointly published

Output: An Anthology of Computer-Generated Text, 1953–2023

Book launch events are posted here and will be updated as new ones are scheduled!

This anthology spans seven decades of computer-generated text, beginning before the term “artificial intelligence” was even coined. While not restricted to poetry, fiction, and other creative projects, it reveals the rich work that has been done by artists, poets, and other sorts of writers who have taken computing and code into their own hands. The anthology includes examples of powerful and principled rhetorical generation along with story generation systems based on cognitive research. There are examples of “real news” generation that has already been informing us — along with hoaxes and humor.

Page spread from OUTPUT with Everest Pipkin’s i’ve never picked a protected flower

Page spread from OUTPUT with Talan Memmott’s Self Portrait(s) [as Other(s)]

Page spread from OUTPUT with thricedotted’s The Seeker

It’s all contextualized by brief introductions to each excerpt, longer introductions to each fine-grained genre of text generation, and an overall introduction that Lillian-Yvonne and I wrote. There are 200 selections in the 500-page book, which we hope will be a valuable sourcebook for academics and students — but also a way for general readers to learn about innovations in computing and writing.

You can buy Output now from several sources. I suggest your favorite independent bookseller! If you’re in the Boston area, stop by the MIT Press Bookstore, which, as of actually publishing this post, has 14 copies on hand!

Upcoming Book Launches & Talks

January 13 (Monday) “The Output Anthology at Computer-Generated Text’s Cultural Crux”, a talk of mine at the UCSC Computational Media Colloquium, Engineering 2 Room 280, 12:30pm–1:30pm. Free & open to the public.

January 20 (Monday) Toronto book launch with me, Matt Nish-Lapidus, Kavi Duvvoori, and others TBA at the University of Toronto’s Centre for Culture & Technology, 6pm–7:30pm. Free & open to the public.

March 11 (Tuesday) Massachusetts Institute of Technology book launch with the editors, MIT’s Room 32-155, 5pm–6:30pm. Free & open to the public. Book sales thanks to the MIT Press Bookstore.

Previous Events

November 11 (Monday): Both editors spoke at the University of Virginia, Bryan Hall, Faculty Lounge, Floor 2. Free & open to the public. 5pm.

November 20 (Wednesday): Online book launch for Output, hosted by the University of Maryland. Both editors in conversation with Matt Kirschenbaum. Free, register on Zoom. 12noon Eastern Time.

November 21 (Thursday) Book launch at WordHack with me, David Gissen, Sasha Stiles, Andrew Yoon, and open mic presenters. Wonderville, 1186 Broadway, Brooklyn, 7pm. $15. Book sales.

December 6 (Friday) Output will be available for sale and I’ll be at the Bad Quarto / Nick Montfort table at Center for Book Arts Winter Market, 28 W 27th St Floor 3, 4pm–8pm.

December 9 (Monday) Book launch at Book Club Bar with the editors, Charles Bernstein, Robin Hill, Stephanie Strickland, and Leonard Richardson. 197 E 3rd St (at Ave B), New York City’s East Village. Free, RSVP required. 8pm. Book sales thanks to Book Club.

December 13 (Friday) European book launch with the editors, Scott Rettberg, and Tegan Pyke. University of Bergen’s Center for Digital Narrative, Langesgaten 1-2, 3:30pm. Free & open to the public, book sales thanks to Akademika. This event was streamed & recorded and is available to view on YouTube.

Advice Concerning the Increase in AI-Assisted Writing

Edward Schiappa, Professor of Rhetoric
Nick Montfort, Professor of Digital Media
10 January 2023

[In response to a request, this is an HTML version of a PDF memo for our colleagues at MIT]

There has been a noticeable increase in student use of AI assistance for writing recently. Instructors have expressed concerns and have been seeking guidance on how to deal with systems such as GPT-3, which is the basis for the very recent ChatGPT. The following thoughts on this topic are advisory from the two of us: They have no official standing within even our department, and certainly not within the Institute. Nonetheless, we hope you find them useful.

Newly available systems go well beyond grammar and style checking to produce nontrivial amounts of text. There are potentially positive uses of these systems; for instance, to stimulate thinking or to provide a variety of ideas about how to continue from which students may choose. In some cases, however, the generated text can be used without a great deal of critical thought to constitute almost all of a typical college essay. Our main four suggestions for those teaching a writing subject are as follows:

  1. Explore these technologies yourself and read what has been written about them in peer-reviewed and other publications,
  2. understand how these systems relate to your learning goals,
  3. construct your assignments to align with learning goals and the availability of these systems, and
  4. include an explicit policy regarding AI/LLM assistance in your syllabus.

Exploring AI and LLMs

LLMs (Large Language Models) such as those in the GPT series have many uses, for instance in machine translation and speech recognition, but their main implications for writing education have to do with natural language generation. A language model is a probability distribution over sequences of words; ones that are “large” have been trained on massive corpora of texts. This allows the model to complete many sorts of sentences in cohesive, highly plausible ways that are sometimes semantically correct. An LLM can determine, for instance, that the most probable completion of the word sequence “World War I was triggered by” is “the assassination of Archduke Franz Ferdinand” and can continue from there. While impressive in many ways, these models also have several limitations. We are not currently seeking to provide a detailed critique of LLMs, but advise that instructors read about the capabilities and limitations of AI and LLMs.
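The underlying probabilistic idea, stripped of everything that makes these models large, can be sketched with a toy bigram model. This is only an illustration of “a probability distribution over sequences of words”; a real LLM is a neural network trained on massive corpora of tokens, not a word-count table, and the tiny corpus here is invented for the example:

```python
from collections import Counter, defaultdict

# Toy corpus echoing the example in the text.
corpus = ("world war one was triggered by the assassination "
          "of archduke franz ferdinand").split()

# Count which word follows which: a bigram "language model".
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def most_probable_next(word):
    # The completion with the highest count is the most probable one.
    return follows[word].most_common(1)[0][0]

print(most_probable_next("triggered"))  # → "by"
```

An LLM does the same kind of thing at vastly greater scale, conditioning on long stretches of preceding text rather than a single word, which is what lets it continue a prompt in cohesive, highly plausible ways.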

To understand more about such systems, it is worth spending some time with those that are freely available. The one attracting the most attention is ChatGPT. The TextSynth Playground also provides access to several free/open-source LLMs, including the formidable GPT-NeoX-20B. ChatGPT uses other AI technologies and is presented in the form of a chatbot, while GPT-NeoX-20B is a pure LLM that allows users to change parameters in addition to providing prompts.

Without providing a full bibliography, there is considerable peer-reviewed literature on LLMs and their implications. We suggest “GPT-3: What’s it good for?” by Robert Dale and “GPT-3: Its Nature, Scope, Limits, and Consequences” by Luciano Floridi & Massimo Chiriatti. These papers are from 2020 and refer to GPT-3; their insights about LLMs remain relevant. Because ChatGPT was released in late November 2022, the peer-reviewed research about it is scant. One recent article, “Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education,” offers a short human-authored introduction and conclusion, presenting sample text generated by ChatGPT between these.

Understanding the Relationship of AI and LLMs to Learning Goals

The advisability of any technology or writing practice depends on context, including the pedagogical goals of each class.

It may be that the use of a system like ChatGPT is not only acceptable to you but is integrated into the subject, and should be required. One of us taught a course dealing with digital media and writing last semester in which students were assigned to computer-generate a paper using such a freely-available LLM. Students were also assigned to reflect on their experience afterwards, briefly, in their own writing. The group discussed its process and insights in class, learning about the abilities and limitations of these models. The assignment also prompted students to think about human writing in new ways.

There are, however, reasons to question the practice of AI and LLM text generation in college writing courses.

First, if the use of such systems is not agreed upon and acknowledged, the practice is analogous to plagiarism. Students will be presenting writing as their own that they did not produce. To be sure, there are practices of ghost writing and of appropriation writing (including parody) which, despite their similarity to plagiarism, are considered acceptable in particular contexts. But in an educational context, when writing of this sort is not authorized or acknowledged, it does not advance learning goals and makes the evaluation of student achievement difficult or impossible.

Second, and relatedly, current AI and LLM technologies provide assistance that is opaque. Even a grammar checker will explain the grammatical principle that is being violated. A writing instructor should offer much better explanatory help to a student. But current AI systems just provide a continuation of a prompt.

Third, the Institute’s Communication Requirement was created (in part) in response to alumni reporting that writing and speaking skills were essential for their professional success, and that they did not feel their undergraduate education adequately prepared them to be effective communicators.[1] It may be, in the fullness of time, that learning how to use AI/LLM technologies to assist writing will be an important or even essential skill. But we are not at this juncture yet, and the core rhetorical skills involved in written and oral communication—invention, style, grammar, reasoning and argument construction, and research—are ones every MIT undergraduate still needs to learn.

For these reasons, we suggest that you begin by considering what the objectives are for your subject. If you are aiming to help students critique digital media systems or understand the implications of new technologies for education, you may find the use of AI and LLMs not only acceptable but important. If your subject is Communication Intensive, however, an important goal for your course is to develop and enhance your students’ independent writing and speaking ability. For most CI subjects, therefore, the use of AI-assisted writing should be at best carefully considered. It is conceivable at some point that it will become standard procedure to teach most or all students how to write with AI assistance, but in our opinion, we have not reached that point. The cognitive and communicative skills taught in CI subjects require that students do their own writing, at least at the beginning of 2023.

Constructing Assignments in Light of Learning Goals and AI/LLMs

Assigning students to use AI and LLMs is more straightforward, so we focus on the practical steps that can be taken to minimize use when the use of these systems does not align with learning goals. In general, the more detailed and specific a writing assignment, the better, as the prose generated by ChatGPT (for example) tends to be fairly generic and plain.

Furthermore, instructors are encouraged to consult MIT’s writing and communication resources to seek specific advice as to how current assignments can be used while minimizing the opportunities for student use of AI assistance. These resources include the Writing, Rhetoric, and Professional Communication program; the MIT Writing & Communication Center; and the English Language Studies program. It is our understanding that MIT’s Teaching + Learning Lab will be offering advice and resources as well.

Other possible approaches include:

• In-class writing assignments
• Reaction papers that require a personalized response or discussion
• Research papers requiring quotations and evidence from appropriate sources
• Oral presentations based on notes rather than a script
• Assignments requiring a response to current events, e.g., from the past week

The last of these approaches is possible because LLMs are trained using a fixed corpus and there is a cutoff date for the inclusion of documents.

Providing an Explicit Policy

Announcing a policy clearly is important in every case. If your policy involves the prohibition of AI/LLM assistance, we suggest you have an honest and open conversation with students about it. It is appropriate to explain why AI assistance is counter to the pedagogical goals of the subject. Some instructors may want to go into more detail by exploring apps like ChatGPT in class and illustrating to students the relative strengths and weaknesses of AI-generated text.

In any case, what this policy should be depends on the kinds and topics of writing in your subject. If you do not wish your students to use AI-assisted writing technologies, you should state so explicitly in your syllabus. If the use of this assistance is allowable within bounds, or even required because students are studying these technologies, that should be stated as well.

In the case of prohibition, you could simply state: “The use of AI software or apps to write or paraphrase text for your paper is not allowed.” Stronger wording could be: “The use of AI software or apps to write or paraphrase text for your paper constitutes plagiarism, as you did not author these words or ideas.”

There are automated methods (such as Turnitin and GPTZero) that can search student papers for AI-generated text and indicate, according to their own models of language, how probable it is that some text was generated rather than human-written. We do not, however, know of any precedent for disciplinary measures (such as failing an assignment) being instituted based on probabilistic evidence from such automated methods.

Conclusion

The use of AI/LLM text generation is here to stay. Those of us involved in writing instruction will need to be thoughtful about how it impacts our pedagogy. We are confident that the Institute’s Subcommittee on the Communication Requirement, the Writing, Rhetoric, and Professional Communication program, the MIT Writing & Communication Center, and the English Language Studies program will provide appropriate resources and counsel when appropriate. We also believe students are here at MIT to learn, and will be willing to follow thoughtfully advanced policies so they can learn to become better communicators. To that end, we hope that what we have offered here will help to open an ongoing conversation.[2]


[1] L. Perelman, “Data Driven Change Is Easy; Assessing and Maintaining It Is the Hard Part,” Across the Disciplines 6.2 (Fall 2009). Available: http://wac.colostate.edu/atd/assessment/perelman.cfm. L. Perelman, “Creating a Communication-Intensive Undergraduate Curriculum in Science and Engineering for the 21st Century: A Case Study in Design and Process.” In Liberal Education in 21st Century Engineering, Eds. H. Luegenbiehl, K. Neeley, and David Ollis. New York: Peter Lang, 2004, pp. 77-94.

[2] Our thanks to Eric Klopfer for conversing with us concerning an earlier draft of this memo.

Golem and My Other Seven Computer-Generated Books in Print

Dead Alive Press has just published my Golem, much to my delight, and I am launching the book tonight, in just a few minutes, at WordHack, a monthly event run by the NYC community gallery Babycastles.

This seems like a great time to credit the editors and presses I have worked with to publish several of these books, and to let you all know where they can be purchased, should you wish to indulge yourselves:

  • Golem, 2021, Dead Alive’s New Sight series. Thank you, Augusto Corvalan!
  • Hard West Turn, 2018, published by my Bad Quarto thanks to John Jenkins’s work on the Espresso Book Machine at the MIT Press Bookstore.
  • The Truelist, 2017, Counterpath. This book was the first in the Using Electricity series which I edit, and was selected and edited by Tim Roberts—thanks! Both Counterpath publications and my book from Les Figues are distributed by the nonprofit Small Press Distribution.
  • Autopia, 2016, Troll Thread. Thank you, Holly Melgard!
  • 2×6, 2016, Les Figues. This book is a collaboration between me and six others: Serge Bouchardon, Andrew Campana, Natalia Fedorova, Carlos León, Aleksandra Małecka, and Piotr Marecki. Thank you, Teresa Carmody!
  • Megawatt, 2014, published by my Bad Quarto thanks to Jeff Mayersohn’s Espresso Book Machine at the Harvard Book Store.
  • #!, 2014, Counterpath. Thank you, Tim Roberts!
  • World Clock, 2013, published by my Bad Quarto thanks to Jeff Mayersohn’s Espresso Book Machine at the Harvard Book Store.

The code and text to these books are generally free, and can be found on nickm.com, my site. They are presented in print for the enjoyment of those who appreciate book objects, of course!

Generative Unfoldings, Opening April 1, 2021

Generative Unfoldings, 14 images from 14 generative artworks

Generative Unfoldings is an online exhibit of generative art that I’ve curated. The artworks run live in the browser and are entirely free/libre/open-source software. Sarah Rosalena Brady, D. Fox Harrell, Lauren Lee McCarthy, and Parag K. Mital worked with me to select fourteen artworks. The show features:

  • Can the Subaltern Speak? by Behnaz Farahi
  • Concrete by Matt DesLauriers
  • Curse of Dimensionality by Philipp Schmitt
  • Gender Generator by Encoder Rat Decoder Rat
  • Greed by Maja Kalogera
  • Hexells by Alexander Mordvintsev
  • Letter from C by Cho Hye Min
  • Pac Tracer by Andy Wallace
  • P.S.A.A. by Juan Manuel Escalante
  • Seedlings_: From Humus by Qianxun Chen & Mariana Roa Oliva
  • Self Doubting System by Lee Tusman
  • Someone Tell the Boyz by Arwa Mboya
  • Songlines by Ágoston Nagy
  • This Indignant Page: The Politics of the Paratextual by Karen ann Donnachie & Andy Simionato

There is a (Screen) manifestation of Generative Unfoldings, which lets people run the artworks in their browsers. In addition, a (Code) manifestation provides a repository of all of the free/libre/open-source source code for these client-side artworks. This exhibit is a project of MIT’s CAST (Center for Art, Science & Technology) and part of the Unfolding Intelligence symposium. The opening, remember, is April 1, 2021! See the symposium page, where you can register (at no cost) and find information about joining us.

Sonnet Corona

Sonnet Corona, detail from a particular generated poem in the browser

“Sonnet Corona” is a computer-generated sonnet, or if you look at it differently, a sonnet cycle or very extensive crown of sonnets. Click here to read some of the generated sonnets.

The sonnets generated are in monometer. That is, each line consists of a single foot, which in this case is strictly two syllables.

They are linked not by the last line of one becoming the first line of the next, but by being generated from the same underlying code: A very short web page with a simple, embedded JavaScript program.

Because there are three options for each line, there are 3¹⁴ = 4,782,969 possible sonnets.
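The scheme can be sketched in a few lines of code. The placeholder options below are hypothetical, invented for illustration; the actual two-syllable lines live in the embedded JavaScript of the page itself:

```python
import random

# Hypothetical two-syllable options; the real ones are in the page's JavaScript.
OPTIONS = [["dim sun", "lone sea", "pale sky"]] * 14

def sonnet():
    # Pick one of the three options for each of the fourteen lines.
    return "\n".join(random.choice(line) for line in OPTIONS)

# Three independent choices per line, fourteen lines:
assert 3 ** 14 == 4_782_969
```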

I have released this (as always) as free software, so that anyone may share, study, modify, or make use of it in any way they wish. To be as clear as possible, you should feel free to right-click or Command-click on this link to “Sonnet Corona,” choose “Save link as…,” and then edit the file that you download in a text editor, using this file as a starting point for your own project.

This extra-small project has as its most direct antecedent the much more extensive and elaborate Cent mille milliards de poèmes by Raymond Queneau.

My thanks go to Stephanie Strickland, Christian Bök, and Amaranth Borsuk for discussing a draft of this project with me, thoroughly and on short notice.

Sea and Spar Between 1.0.2

When it rains, it pours, which matters even on the sea.

Thanks to bug reports by Barry Rountree and Jan Grant, via the 2020 Critical Code Studies Working Group (CCSWG), there is now another new version of Sea and Spar Between which includes additional bug fixes affecting the interface as well as the generation of language.

As before, all the files in this version, 1.0.2, are available in a zipfile, for those who care to study or modify them.

Sea and Spar Between 1.0.1

Stephanie Strickland and I published the first version of Sea and Spar Between in 2010, in Dear Navigator, a journal no longer online. In 2013 The Winter Anthology republished it. That year we also provided another version of this poetry system for Digital Humanities Quarterly (DHQ), cut to fit the toolspun course, identical in terms of how it functions but including, in comments within the code, what is essentially a paper about the detailed workings of the system. In those comments, we wrote:

The following syllables, which were commonly used as words by either Melville or Dickinson, are combined by the generator into compound words.

However, due to a programming error, that was not the case. In what we will now have to call Sea and Spar Between 1, the line:

syllable.concat(melvilleSyllable);

does not accomplish the purpose of adding the Melville one-syllable words to the variable syllable. It should have been:

syllable = syllable.concat(melvilleSyllable);

I noticed this omission only years later. As a result, the compound or kenning “toolspun” never was actually produced in any existing version of Sea and Spar Between, including the one available here on nickm.com. This was a frustrating situation, but after Stephanie and I discussed it briefly, we decided that we would wait to consider an updated version until this defect was discovered by someone else, such as a critic or translator.
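The behavior behind the bug is easy to reproduce. Here is a hypothetical Python analogue: like JavaScript’s Array.prototype.concat, concatenating Python lists returns a new list and leaves the original untouched, so the Melville words are silently lost unless the result is assigned back:

```python
# Python analogue of the version 1 bug: concatenation returns a new list.
syllable = ["sea", "spar"]
melvilleSyllable = ["whale", "mast"]

syllable + melvilleSyllable          # result discarded, as in version 1
assert syllable == ["sea", "spar"]   # unchanged: the Melville words are lost

syllable = syllable + melvilleSyllable   # the fix: assign the result back
assert syllable == ["sea", "spar", "whale", "mast"]
```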

It took a while, but a close reading of Sea and Spar Between by Aaron Pinnix, who considered the system’s output rather than its code, has finally brought this to the surface. Pinnix is writing a critique of several ocean-based works in his Fordham dissertation. We express our gratitude to him.

The result of adding 11 characters to the code (obviously a minor sort of bug fix, from that perspective) makes a significant difference (to us, at least!) in the workings of the system and the text that is produced. It restores our intention to bring Dickinson’s and Melville’s language together in this aspect of text generation. We ask that everyone reading Sea and Spar Between use the current version.

Updated 2020-02-02: Version 1.0.2 is now out, as explained in this post.

We do not have the ability to change the system as it is published in The Winter Anthology or DHQ, so we are presenting Sea and Spar Between 1.0.2 here on nickm.com. The JavaScript and the “How to Read” page indicate that this version, which replaces the previous ones, is 1.0.2.

Updated 2020-02-02: Version 1.0.2 is the current one now and the one which we endorse. If you wish to study or modify the code in Sea and Spar Between and would like the convenience of downloading a zipfile, please use this version 1.0.2.

Previous versions, not endorsed by us: Version 1 zipfile, and version 1.0.1 zipfile. These would be only of very specialized interest!

Incidentally, there was another mistake in the code that we discovered after the 2010 publication and before we finished the highly commented DHQ version. We decided not to alter this part of the program, as we still approved of the way the system functioned. Those interested are invited to read the comments beginning “While the previous function does produce such lines” in cut to fit the toolspun course.

Nano-NaNoGenMo or #NNNGM

Ah, distinctly I remember it was in the bleak November;
And each separate bit and pixel wrought a novel on GitHub.

April may be the cruelest month, and now the month associated with poetry, but November is the month associated with novel-writing, via NaNoWriMo, National Novel Writing Month. Now, thanks to an offhand comment by Darius Kazemi and the work of Hugo van Kemenade, November is also associated with the computer-generation of novels, broadly speaking. Any computer program, together with its output of 50,000 or more words, qualifies as an entry in NaNoGenMo, National Novel Generation Month.

NaNoGenMo does have a sort of barrier to entry: People often think they have to do something elaborate, despite anyone being explicitly allowed to produce a novel consisting entirely of meows. Those new to NaNoGenMo may look up to, for instance, the amazingly talented Ross Goodwin. In his own attempt to further climate change, he decided to code up an energy-intensive GPT-2 text generator while flying on a commercial jet. You’d think that for his next trick this guy might hop in a car, take a road trip, and generate a novel using an LSTM RNN! Those who look up to such efforts — and it’s hard not to, when they’re conducted at 30,000 feet and also quite clever — might end up thinking that computer-generated novels must use complex code and masses of data.

And yet, there is so much that can be done with simple programs that consume very little energy and can be fully understood by their programmers and others.

Because of this, I have recently announced Nano-NaNoGenMo. On Mastodon and Twitter (using #NNNGM) I have declared that November will also be the month in which people write computer programs of at most 256 characters that generate novels of 50,000 words or more. These can use Project Gutenberg files, as they are named on that site, as input. Or, they can run without using any input.
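Both constraints are easy to check mechanically. Here is a minimal, hypothetical entry (not one of the entries below) in the spirit of the explicitly permitted all-meow novel:

```python
# A minimal Nano-NaNoGenMo sketch: the program text fits in 256 characters,
# and its output clears the 50,000-word minimum.
program = "print('meow ' * 50000)"
assert len(program) <= 256           # Nano-NaNoGenMo size limit

novel = "meow " * 50000
assert len(novel.split()) >= 50000   # NaNoGenMo word-count minimum
```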

I have produced three Nano-NaNoGenMo (or #NNNGM) entries for 2019. In addition to being not very taxing computationally, one of these happens to have been written on an extremely energy-efficient electric train. Here they are. I won’t gloss each one, but I will provide a few comments on each, along with the full code for you to look at right in this blog post, and with links to both bash shell script files and the final output.

OB-DCK; or, THE (SELFLESS) WHALE


perl -0pe 's/.?K/**/s;s/MOBY(.)DI/OB$1D/g;s/D.r/Nick Montfort/;s/E W/E (SELFLESS) W/g;s/\b(I ?|me|my|myself|am|us|we|our|ourselves)\b//gi;s/\r\n\r\n/
/g;s/\r\n/ /g;s//\n\n/g;s/ +/ /g;s/(“?) ([,.;:]?)/$1$2/g;s/\nEnd .//s’ 2701-0.txt #NNNGM

WordPress has mangled this code despite it being in a code element; use the following link to obtain a runnable version of it:

OB-DCK; or, THE (SELFLESS) WHALE code

OB-DCK; or, THE (SELFLESS) WHALE, the novel

The program, performing a simple regular expression substitution, removes all first-person pronouns from Moby-Dick. Indeed, OB-DCK is “MOBY-DICK” with “MY” removed from MOBY and “I” from DICK. Chapter 1 begins:

Call Ishmael. Some years ago—never mind how long precisely—having little or no money in purse, and nothing particular to interest on shore, thought would sail about a little and see the watery part of the world. It is a way have of driving off the spleen and regulating the circulation. Whenever find growing grim about the mouth; whenever it is a damp, drizzly November in soul; whenever find involuntarily pausing before coffin warehouses, and bringing up the rear of every funeral meet; and especially whenever hypos get such an upper hand of , that it requires a strong moral principle to prevent from deliberately stepping into the street, and methodically knocking people’s hats off—then, account it high time to get to sea as soon as can. This is substitute for pistol and ball. With a philosophical flourish Cato throws himself upon his sword; quietly take to the ship. There is nothing surprising in this. If they but knew it, almost all men in their degree, some time or other, cherish very nearly the same feelings towards the ocean with .

Because Ishmael is removed as the “I” of the story, on a grammatical level there is (spoiler alert!) no human at all left at the end of the book.
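The heart of the transformation can be sketched in Python. This is a simplified, hypothetical rendering of the pronoun-stripping substitution; the actual entry is the Perl one-liner above, which also handles the title, byline, paragraph reflowing, and other cleanup:

```python
import re

# Simplified sketch of the first-person-pronoun removal in OB-DCK.
PRONOUNS = r'\b(I|me|my|myself|am|us|we|our|ourselves)\b'

def strip_first_person(text):
    text = re.sub(PRONOUNS, '', text, flags=re.IGNORECASE)
    return re.sub(r' +', ' ', text)   # collapse the doubled spaces left behind

print(strip_first_person("Call me Ishmael."))  # → "Call Ishmael."
```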

consequence


perl -e 'sub n{(unpack"(A4)*","backbodybookcasedoorfacefacthandheadhomelifenamepartplayroomsidetimeweekwordworkyear")[rand 21]}print"consequence\nNick Montfort\n\na beginning";for(;$i<12500;$i++){print" & so a ".n;if(rand()<.6){print n}}print".\n"' #NNNGM

consequence code

consequence, the novel

Using compounding of the sort found in my computer-generated long poem The Truelist and my “ppg 256-3,” this presents a sequence of things — sometimes formed from a single very common four-letter word, sometimes from two combined — that, it is stated, somehow follow from each other:

a beginning & so a name & so a fact & so a case & so a bookdoor & so a head & so a factwork & so a sidelife & so a door & so a door & so a factback & so a backplay & so a name & so a facebook & so a lifecase & so a partpart & so a hand & so a bookname & so a face & so a homeyear & so a bookfact & so a book & so a hand & so a head & so a headhead & so a book & so a face & so a namename & so a life & so a hand & so a side & so a time & so a yearname & so a backface & so a headface & so a headweek & so a headside & so a bookface & so a bookhome & so a lifedoor & so a bookyear & so a workback & so a room & so a face & so a body & so a faceweek & so a sidecase & so a time & so a body & so a fact […]

Too Much Help at Once


python -c "help('topics')" | python -c "import sys;print('Too Much Help at Once\nNick Montfort');[i for i in sorted(''.join(sys.stdin.readlines()[3:]).split()) if print('\n'+i+'\n') or help(i)]" #NNNGM

Too Much Help at Once code

Too Much Help at Once, the novel

The program looks up all the help topics provided within the (usually interactive) help system inside Python itself. Then, it asks for help on everything, in alphabetical order, producing 70k+ words of text, according to the GNU utility wc. The novel that results is, of course, an appropriation of text others have written; it arranges but doesn’t even transform that text. To me, however, it does have some meaning. Too Much Help at Once models one classic mistake that beginning programmers can make: Thinking that it’s somehow useful to read comprehensively about programming, or about a programming language, rather than actually using that programming language and writing some programs. Here’s the very beginning:

Too Much Help at Once
Nick Montfort

ASSERTION

The “assert” statement
**********************

Assert statements are a convenient way to insert debugging assertions
into a program:

assert_stmt ::= “assert” expression [“,” expression]

A plot

So far I have noted one other #NNNGM entry, A plot by Milton Läufer, which I am reproducing here in corrected form, according to the author’s note:


perl -e 'sub n{(split/ /,"wedding murder suspicion birth hunt jealousy death party tension banishment trial verdict treason fight crush friendship trip loss")[rand 17]}print"A plot\nMilton Läufer\n\n";for(;$i<12500;$i++){print" and then a ".n}print".\n"'

Related in structure to consequence, but with words of varying length that do not compound, Läufer’s novel winds through not four weddings and a funeral, but about, in expectation, 735 weddings and 735 murders in addition to 735 deaths, leaving us to ponder the meaning of “a crush” when it occurs in different contexts:

and then a wedding and then a murder and then a trip and then a hunt and then a crush and then a trip and then a death and then a murder and then a trip and then a fight and then a treason and then a fight and then a crush and then a fight and then a friendship and then a murder and then a wedding and then a friendship and then a suspicion and then a party and then a treason and then a birth and then a treason and then a tension and then a birth and then a hunt and then a friendship and then a trip and then a wedding and then a birth and then a death and then a death and then a wedding and then a treason and then a suspicion and then a birth and then a jealousy and then a trip and then a jealousy and then a party and then a tension and then a tension and then a trip and then a treason and then a crush and then a death and then a banishment […]

Share, enjoy, and please participate by adding your Nano-NaNoGenMo entries as NaNoGenMo entries (via the GitHub site) and by tooting & tweeting them!

A Web Reply to the Post-Web Generation

At the recent ELO conference in Montréal Leonardo Flores introduced the concept of “3rd Generation” electronic literature. I was at another session during his influential talk, but I heard about the concept from him beforehand and have read about it on Twitter (a 3rd generation context, I believe) and Flores’s blog (more of a 2nd generation context, I believe). One of the aspects of this concept is that the third generation of e-lit writers makes use of existing platforms (Twitter APIs, for instance) rather than developing their own interfaces. Blogging is a bit different from hand-rolled HTML, but one administers one’s own blog.

When Flores & I spoke, I realized that I have what seems like a very similar idea of how to divide electronic literature work today. Not exactly the same, I’m sure, but pretty easily defined and I think with a strong correspondence to this three-generation concept. I describe it like this:

  • Pre-Web
  • Web
  • Post-Web

To understand the way I’m splitting things up, you first have to agree that we live in a post-Web world of networked information today. Let me try to persuade you of that, to begin with.

The Web is now at most an option for digital communication of documents, literature, and art. It’s an option that fewer and fewer people are taking. Floppy disks and CD-ROMs also remain options, although they are even less frequently used. The norm today has more to do with app-based connectivity and less with the open Web. When you tweet, and when you read things on Twitter, you don’t need to use the Web; you can use your phone’s Twitter client. Facebook, Instagram, and Snapchat would be just fine if the Web were taken out behind the shed and never seen again. These are all typically used via apps, with the Web being at most an option for access.

The more companies divert use of their social networks from the Web to their own proprietary apps, the more they are able to shape how their users interact — and their users are their products, that which they offer to advertisers. So, why not keep moving these users, these products, into the better-controlled conduits of app-based communication?

Yes, I happen to be writing a blog entry right now — one which I don’t expect anyone to comment on, like they used to in the good old days. There is much more discussion of things I blog about on Twitter than in the comment section of my blog; this is evidence that we are in the post-Web era. People can still do Web (and even pre-Web) electronic literature and art projects. As Jodi put it in an interview this year, “You can still make websites these days.” This doesn’t change that we reached peak Web years ago. We live now in a post-Web era where some are still doing Web work, just as some are still doing pre-Web sorts of work.

In my view, the pre-Web works are ones in HyperCard and the original Mac and Windows Storyspace, of course. (It may limit your audience, but you can still make work in these formats, if you like!) Some early pieces of mine, such as The Help File (written in the standard Windows help system) and my parser-based interactive fiction, written in Inform 6, are also pre-Web. You can distribute parser-based IF on the Web, and you can play it in the browser, but it was being distributed on an FTP site, the IF Archive, before the Web became the prevalent means of distribution. (The IF Archive now has a fancy new Web interface.) Before the IF Archive, interactive fiction was sold on floppy disk. I consider the significant number of people making parser-based interactive fiction today to still be doing pre-Web electronic literature work that happens to be available on the Web, or sometimes in app form.

Also worth noting is that Rob Wittig’s Blue Company and Scott Rettberg’s Kind of Blue are best considered pre-Web works by my reckoning, as email, the form used for them, was in wide use before the Web came along. (HTML is used in these email projects for formatting and to incorporate illustrations, so the Web does have some involvement, but the projects are still mainly email projects.) The Unknown, on the other hand, is definitely an electronic literature work of the Web.

Twitterbots, as long as they last, are great examples of post-Web electronic literature, of course.

With this for preface, I have to say that I don’t completely agree with Flores’s characterization of the books in the Using Electricity series. It could be because my pre-Web/Web/post-Web concept doesn’t map onto his 1st/2nd/3rd generation idea exactly. It could also be that it doesn’t exactly make sense to name printed books, or for that matter installations in gallery spaces, as pre-Web/Web/post-Web. This type of division makes the most sense for work one accesses on one’s own computer, whether it got there via a network, a floppy disk, a CD-ROM, or some other way. But if we wanted to see where the affinities lie, I would have to indicate mostly pre-Web and Web connections; I think there is only one post-Web Using Electricity book that has been released or is coming out soon:

  1. The Truelist (Nick Montfort) is more of a pre-Web project, kin to early combinatorial poetry but taken to a book-length, exhaustive extreme.

  2. Mexica (Rafael Pérez y Pérez) is more of a pre-Web project based on a “Good Old-Fashioned AI” (GOFAI) system.

  3. Articulations (Allison Parrish) is based on a large store of textual data, Project Gutenberg, shaped into verse with two different sorts of vector-space analyses, phonetic and syntactical. While Project Gutenberg predates the Web by almost two decades, it became the large-scale resource that it is today in the Web era. So, this would be a pre-Web or Web project.

  4. Encomials (Ranjit Bhatnagar), coming in September, relies on Twitter data, and indeed the firehose of it, so is a post-Web/3rd generation project.

  5. Machine Unlearning (Li Zilles), coming in September, is directly based on machine learning on data from the open Web. This is a Web-generation project which wouldn’t have come to fruition in the walled gardens of the post-Web.

  6. A Noise Such as a Man Might Make (Milton Läufer), coming in September, uses a classic algorithm from early in the 20th century — one you could read about in Scientific American in the 1980s, and see working on USENET — to conflate two novels. It seems like a pretty clear pre-Web project to me.

  7. Ringing the Changes (Stephanie Strickland), coming in 2019, uses the combinatorics of change ringing and a reasonably small body of documents, although larger than Läufer’s two books. So, again, it would be pre-Web.
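
The "classic algorithm" behind A Noise Such as a Man Might Make is, presumably, a word-level Markov chain of the kind the 1980s travesty generators used; a minimal sketch of conflating two texts, with placeholder strings rather than Läufer's novels:

```python
import random
from collections import defaultdict

# A minimal word-level Markov chain: build a successor table from the
# two texts pooled together, then take a random walk through it.
def markov_conflation(text_a, text_b, length=30, seed=None):
    rng = random.Random(seed)
    words = (text_a + " " + text_b).split()
    successors = defaultdict(list)
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    word = rng.choice(words)
    output = [word]
    for _ in range(length - 1):
        choices = successors.get(word)
        # Dead end (a word with no recorded successor): restart anywhere.
        word = rng.choice(choices) if choices else rng.choice(words)
        output.append(word)
    return " ".join(output)

print(markov_conflation("the old man went down to the sea",
                        "the sea was cold and the night was old"))
```

Because the successor table mixes both sources, the walk drifts from one text to the other wherever they share a word, which is what conflates them.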

Having described the “generational” tendencies of these computer-generated books, I’ll close by mentioning one of the implications of the three-part generational model, as I see it, for what we used to call “hypertext.” The pre-Web allowed for hypertexts that resided on one computer, while the Web made it much more easily possible to update a piece of hypertext writing, collaborate with others remotely, release it over time, and link out to external sites.

Now, what has happened to hypertext in the post-Web world? Just to stick to Twitter, for a moment: You can still put links into tweets, but corporate enclosure of communications means that the wild wild wild linking of the Web tends to be more constrained. Links in tweets look like often-cryptic partial URLs instead of looking like text, as they do in pre-Web and Web hypertexts. You essentially get to make a Web citation or reference, not build a hypertext, by tweeting. And hypertext links have gotten more abstruse in this third, post-Web generation! When you’re on Twitter, you’re supposed to be consuming that linear feed — automatically produced for you in the same way that birds feed their young — not clicking away of your own volition to see what the Web has to offer and exploring a network of media.

The creative bots of Twitter (while they last) do subvert the standard orientation of the platform in interesting ways. But even good old-fashioned hypertext is reined in by post-Web systems. If there’s no bright post-post-Web available, I’m willing to keep making a blog post now and then, and am glad to keep making Web projects — some of which people can use as sort of free/libre/open-source 3rd-generation platforms, if they like.

Video of My PRB Reading

Thanks to host Joseph Mosconi, I read at the Poetics Research Bureau in Los Angeles from two recent computer-generated books. Sophia Le Fraga and Aaron Winslow read with me that evening, July 21.

I have now posted 360 video of my readings of both The Truelist and Hard West Turn.

Montfort’s Poetics Research Bureau reading of July 21, 2018

I read from The Truelist (Counterpath, 2017). The Truelist is available as an offset-printed book from Counterpath, as a short, deterministic, free software program that generates the full text of the book, and as a free audiobook, thanks to the generosity of the University of Pennsylvania’s Kelly Writers House, its Wexler Studio, and PennSound.

After this, I read from Hard West Turn (Bad Quarto, 2018), a computer-generated novel about gun violence in the United States, the first of a series. Each novel, copy-edited by the author/programmer, will be re-generated annually for release on July 4. Hard West Turn (2018) is available in print in a very limited edition, only 13 copies for sale + 3 artist’s proofs. The short free software program that generated the text is available as well. The first draft of this project was done as a NaNoGenMo (National Novel Generation Month) program in November 2017.

Concise Computational Literature is Now Online in Taper

I’m pleased to announce the release of the first issue of Taper, along with the call for works for issue #2.

Taper is a DIY literary magazine that hosts very short computational literary works — in the first issue, sonic, visual, animated, and generated poetry that is no more than 1KB, excluding comments and the standard header that all pages share. In the second issue, this constraint will be relaxed to 2KB.
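
The byte-count constraint is easy to check mechanically. Here is a rough sketch of such a check (my own, handling only block and HTML comments; it is not the magazine's actual validation procedure):

```python
import re

# Count the bytes of a poem's source after stripping /* ... */ and
# <!-- ... --> comments (a rough stand-in for "excluding comments").
def poem_bytes(source: str) -> int:
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.S)
    source = re.sub(r"<!--.*?-->", "", source, flags=re.S)
    return len(source.encode("utf-8"))

print(poem_bytes("<script>/* a note */document.write('hi')</script>"))
```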

The first issue has nine poems by six authors, which were selected by an editorial collective of four. Here is how this work looked when showcased today at our exhibit in the Trope Tank:

“Weights and Measures” and “for the pool players at the Golden Shovel,” Lillian-Yvonne Bertram
“193” and “ArcMaze,” Sebastian Bartlett
“Alpha Riddims,” Pierre Tchetgen and “Rise,” Angela Chang
“US” and “Field,” Nick Montfort
“God,” Milton Läufer

This issue is tiny in size and contains only a small number of projects, but we think they are of very high quality and interestingly diverse. This first issue of Taper also lays the groundwork for fairly easy production of future issues.

The next issue will have two new editorial collective members, but not me, as I focus on my role as publisher of this magazine through my very small press, Bad Quarto.

Using Electricity readings, with video of one

I’m writing now from the middle of a four-city book tour which I’m on with Rafael Pérez y Pérez and Allison Parrish – we are the first three author/programmers to develop books (The Truelist, Mexica, and Articulations) in this Counterpath series, Using Electricity.

I’m taking the time now to post a link to video of a short reading that Allison and I did at the MLA Convention, from exactly a month ago. If you can’t join us at an upcoming reading (MIT Press Bookstore, 2018-02-06 6pm or Babycastles in NYC, 2018-02-07 7pm) and have 10 minutes, the video provides an introduction to two of the three projects.

Rafael wasn’t able to join us then; we are very glad he’s here from Mexico City with us this week, and has read with us in Philadelphia and Providence so far!

Author Function

The exhibit Author Function, featuring computer-generated literary art in print, is now up in MIT’s Rotch Library (77 Mass Ave, Building 7, 2nd Floor) and in my lab/studio, The Trope Tank (Room 14N-233, in building 14, the same building that houses the Hayden Library). Please contact me by email if you are interested in seeing the materials in the Trope Tank, as this part of the exhibit is accessible by appointment only.

There are three events associated with the exhibit happening in Cambridge, Mass:

February 7, 6pm-7pm, a reading and signing at the MIT Press bookstore. Nick Montfort, Rafael Pérez y Pérez, and Allison Parrish.

March 5, 4:30pm-6pm, a reception at the main part of the exhibit in the Rotch Library.

March 5, 7pm-8pm, a reading and signing at the Harvard Book Store. John Cayley, Liza Daly, Nick Montfort, and Allison Parrish.

In addition to a shelf of computer-generated books that is available for perusal, by appointment, in the Trope Tank, the following items of printed matter are displayed in the exhibit:

  • 2×6, Nick Montfort, Serge Bouchardon, Andrew Campana, Natalia Fedorova, Carlos León, Aleksandra Małecka, and Piotr Marecki
  • A Slow Year: Game Poems, Ian Bogost
  • Action Score Generator, Nathan Walker
  • American Psycho, Mimi Cabell and Jason Huff
  • Anarchy, John Cage
  • Articulations, Allison Parrish
  • Autopia, Nick Montfort
  • Brute Force Manifesto: The Catalog of All Truth, Version 1.1, Series AAA-1, Vol 01, Brian James
  • Clear Skies All Week, Alison Knowles
  • Firmy, Piotr Puldzian Płucienniczak
  • for the sleepers in that quiet earth., Sofian Audry
  • From the Library of Babel: Axaxaxas Mlo – The Combed Thunderclap LXUM,LKWC – MCV – The Plaster Cramp, Christian Bök
  • Generation[s], J.R. Carpenter
  • Google Volume 1, King Zog
  • How It Is In Common Tongues, Daniel C. Howe and John Cayley
  • Incandescent Beautifuls, Erica T. Carter [Jim Carpenter]
  • Irritant, Darby Larson
  • Love Letters, Letterpress Broadside, Output by a reimplementation of a program by Christopher Strachey
  • Mexica: 20 Years – 20 Stories / 20 años – 20 historias, Rafael Pérez y Pérez
  • My Buttons Are Blue: And Other Love Poems From the Digital Heart of an Electronic Computer, A Color Computer
  • My Molly [Departed], Talan Memmott
  • no people, Katie Rose Pipkin
  • Phaedrus Pron, Paul Chan
  • Puniverse, Volumes 32 and 38 of 57, Stephen Reid McLaughlin
  • Re-Writing Freud, Simon Morris
  • Seraphs, Liza Daly
  • The Appearances of the Letters of the Hollywood Sign in Increasing Amounts of Smog and at a Distance, Poster, David Gissen
  • The Policeman’s Beard is Half Constructed: Computer prose and poetry by Racter
  • The Truelist, Nick Montfort
  • Tristano, Nanni Balestrini
  • Written Images, Eds. Martin Fuchs and Peter Bichsel

Here are some photos documenting the exhibit:

Author Function Rotch main display case

Author Function book displays and gallery walls


Sentaniz Nimerik, E-Lit in Haitian Creole

A week ago, on October 2, we put Sentaniz Nimerik online. This electronic literature work, an example of digital storytelling and digital poetry, is by Sixto & BIC and was facilitated by Michel DeGraff & Nick Montfort. It is in Haitian Creole — Kreyòl, as the language is called in the language itself. This language has a community of about 12 million speakers worldwide and is the language shared by everyone in Haiti. It is not the same as Haitian French, nor is it mutually intelligible with Haitian French (or any other kind of French).

You can read more about Maurice Sixto, a famous Haitian storyteller who died in 1984, on Wikipedia, in English — of course there is an entry in Haitian Creole as well. His story “Sentaniz,” well-known in Haiti, is the storytelling basis for our digital work.

BIC is a singer, songwriter, and poet who is also known as B.I.C. (Brain. Intelligence. Creativity.) He came to MIT to work on this project with us and to do a concert, which was very well-attended. His songs and poems are mostly in Haitian Creole, some in French, and none in English — although BIC is fluent in English and has worked as an English teacher.

Professor Michel DeGraff is a linguist and is my colleague at MIT. Among other things, he heads the MIT-Haiti Initiative and works to advance STEM education in the Boston area in schools where education is in Haitian Creole.

We (BIC, Michel DeGraff, and I) sat down together and looked at and discussed several simple JavaScript poems, some historical, some of mine, some done by others recently. We settled on “Through the Park” (a work of mine from 2008) as a starting point for our collaboration. We changed several things about the workings of the page, and the text used in this piece is also a new text related to “Sentaniz,” not any sort of translation of anything I have written.

To make concrete a few of the formal and conceptual differences: The final result presents two generated versions, one after the other. The underlying “story” is not only a story that originated in Haitian Creole; it has been elaborated into its digital version with frame statements and questions that do not correspond to anything in “Through the Park.” The visual design is simple, but also a bit different from that of the earlier piece.
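
For those curious about the mechanism we started from, "Through the Park" style generation can be sketched like this (placeholder sentences, not the Kreyòl text; the actual piece is a JavaScript Web page):

```python
import random

# A fixed sequence of self-contained sentences; each generated version
# keeps a random subset of them, in order, joined by ellipses.
SENTENCES = ["Sentence one.", "Sentence two.", "Sentence three.",
             "Sentence four.", "Sentence five.", "Sentence six."]

def generate(keep_probability=0.5):
    kept = [s for s in SENTENCES if random.random() < keep_probability]
    return " ... ".join(kept)

# The finished piece presents two generated versions, one after the other.
print(generate())
print(generate())
```

The "no reference between sentences" constraint mentioned below matters here: because any sentence may be dropped, no sentence can depend on another having appeared.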

To be more specific about our roles in the project, for the most part I dealt with the JavaScript code, Michel typed in what was to be written in Haitian Creole (using my different keyboard layout), and BIC said what lines we should use. But Michel and BIC consulted about particular phrasings, as you might expect, and all of us talked a bit about the types of sentences that could be used, the linguistic constraint (no reference between sentences), and the design and functioning of the page.

We spent a while in discussion beforehand, and did some work to polish the project after the three of us met, but BIC was only at MIT for one full day. It took us about an hour to actually do the core creative and development work on Sentaniz Nimerik. The project was thanks to many people and offices at MIT, with the main support for BIC’s trip coming from CAMIT, the Council for the Arts at MIT.

I recorded a video of Michel DeGraff explaining the piece (in Haitian Creole) and have posted that on YouTube with a CC license. He explains how to “view source” and that the piece can be studied and modified. The piece itself, although very short, is released under an explicit all-permissive license to make it clear that it is available to everyone for any purpose. I hope people in Haiti and speakers of Haitian Creole elsewhere will enjoy it and develop many new ideas, stories, and poems.

The Gathering Cloud

The Gathering Cloud, J. R. Carpenter, 2017. (I was given a review copy of this book.)

J.R. Carpenter’s book is an accomplishment, not just in terms of the core project, but also by virtue of how the codex is put together. The introduction is by Jussi Parikka, the after-poem by Lisa Robertson. While social media and ethereal imaginations of the network keep us from being lonely as a cloud these days, they obscure the material nature of computing, the cost of linking us in terms of wire and heat. Carpenter’s computer-generated Generation[s] was concerned with the computational production of text; The Gathering Cloud also engages with the generation of power. This book and the corresponding digital performance, for instance at the recent ELO Festival in Porto, yield up the rich results of research, cast in accomplished verse. As with Carpenter’s other work rooted in zines and the handmade Web, it is personal rather than didactic. Deftly, despite the gravity of the topic, the book affects the reader with a gesture, not a deluge of facts — more by waving than drowning.

My @party Talk on Computer-Generated Books

I just gave a talk at the local demoparty, @party. While I haven’t written out notes and it wasn’t recorded, here are the slides. The talk was “Book Productions: The Latest in Computer-Generated Literary Art,” and included some discussion of how computer-generated literary books related to demoscene productions.

Digital Lengua, the launch of 2×6 and Autopia, Nov 20 in NYC

Clouds of Digital Lengua palabras

Digital Lengua – Babycastles, 137 West 14th St, Manhattan –
5:30pm Sunday November 20

This reading of computer-generated literature in English and Spanish
serves as the global book launch for two titles:

2×6
Nick Montfort, Serge Bouchardon, Andrew Campana, Natalia Fedorova,
Carlos León, Aleksandra Małecka, Piotr Marecki
Les Figues, Los Angeles: Global Poetics Series
http://lesfigues.com/book/2×6/
256 pp.

Autopia
Nick Montfort
Troll Thread, New York
http://trollthread.tumblr.com/post/152339108524/nick-montfort-autopia-troll-thread-2016-purchase
256 pp.

Montfort will read from these two books, reading English and Spanish
texts from 2×6. Paperback copies will be available for purchase. The
short programs that generated these books are printed in the books and also
available as free software online.

Läufer will read from his projects Bigrammatology and WriterTools™, in
both cases, in Spanish and English.

Montfort and Läufer will read from work done as part of the Renderings
project and as part of another project, Heftings.

The Renderings project, organized by Montfort and based at his
lab, The Trope Tank, involves locating computational literature (such as
poetry generating computer programs) from around the globe and translating
these works into English. Läufer and Montfort will read from two
Spanish-language poetry generators, from Argentina and Spain, and from
translations of them.

The Heftings project, also organized by Montfort through The
Trope Tank, involves making attempts, often many, at translating conceptual,
constrained, concrete & visual, and other types of literary art that are
generally considered to be impossible to translate. Montfort and Läufer will
read from some short works that are originally in Spanish or English and
works that have Spanish or English translations.

Nick Montfort develops computational art and poetry, often
collaboratively. His poetry books are #!, Riddle & Bind, and
Autopia;
he co-wrote 2002: A Palindrome Story and 2×6. His
more than fifty digital projects, at http://nickm.com, include the
collaborations The Deletionist, Sea and Spar Between, and the
Renderings
project. His collaborative and individual books from the MIT
Press are: The New Media Reader, Twisty Little Passages, Racing the Beam,
10 PRINT CHR$(205.5+RND(1)); : GOTO 10,
and most recently Exploratory
Programming for the Arts and Humanities.
He lives in New York and
Boston, offers naming services as Nomnym, and is a professor at MIT.

Milton Läufer is an Argentinian writer, journalist and teacher.
Currently he is doing a PhD at New York University focused on digital
literature in Latin America. He is the 2016-2017 writer-in-residence of
The Trope Tank
, MIT. In 2015 he published Lagunas, a partially
algorithmically generated novel, which —like most of his work— is available online
at http://www.miltonlaufer.com.ar. He has participated in art exhibitions in
Latin America, the US and Europe. He lives in Brooklyn.

Digital Lengua – Babycastles, 137 West 14th St, Manhattan – 5:30pm Sunday, November 20

This reading of computer-generated literature in Spanish and English
will also serve as the launch for the following two titles:

2×6
Nick Montfort, Serge Bouchardon, Andrew Campana, Natalia Fedorova,
Carlos León, Aleksandra Małecka, Piotr Marecki
Les Figues, Los Angeles: Global Poetics Series
http://lesfigues.com/book/2×6/
256 pp.

Autopia
Nick Montfort
Troll Thread, New York
http://trollthread.tumblr.com/post/152339108524/nick-montfort-autopia-troll-thread-2016-purchase
256 pp.

Montfort will read from both books, in Spanish and English in the case of
2×6. Printed copies will be available for purchase. The short programs
that generate the text are included in those books and are also available
online as free (libre and gratis) software.

Läufer will read from his projects Bigrammatology and WriterTools™, in
Spanish and English in both cases.

The authors will also read from work done as part of the Renderings and
Heftings projects.

The Renderings project, organized by Montfort and based at his lab,
The Trope Tank, involves seeking out computational literature (such as
poetry-generating computer programs) from around the globe and translating
these projects into English. Läufer and Montfort will read from two
Spanish-language poetry generators, one from Argentina and one from Spain,
as well as from their translations.

The Heftings project, also organized by Montfort through The Trope Tank,
consists of making attempts, often many, to translate conceptual,
constrained, concrete & visual, and other literary works generally
considered impossible to translate. Montfort and Läufer will read some
short works originally in Spanish or English, along with works that have
Spanish or English translations.

Nick Montfort develops computational art and poetry, often in
collaboration. His poetry books include #!, Riddle & Bind, and Autopia;
in collaboration, he wrote 2002: A Palindrome Story and 2×6. His more
than fifty digital projects, at http://nickm.com, include the
collaborations The Deletionist, Sea and Spar Between, and Renderings,
a project centered on translation. His MIT Press books are The New
Media Reader, Twisty Little Passages, Racing the Beam,
10 PRINT CHR$(205.5+RND(1)); : GOTO 10,
and most recently Exploratory Programming for the Arts and Humanities.
He lives in New York and Boston, offers naming services as Nomnym, and
is a professor at MIT.

Milton Läufer is an Argentine writer, journalist, and teacher. He is
currently pursuing a PhD at New York University focused on digital
literature in Latin America. He is the 2016–2017 writer-in-residence of
The Trope Tank, MIT. In 2015 he published Lagunas, a novel generated
partially by algorithms, which, like the rest of his digital literature
work, is available at http://www.miltonlaufer.com.ar. He has taken part
in art exhibitions in Latin America, the United States, and Europe. He
lives in Brooklyn.