The OUTPUT Anthology is Out!

OUTPUT: An Anthology of Computer-Generated Text 1953–2023 in a hand model’s hands (or an AI facsimile thereof?)

I’m delighted that after more than four years of work by Lillian-Yvonne Bertram and myself — we’re co-editors of this book — the MIT Press and Counterpath have jointly published

Output: An Anthology of Computer-Generated Text, 1953–2023

Book launch events are posted here and will be updated as new ones are scheduled!

This anthology spans seven decades of computer-generated text, beginning before the term “artificial intelligence” was even coined. While not restricted to poetry, fiction, and other creative projects, it reveals the rich work that has been done by artists, poets, and other sorts of writers who have taken computing and code into their own hands. The anthology includes examples of powerful and principled rhetorical generation along with story generation systems based on cognitive research. There are examples of “real news” generation that has already been informing us — along with hoaxes and humor.

Page spread from OUTPUT with Everest Pipkin’s i’ve never picked a protected flower

Page spread from OUTPUT with Talan Memmott’s Self Portrait(s) [as Other(s)]

Page spread from OUTPUT with thricedotted’s The Seeker

It’s all contextualized by brief introductions to each excerpt, longer introductions to each fine-grained genre of text generation, and an overall introduction that Lillian-Yvonne and I wrote. There are 200 selections in the 500-page book, which we hope will be a valuable sourcebook for academics and students — but also a way for general readers to learn about innovations in computing and writing.

You can buy Output now from several sources. I suggest your favorite independent bookseller! If you’re in the Boston area, stop by the MIT Press Bookstore, which had 21 copies on hand as of this writing and, as of actually publishing this post, has 14!

Upcoming Book Launches & Talks

January 13 (Monday) “The Output Anthology at Computer-Generated Text’s Cultural Crux”, a talk of mine at the UCSC Computational Media Colloquium, Engineering 2 Room 280, 12:30pm–1:30pm. Free & open to the public.

January 20 (Monday) Toronto book launch with me, Matt Nish-Lapidus, Kavi Duvvoori, and others TBA at the University of Toronto’s Centre for Culture & Technology, 6pm–7:30pm. Free & open to the public.

March 11 (Tuesday) Massachusetts Institute of Technology book launch with the editors, MIT’s Room 32-155, 5pm–6:30pm. Free & open to the public. Book sales thanks to the MIT Press Bookstore.

Previous Events

November 11 (Monday): Both editors spoke at the University of Virginia, Bryan Hall, Faculty Lounge, Floor 2. Free & open to the public. 5pm.

November 20 (Wednesday): Online book launch for Output, hosted by the University of Maryland. Both editors in conversation with Matt Kirschenbaum. Free, register on Zoom. 12noon Eastern Time.

November 21 (Thursday) Book launch at WordHack with me, David Gissen, Sasha Stiles, Andrew Yoon, and open mic presenters. Wonderville, 1186 Broadway, Brooklyn, 7pm. $15. Book sales.

December 6 (Friday) Output will be available for sale and I’ll be at the Bad Quarto / Nick Montfort table at Center for Book Arts Winter Market, 28 W 27th St Floor 3, 4pm–8pm.

December 9 (Monday) Book launch at Book Club Bar with the editors, Charles Bernstein, Robin Hill, Stephanie Strickland, and Leonard Richardson. 197 E 3rd St (at Ave B), New York City’s East Village. Free, RSVP required. 8pm. Book sales thanks to Book Club.

December 13 (Friday) European book launch with the editors, Scott Rettberg, and Tegan Pyke. University of Bergen’s Center for Digital Narrative, Langesgaten 1-2, 3:30pm. Free & open to the public, book sales thanks to Akademika. This event was streamed & recorded and is available to view on YouTube.

Gram’s Fairy Tales: Manual & Grimoire

Taper, an online literary magazine published twice yearly, is now in its 12th issue. An independent editorial collective (Kyle Booten, Angela Chang, Kavi Duvvoori, Leonardo Flores, Helen Shewolfe Tseng, and Andy Wallace, for this issue) makes all the decisions about selections, themes for forthcoming issues, and so on, and also handles all communication with authors. They do all the work! Editors are allowed to submit works, in which case they recuse themselves from the collective’s discussion. I’m proud to be publisher of this magazine — although it’s really the editorial collective that makes it happen.

My “Gram’s Fairy Tales” was selected for this issue, with “Tools” as its theme!

A story grammar in a textarea (below) and a generated story about a heroic lily, a villainous droplet, and a chanterelle who helps the hero.

For those who haven’t visited Taper, when you do, you’ll find that the poems are computational and are no more than 2KB (2048 bytes) in size. (Issue #1 hosted 1KB poems and issue #3 had ones that were 3KB, but Goldilocks found the ideal constraint in 2KB.) Issue #12 features a lot of general-purpose programs that are true tools, textual or otherwise, as well as dynamic text generators that speak to the theme. In fact, the issue is our largest yet, with 32 selections.

To read what the author/programmers say about their work, in their statements, you generally have to “View Page Source” (or choose the similar option in your browser) and look at a comment at the top of each page; each work is licensed as free (libre) software, so you can make use of it in any way you like. However, given the theme of this issue, I wrote “Gram’s Fairy Tales” not only to give access to the underlying HTML5, but also to let people write grammars that can be placed in the textarea and used to generate stories. Given this, I’m going to pull a bit from my “author statement,” found in complete form in the comments, and post it here. I’ll also provide three novel story grammars written by others: Jhave Johnston (author of ReRites), Kyle Booten (author of Salon des Fantômes), and Kavi Duvvoori (author of Common Is That They).


Two Inspiring Systems, and Instructions

As I developed “Gram’s Fairy Tales,” I was thinking about two remarkable story generation systems, very different ones, both quite simple.

One is the mid-1980s Story Machine, which I’ve known about for a while in its Commodore 64 incarnation. It is a strangely animistic system in which any entity can perform any transitive or intransitive action. While there are multimedia elements and some notion of state, it essentially develops narratives one independent sentence at a time.

The other is a system by Joseph Grimes, operational in Mexico City in 1963. Only one text from this story generator is documented. It seems to have produced stories based on a story grammar, with formalist structures. So, a hero would be challenged in some way and overcome the challenge. The one-page magazine article about the system suggested that it could be called “Grimes’s Fairy Tales.”

“Gram’s Fairy Tales” uses a simple grammar to generate stories. There are special “hero” and “villain” tokens, which always refer to the same two entities, selected from the “entity” rule. You don’t have a pool of good guys and bad guys; all entities are equally likely to end up as heroes or villains.

Beyond those special tokens, the other nonterminal tokens (the lowercase ones) expand according to rules, with each represented as one line in the text field.

The “|” indicates alternatives; these are split apart first.

The “+” indicates a conjunction of elements; all of those will appear.

Tokens keep getting expanded until eventually there is a run of text, which in my example grammars is in all caps. These are “terminals,” and they end up being presented as is, without any further transformation.

Your grammar needs to begin with a “story” rule, and it should have “adj” and “entity” rules so a hero and villain can be determined. Anything else is up to you. Indeed, you don’t actually have to refer to a hero or villain.

My terminals are in all caps (making for an all-caps story) simply because it’s tricky to get the capitalization of sentences right. Handling capitalization would make things harder for those who want to edit the grammar and create their own stories, and the grammar is already a bit tricky.

Spaces have to be included properly at the beginning and ending of terminals, for instance, except for the terminals that come at the very end of stories.

Also, if you have entities or adjectives that begin with a vowel sound, you’ll end up with grammatical mistakes. The solution in this simple tool (or toy): simply use adjectives and entities that all begin with consonant sounds!

To get an idea of how to write your own grammar, you may want to start with an extremely simple one, such as:

story>hero+ FOUGHT +villain+.|hero+ TOOK A NAP.
adj>CLEVER|STRONG
entity>FOX|CHICKEN
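
To make the expansion process concrete, here is a minimal sketch in Python of how a grammar in this format might be interpreted. It is not the actual JavaScript behind “Gram’s Fairy Tales”; in particular, the way the hero and villain are composed from an adjective and an entity is my assumption, made only for illustration.

import random

def parse(text):
    # Each line is name>alternative1|alternative2|...; each alternative is
    # a "+"-joined sequence of tokens and terminals.
    rules = {}
    for line in text.splitlines():
        if ">" in line:
            name, rhs = line.split(">", 1)
            rules[name] = [alternative.split("+") for alternative in rhs.split("|")]
    return rules

def expand(token, rules, fixed):
    if token in fixed:                # the special hero/villain tokens
        return fixed[token]
    if token in rules:                # a nonterminal: pick one alternative
        alternative = random.choice(rules[token])
        return "".join(expand(part, rules, fixed) for part in alternative)
    return token                      # a terminal: presented as is

def generate(rules):
    # The hero and villain are chosen once and reused throughout the story.
    # (The "THE adj entity" phrasing is an assumption, not the piece's code.)
    fixed = {}
    for role in ("hero", "villain"):
        fixed[role] = "THE " + expand("adj", rules, fixed) + " " + expand("entity", rules, fixed)
    return expand("story", rules, fixed)

grammar = parse("""story>hero+ FOUGHT +villain+.|hero+ TOOK A NAP.
adj>CLEVER|STRONG
entity>FOX|CHICKEN""")
print(generate(grammar))  # e.g., THE CLEVER FOX FOUGHT THE STRONG CHICKEN.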

Jhave’s Grammar: A Harmonious Moment

Here’s a striking project by Jhave Johnston, who is my colleague at the University of Bergen Center for Digital Narrative and winner of the Robert Coover Award. You’ll notice that Jhave wrote his grammar to enable proper capitalization, despite the difficulty of that. Although the right side of the text is hidden, you can select it all and paste it into “Gram’s Fairy Tales” to see how it works.

story>intro+conclusion
intro>In the heart of +emotion+, +entities+ +verb+ in +concept+|Abiding as  +emotion+, +entities+ +verb+ in +concept+|Before +emotion+, +entities+ +verb+ in +concept+|Imperturbable in +emotion+, +entities+ +verb+ in +concept+|Serendipitous as +emotion+, +entities+ +verb+ in +concept+|Surrounded by  +emotion+, +entities+ +verb+ in +concept+|Amidst +emotion+, +entities+ +verb+ in +concept+|Beyond +emotion+, +entities+ +verb+ in +concept+
emotion>serenity|peace|tranquility|joyousness|tenderness|compassion|love|radiance|integrity
entities>entities_nature+ and +entities_tech
entities_nature>brooks|forest|cascade|swamp|torrent|ravines|refugees|stars|orchards|plateaus|tendrils|alleyways|groves|mountains|rivers|fields|meadows|ridges|oceans|orchards|moons|blossoms|nebulae|subduction trenches
entities_tech>LEDs|torrents|media|villains|dollar-stores|ditches|pop-ups|server-farms|antennae|antagonists|satellites|philosophies|trinkets|video-walls|entrails|exiles|synapses|heroes|plastics
verb>merged|thrived|replenished|enjoyed|trembled|reveried|prospered
concept>precon+conce
precon>harmony, |wisdom's light, |exuberance, |cohesion, |unity, |solidarity, |changelessness, |insight, 
conce>seeking mutual actualization|observing mutual transcendence|flourishing with supple realization|witnessing collective discovery|imploding within perfect revelation|imploding within effortless actualization|observing ripe achievement|listening to shared success
conclusion>.

A custom grammar generates “Surrounded by peace, blossoms and antennae merged in solidarity, listening to shared success.”

Kyle Booten’s Grammar: How to Compose an Ode

Kyle is a longtime member of the Taper editorial collective — since issue #5 in fall 2020! — and has long been working to have computer text generation provoke his own (and others’) writing. His grammar produces instructions for writing odes of different sorts, and is the most elaborate by far, clocking in at 70 lines, with several that are essentially comments or blank. Again, you can select all of these and paste them into “Gram’s” without trouble, although some of the text on the right side is obscured.

Pindar Horace Hordar 
(An Ode-Gym)
Kyle Booten


story>HOW TO WRITE AN ODE: +ode_type
ode_type>pindaric|pindaric|pindaric|pindaric|horatian|horatian|horatian|horatian|blended



PINDARIC

pindaric> (TURN) +strophe+ (COUNTERTURN) +antistrophe+ (STAND) +epode
strophe>IN A STANZA OF +p_lines+ +short_or_long+ +meter1+ LINES, ADOPTING A +style1+ TENOR, DESCRIBE +hero+, WITH PARTICULAR ATTENTION TO THE +p_feature+ OF +hero+.
meter1>DACTYLLIC|DACTYLLIC|DACTYLLIC|QUASI-AEOLIC|LOGAOEDIC
meter2>meter_adjust+ +meter1|TROCHAIC|IAMBIC
meter_adjust>NOT|ANYTHING BUT|ONLY IMPERFECTLY
p_lines>6|8|8|10|12
short_or_long>BRIEF|BRIEF|BRIEF|LENGTHY|LENGTHY|LENGTHY|ABSURDLY LONG
p_feature>LAUREL|ORDER|VIRTUE|LAW|DECREE|STRENGTH|CRIME|FATE
style1>SOLEMN|MYSTICAL|GNOMIC|JAGGED|SUBLIME|MORALISTIC|GOVERNMENTAL|MUSCULAR|LAUDATORY|REVERENT|JUBILANT
antistrophe>USING THE SAME STANZAIC STRUCTURE BUT NOW +style_or_mentioning+, CONTRAPOSE +hero+ WITH ITS INVERSE AND ENEMY, +villain+.
style_or_mentioning>style2|MENTIONING +p_trait
style2>ZANY|PIXELATED|EUPHORIC|REVERENT|BLOODTHIRSTY|STERN|RECURSIVE|VITUPERATIVE|BRUTALIST|PARANOID|ANGUISHED
p_trait>THE STATE|A GOD|A WAR|A SPORT|A MORAL|A VIRTUE|A CONQUEST|A VICTORY|A NATURAL DISASTER
epode>NOW, WITH +subtract+ FEWER LINES, AND THESE +ep_style+, IMAGINE HOW +hero+ SHALL +do_to_villain+ +villain+.
ep_style>meter2|style1|style2|short_or_long|short_or_long+ AND +meter2|style1+ AND +style2|style2+ AND +meter2
subtract>2|2|2|3|3|3|4|4|5
do_to_villain>OVERCOME|OVERCOME|OVERCOME|REFORM|BANISH|DEMOTE|CENSURE|STEAL FROM|WED|WOUND|BECOME


HORATIAN

horatian>IN +stanzas+ STANZAS, EACH OF 4 LINES, DESCRIBE +hero+, +gradient+.|IN +stanzas+ STANZAS, EACH OF 4 LINES, DESCRIBE +hero+, +gradient+. +extra+.
gradient>PROGRESSING GENTLY FROM+ +concept_statement|PROGRESSING GENTLY FROM+ +concept_statement+ AND SIMULTANEOUSLY FROM +concept_statement2
concept_statement>concept+ TO +concept2
concept>THE +cognizing+ OF THE +abstraction+ OF +object|THE +cognizing2+ OF THE +abstraction2+ OF +object
concept2>THE +cognizing2+ OF THE +abstraction+ OF +object|THE +cognizing+ OF THE +abstraction2+ OF +object2
concept_statement2>GREEN TO BLUE|BLUE TO GREEN|WEALTH TO POVERTY|MORNING TO NIGHT|RAIN TO SLEEP|YOUTH TO OLD AGE|FALL TO SPRING|WINTER TO SUMMER
abstraction>PAIN|SADNESS|LOSS|HUMOR|FRAILTY|YEARNING
abstraction2>LOSS|GROWTH|MERCY|HISTORY|TIMELESSNESS|MORTALITY
cognizing>AWARENESS|FORGIVENESS|HATRED|NON-AWARENESS|APPRECIATION|MISUNDERSTANDING|SYMPATHY
cognizing2>ACCEPTANCE|PERCEPTION|REGRETTING|UNDERSTANDING
object>hero|hero|hero|hero|LIFE IN GENERAL
object2>hero|hero|hero|hero|villain|LIFE IN GENERAL
stanzas>3|4|4|6|6|8|10
extra>REMARK IN THE +count_line+ LINE UPON +extra_object|prosody
extra_object>THE +h_feature+ OF +hero|WINE|WINE|WINE|A MORSEL|A MORSEL|A FRIEND
h_feature>FLAVOR|SIZE|TEXTURE|FAMILY|POSITION|TEMPERATURE|ECHO|INTERIOR|FORM|TEMPERAMENT|HONOR|HUE
count_line>1ST|rel_num|rel_num+ TO LAST
prosody>MIMIC +poet+'S METER
poet>SAPPHO|ALCAEUS
rel_num>2ND|3RD|4TH


BLENDED

blended>bl1|bl2|bl3|bl4|bl5|bl6
bl1>[STANZA 1: +pindaric_bit+] [STANZA 2: +pindaric_bit+] [STANZA 3: +horatian_bit+]
bl2>[STANZA 1: +pindaric_bit+] [STANZA 2: +horatian_bit+] [STANZA 3: +pindaric_bit+]
bl3>[STANZA 1: +horatian_bit+] [STANZA 2: +pindaric_bit+] [STANZA 3: +pindaric_bit+]
bl4>[STANZA 1: +pindaric_bit+] [STANZA 2: +horatian_bit+] [STANZA 3: +horatian_bit+]
bl5>[STANZA 1: +horatian_bit+] [STANZA 2: +horatian_bit+] [STANZA 3: +pindaric_bit+]
bl6>[STANZA 1: +horatian_bit+] [STANZA 2: +pindaric_bit+] [STANZA 3: +horatian_bit+]
pindaric_bit>meter1|meter2|short_or_long|style1|style1|style_or_mentioning|strophe|ep_style
horatian_bit>gradient|gradient|extra|extra_object|cognizing|cognizing2|concept_statement|concept|concept2|concept_statement2|prosody


adj>VINTAGE|COMMON|CIVIC|BROKEN|HONEYED|PAINTED|SACRED|HUNGRY|VERNAL|FLEETING
entity>VASE|FRIEND|DOVE|DEER|FRUIT|LAKE|HOUSE|GAME|TUNE|STORM|GLEN|KNOT|VEIL|WRIT

Kavi Duvvoori’s Most Grammatical Grammar

Kavi joined the editorial collective of Taper for this issue (#12) and has really put their shoulder to the wheel! They (like Jhave & Kyle) have been producing serious computer-generated texts and adjacent projects, of course with particular twists. Their grammar uses story elements (hero, villain) and does tell a story — but one about grammars.

story>theory+.|theory+. +coda+.
theory>A +academic+ +critiqued+ +hero+, +noting+ +villain+ IS +really+ A +metaphor+|+villain+ WAS +endorsed+ BY A +academic+ WITH +hero+ USED AS A +vessel+ TO CATCH THE +metaphor|+villain+ IS ITSELF +really+ +surface+, ESPECIALLY +hero+|+hero+ BY A +academic+ IS +villain+ THAT DESCRIBES, FINALLY, THE +metaphor+
coda>+villain+ +ofthis+ IS THE +metaphor+ PUT IN A +vessel+ BY +hero+reversal+|+villain+ +ofthis+ IS +hero+reversal+
ofthis>OF THIS STORY|IN OUR TALE|FOR OUR PURPOSES HERE
reversal> (OR DO I HAVE IT BACKWARDS?)| (WAS IT REALLY THAT +coda+?)|
response>+critiqued+ THE +problem+ IN|+endorsed+
critiqued>FOUND A GAP IN|DISPUTED|ELABORATED TO APORIA
endorsed>REINFORCED|ELABORATED ON|SHOWN TO NOT POSSESS THE ALLEGED +problem+
academic>linguist|+specialty+ +linguist+|SOCIOLOGIST OF THE +academic+
specialty>COMPUTATIONAL|CRITICAL|CREATIVE|HISTORICAL
linguist>POET|LINGUIST|PHILOLOGIST|LITERARY THEORIST
problem>LACK OF RIGOUR|IDEALISM|CONTRADICTIONS|PROBLEMS
noting>NOTING THAT|SHOWING HOW|FOR
really>REALLY|IN FACT|IN TRUTH
metaphor>image+ ON +surface|churn+ +metaphor
image>SHADOW|REFLECTION|PICTURE
vessel>NET|SIEVE|DISPLAY CASE
churn>SHIFTING|FLOWING|CHURNING
surface>A BUILDING'S WALL|A PAGE|SKIN|+churn+ +flow
flow>CURRENTS|WATER|LEAVES|WIND|MUD
adj>CHOMSKYAN|PANINIAN|MOSTLY ADELE GOLDBERG-INSPIRED|PRIMARILY IRENE HEIM-DERIVED|MONTAGUE|LAKOFFIAN|CHARLES SANDERS PIERCE-ROOTED
entity>GRAMMAR|MODEL|LOGIC|FORMALISM

“A FORMALISM IS ITSELF IN TRUTH A PAGE, ESPECIALLY A LAKOFFIAN MODEL.” Plus a grammar.

Poetix

February 14 is Valentine’s Day for many; this year, it’s also Ash Wednesday for Western Christians, both Orthodox and unorthodox. Universally, it is Harry Mathews’s birthday. Harry, who would have been 94 today, was an amazing experimental writer. He’s known to many as the first American to join the Oulipo.

Given the occasion, I thought I’d write a blog post, which I do very rarely these days, to discuss my poetics — or, because mine is a poetics of concision, my “poetix.” Using that word saves one byte. The term may also suggest an underground poetix, as with comix, and this is great.

Why write poems of any sort?

Personally, I’m an explorer of language, using constraint, form, and computation to make poems that surface aspects of language. As unusual qualities emerge, everything that language is entangled with also rises up: Wars, invasions, colonialism, commerce and other sorts of exchange between language communities, and the development of specialized vocabularies, for instance.

While other poets have very different answers, which very often include personal expression, this is mine. Even if I’m an unusual conceptualist, and more specifically a computationalist, I believe my poetics have aspects that are widely shared by others. I’m still interested in composing poems that give readers pleasure, for instance, and that awaken new thoughts and feelings.

Why write computational poems?

Computation, along with language, is an essential medium of my art. Just as painters work with paint, I work with computation.

This allows me to investigate the intersection of computing, language, and poetry, much as composing poems allows me to explore language.

Why write tiny computational poems?

Often, although not always, I write small poems. I’ve even organized my computational poetry page by size.

Writing very small-scale computational poems allows me to learn more about computing and its intersection with language and poetry. Not computing in the abstract, but computing as embodied in particular platforms, which are intentionally designed and have platform imaginaries and communities of use and practice surrounding them.

For instance, consider the modern-day Web browser. Browsers can do pretty much anything that computers can. They’re general-purpose computing platforms and can run Unity games, mine Bitcoin, present climate change models, incorporate the effects of Bitcoin mining into climate change models, and so on and so on. But it takes a lot of code for browsers to do complex things. By paring down the code, limiting myself to using only a tiny bit, I’m working to see what is most native for the browser, what this computational platform can most *essentially* accomplish.

Is the browser best suited to let us configure a linked universe of documents? It’s easy to hook pages together, yes, although social media sites now prohibit linking outside their walled gardens. Does it support prose better than anything else, even as flashy images and videos assail us? Well, the Web is predisposed to that: One essential HTML element is the paragraph, after all. When boiled down as much as possible, there might be something else that HTML5 and the browser are really equipped to accomplish. What if one of the browser’s most essential capabilities is that of … a poetry machine?

One can ask the same questions of other platforms. I co-authored a book about the Atari VCS (aka Atari 2600), and while one can develop all sorts of things for it (a BASIC interpreter, artgames, demos, etc.), and I think it’s an amazing platform for creative computing, I’m pretty sure it’s not inherently a poetry machine. The Atari VCS doesn’t even have built-in characters, a font in which to display text. On the other hand, the Commodore 64 allows programmers to easily manipulate characters; change their colors individually; make them move around the screen; replace the built-in font with one of their own; and mix letters, numbers, and punctuation with a set of other glyphs specific to Commodore. This system can do lots of other things — it’s a great music machine, for instance. But visual poetry, with characters potentially presented on a grid, is also a core capability of the platform, and very tiny programs can enact such poetry.

I’ve written at greater length about this in “A Platform Poetics / Computational Art, Material and Formal Specificities, and 101 BASIC POEMS.” In that article, I focus on a specific, ongoing project that involves the Commodore 64 and Apple II. More generally, these are the reasons I continue to pursue the composition of very small computational poems on several different platforms.

Advice Concerning the Increase in AI-Assisted Writing

Edward Schiappa, Professor of Rhetoric
Nick Montfort, Professor of Digital Media
10 January 2023

[In response to a request, this is an HTML version of a PDF memo for our colleagues at MIT]

There has been a noticeable increase in student use of AI assistance for writing recently. Instructors have expressed concerns and have been seeking guidance on how to deal with systems such as GPT-3, which is the basis for the very recent ChatGPT. The following thoughts on this topic are advisory from the two of us: They have no official standing within even our department, and certainly not within the Institute. Nonetheless, we hope you find them useful.

Newly available systems go well beyond grammar and style checking to produce nontrivial amounts of text. There are potentially positive uses of these systems; for instance, to stimulate thinking or to provide a variety of ideas about how to continue, from which students may choose. In some cases, however, the generated text can be used without a great deal of critical thought to constitute almost all of a typical college essay. Our four main suggestions for those teaching a writing subject are as follows:

  1. explore these technologies yourself and read what has been written about them in peer-reviewed and other publications,
  2. understand how these systems relate to your learning goals,
  3. construct your assignments to align with learning goals and the availability of these systems, and
  4. include an explicit policy regarding AI/LLM assistance in your syllabus.

Exploring AI and LLMs

LLMs (Large Language Models) such as those in the GPT series have many uses, for instance in machine translation and speech recognition, but their main implications for writing education have to do with natural language generation. A language model is a probability distribution over sequences of words; ones that are “large” have been trained on massive corpora of texts. This allows the model to complete many sorts of sentences in cohesive, highly plausible ways that are sometimes semantically correct. An LLM can determine, for instance, that the most probable completion of the word sequence “World War I was triggered by” is “the assassination of Archduke Franz Ferdinand” and can continue from there. While impressive in many ways, these models also have several limitations. We are not currently seeking to provide a detailed critique of LLMs, but advise that instructors read about the capabilities and limitations of AI and LLMs.
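
As a toy illustration of what “a probability distribution over sequences of words” means (and only that; the GPT series uses neural networks trained on massive corpora, not count tables), here is a minimal bigram sketch in Python. It continues the example prompt above word by word, using counts from a single invented training sentence.

from collections import Counter, defaultdict

# A toy bigram language model: count which word most often follows each word
# in a tiny corpus, then greedily continue a prompt. The "corpus" below is a
# single invented sentence, used only for illustration.
corpus = ("world war i was triggered by the assassination "
          "of archduke franz ferdinand").split()

counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    counts[current][following] += 1

def continue_prompt(word, length=8):
    words = [word]
    for _ in range(length):
        if not counts[words[-1]]:         # no known continuation
            break
        words.append(counts[words[-1]].most_common(1)[0][0])
    return " ".join(words)

print(continue_prompt("triggered"))
# triggered by the assassination of archduke franz ferdinand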

To understand more about such systems, it is worth spending some time with those that are freely available. The one attracting the most attention is ChatGPT. The TextSynth Playground also provides access to several free/open-source LLMs, including the formidable GPT-NeoX-20B. ChatGPT uses other AI technologies and is presented in the form of a chatbot, while GPT-NeoX-20B is a pure LLM that allows users to change parameters in addition to providing prompts.

Without providing a full bibliography, there is considerable peer-reviewed literature on LLMs and their implications. We suggest “GPT-3: What’s it good for?” by Robert Dale and “GPT-3: Its Nature, Scope, Limits, and Consequences” by Luciano Floridi & Massimo Chiriatti. These papers are from 2020 and refer to GPT-3; their insights about LLMs remain relevant. Because ChatGPT was released in late November 2022, the peer-reviewed research about it is scant. One recent article, “Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education,” offers a short human-authored introduction and conclusion, presenting sample text generated by ChatGPT between these.

Understanding the Relationship of AI and LLMs to Learning Goals

The advisability of any technology or writing practice depends on context, including the pedagogical goals of each class.

It may be that the use of a system like ChatGPT is not only acceptable to you but is integrated into the subject, and should be required. One of us taught a course dealing with digital media and writing last semester in which students were assigned to computer-generate a paper using such a freely-available LLM. Students were also assigned to reflect on their experience afterwards, briefly, in their own writing. The group discussed its process and insights in class, learning about the abilities and limitations of these models. The assignment also prompted students to think about human writing in new ways.

There are, however, reasons to question the practice of AI and LLM text generation in college writing courses.

First, if the use of such systems is not agreed upon and acknowledged, the practice is analogous to plagiarism. Students will be presenting writing as their own that they did not produce. To be sure, there are practices of ghost writing and of appropriation writing (including parody) which, despite their similarity to plagiarism, are considered acceptable in particular contexts. But in an educational context, when writing of this sort is not authorized or acknowledged, it does not advance learning goals and makes the evaluation of student achievement difficult or impossible.

Second, and relatedly, current AI and LLM technologies provide assistance that is opaque. Even a grammar checker will explain the grammatical principle that is being violated. A writing instructor should offer much better explanatory help to a student. But current AI systems just provide a continuation of a prompt.

Third, the Institute’s Communication Requirement was created (in part) in response to alumni reporting that writing and speaking skills were essential for their professional success, and that they did not feel their undergraduate education adequately prepared them to be effective communicators.[1] It may be, in the fullness of time, that learning how to use AI/LLM technologies to assist writing will be an important or even essential skill. But we are not at this juncture yet, and the core rhetorical skills involved in written and oral communication—invention, style, grammar, reasoning and argument construction, and research—are ones every MIT undergraduate still needs to learn.

For these reasons, we suggest that you begin by considering what the objectives are for your subject. If you are aiming to help students critique digital media systems or understand the implications of new technologies for education, you may find the use of AI and LLMs not only acceptable but important. If your subject is Communication Intensive, however, an important goal for your course is to develop and enhance your students’ independent writing and speaking ability. For most CI subjects, therefore, the use of AI-assisted writing should be at best carefully considered. It is conceivable that at some point it will become standard procedure to teach most or all students how to write with AI assistance, but in our opinion, we have not reached that point. The cognitive and communicative skills taught in CI subjects require that students do their own writing, at least at the beginning of 2023.

Constructing Assignments in Light of Learning Goals and AI/LLMs

Assigning students to use AI and LLMs is more straightforward, so we focus on the practical steps that can be taken to minimize use when the use of these systems does not align with learning goals. In general, the more detailed and specific a writing assignment, the better, as the prose generated by ChatGPT (for example) tends to be fairly generic and plain.

Furthermore, instructors are encouraged to consult MIT’s writing and communication resources to seek specific advice as to how current assignments can be used while minimizing the opportunities for student use of AI assistance. These resources include the Writing, Rhetoric, and Professional Communication program; the MIT Writing & Communication Center; and the English Language Studies program. It is our understanding that MIT’s Teaching + Learning Lab will be offering advice and resources as well.

Other possible approaches include:

• In-class writing assignments
• Reaction papers that require a personalized response or discussion
• Research papers requiring quotations and evidence from appropriate sources
• Oral presentations based on notes rather than a script
• Assignments requiring a response to current events, e.g., from the past week

The last of these approaches is possible because LLMs are trained using a fixed corpus and there is a cutoff date for the inclusion of documents.

Providing an Explicit Policy

Announcing a policy clearly is important in every case. If your policy involves the prohibition of AI/LLM assistance, we suggest you have an honest and open conversation with students about it. It is appropriate to explain why AI assistance is counter to the pedagogical goals of the subject. Some instructors may want to go into more detail by exploring apps like ChatGPT in class and illustrating to students the relative strengths and weaknesses of AI-generated text.

In any case: What this policy should be depends on the kinds and topics of writing in your subject. If you do not wish your students to use AI-assisted writing technologies, you should state so explicitly in your syllabus. If the use of this assistance is allowable within bounds, or even required because students are studying these technologies, that should be stated as well.

In the case of prohibition, you could simply state: “The use of AI software or apps to write or paraphrase text for your paper is not allowed.” Stronger wording could be: “The use of AI software or apps to write or paraphrase text for your paper constitutes plagiarism, as you did not author these words or ideas.”

There are automated methods (such as Turnitin and GPTZero) that can search student papers for AI-generated text and indicate, according to their own models of language, how probable it is that some text was generated rather than human-written. We do not, however, know of any precedent for disciplinary measures (such as failing an assignment) being instituted based on probabilistic evidence from such automated methods.

Conclusion

The use of AI/LLM text generation is here to stay. Those of us involved in writing instruction will need to be thoughtful about how it impacts our pedagogy. We are confident that the Institute’s Subcommittee on the Communication Requirement, the Writing, Rhetoric, and Professional Communication program, the MIT Writing & Communication Center, and the English Language Studies program will provide resources and counsel when appropriate. We also believe students are here at MIT to learn, and will be willing to follow thoughtfully advanced policies so they can learn to become better communicators. To that end, we hope that what we have offered here will help to open an ongoing conversation.[2]


[1] L. Perelman, “Data Driven Change Is Easy; Assessing and Maintaining It Is the Hard Part,” Across the Disciplines 6.2 (Fall 2009). Available: http://wac.colostate.edu/atd/assessment/perelman.cfm. L. Perelman, “Creating a Communication-Intensive Undergraduate Curriculum in Science and Engineering for the 21st Century: A Case Study in Design and Process.” In Liberal Education in 21st Century Engineering, Eds. H. Luegenbiehl, K. Neeley, and D. Ollis. New York: Peter Lang, 2004, pp. 77-94.

[2] Our thanks to Eric Klopfer for conversing with us concerning an earlier draft of this memo.

The Engagements of Difference Machines

Insufficient Memory with two viewers/readers behind the hanging photos

It’s been a while since I stopped over in Buffalo, but I’m finally unfrozen, and I’m unfreezing my blog, too, to comment a bit on the exhibit I saw — this was the purpose of my brief wintry sojourn — Difference Machines: Technology and Identity in Contemporary Art at the Albright-Knox Northland. I visited the show with my spouse, Flourish; Tina Rivers Ryan (who curated the exhibit with Paul Vanouse) was kind enough to give us a big chunk of her day and provide a detailed tour.

When we speak of “identity,” we speak of difference. This is true whether we are philosophers like John Perry, who has interrogated what it is that makes objects identical to one another, or are focused on the social world. The works in Difference Machines explore how, in the context of computer and network technologies, people’s identities persist, rather than being erased as some imagined would happen. This means that some can be grouped together and identified in stereotypical, harmful, even lethal ways — but there are also glimpses of how, more positively, people can identify with one another.

I got to see Difference Machines just before it closed on January 16, so I have to settle for telling you about what you missed. One of the more remarkable aspects of the exhibit is that it was light-flooded and in a space where sound could call out invitingly from some of the pieces without derailing one’s experience of artworks nearby. While I would exhaust myself (and you, dear reader) were I to try to thoroughly review the show and relate much of what was so compelling about it, I’ll at least write some about three of the works in it, to give an idea of what machines were running and how they engaged with shared themes.

Many of the artworks spoke against the concept of a flattened, egalitarian cyberspace (metaverse?), but the photos and texts by Sean Fader in his 2020 Insufficient Memory go beyond this. They document hate crimes against LGBTQ+ people, including many murders and instances where people have been provoked to kill themselves. As a viewer approaches, the low-res photos, taken on a very early consumer digital camera, have a beauty that inheres in the often natural scenes, is intensified by how they are framed, and becomes even more compelling because of the lossy, “cool” (McLuhan’s term) nature of these highly compressed images. Walking around to the back, the viewer, become reader, discovers how each photo is the scene of a brutal crime. The victims did not find positive community through common identity; they were marked as different and in most cases directly killed because of their difference.

‘Ye or Nay at Difference Machines, two computers set up with the game
‘Ye or Nay, A.M. Darke, 2020 (Photo by Tina Rivers Ryan)

To give an idea of the range of this show, turn to ‘Ye or Nay by A.M. Darke (2020). This artwork is a game for two that invites a chat as each player tries to pick out the card that is showing on the other’s screen. It’s not some antigame or disorientation machine, but an actual fun game with a compelling soundtrack, graphics, and interface. (Go play it online!) The cards all display portraits of Black male celebrities. Part of the concept is that some will be better equipped to play this game for various reasons — they will have a more extensive vocabulary to describe the hairstyles shown, for instance, or they will know more about who was born where. The game/artwork playfully tickles players to consider what they’re able to articulate (or not). My only critique is that the piece is already showing its age in one minor way: The apostrophe at the beginning should now be removed!

Level of Confidence’s face-recognition software finds a match for Flourish
Level of Confidence, Rafael Lozano-Hemmer, 2015 (Photo by Tina Rivers Ryan)

The last piece I’ll turn to is by a well-known digital media artist, Rafael Lozano-Hemmer. His Level of Confidence (2015) is a free and open source software project in which biometric software matches the face of the viewer (also the viewed, in this case) to one of 43 students who were kidnapped from the Ayotzinapa normalista school in Iguala, Guerrero, Mexico. The project memorializes this event and exposes the working of a face-recognition system which, it seems, is doomed to never make a match. As I read the piece, though, this artwork also uses computer technology to present the viewer with someone who looks like them, making the connection to this kidnapping more personal and acute. This does not result in some sort of naïve empathy machine. A museum visitor in Buffalo is not likely to suddenly feel, “ah! It could have been me being kidnapped!” But the piece is also not purely a slam on surveillance technology, as I see it. The face-recognition system that is central to this interactive artwork is used to invite a novel viewing and works to help keep an event alive in memory.

Well, I could go on! There were many other great works in the show. But I can’t write a catalogic discussion of them all, so I’ll have to let those three stand as examples of the different approaches artists in Difference Machines took.

A 6 byte Commodore 64 Demo

What if I told you ... 7e 00 8d 20 d0 50 is a Commodore 64 demo

If you thought my last post about a 32 byte (plus 2 byte load address) Commodore 64 demo was esoteric, wait until you burrow into this one.

Back in March at Lovebyte I released a C64 demo that is a total of 6 bytes. I contrived this one so that the 4b of code ends up overwriting part of a zero-page routine that runs every time RETURN is pressed. The effect is a pulsing pattern on the border. (You can just as easily make the screen pulse, which I personally find less aesthetically pleasing because the pulsing in that case happens over any text that is on the screen. It’s also a bit more eye watering and more likely to trigger seizures.) While it’s a very simple effect, I don’t know of any demo at all for this platform that has this file size or any smaller one. Some extensive trickery was involved in injecting my code into existing memory contents to produce this effect.

You can download the demo, NONMONOCHROME.

For my several readers who enjoy reading 6502 assembly, here’s what I did:

; NONMONOCHROME
; 6b C64 demo (2b header + 4b)
; by nom de nom/TROPE

; RETURN to begin.
;
; This demo loops about every 1.45 seconds (on an NTSC
; machine).
;
; Holding down a key while this runs will produce an extra
; effect.
;
; RUN STOP - RESTORE will stop the program, but as soon as
; RETURN is pressed the demo will just start again. To
; restore normal function, it's necessary to power-cycle
; the C64.

.outfile    "nonmonochrome"

.word $007e  ; Wedge the code into CHRGET on the zero page.
.org  $007e  ; Note that CHRGET itself begins at $0073.

sta $d020    ; Set the border color with whatever's in the
; accumulator. The value varies as the code from $73
; through $7e runs. It oscillates between black and white
; for one phase and colorful for another phase. A detailed
; explanation of why is provided below.

.byte $50    ; This makes the next instruction bvc, clobbering
; $20. The following value in memory, $f0, had been used as
; beq. $50 $f0 = bvc $73, which (infinitely) takes us back to
; the beginning of CHRGET at $0073.

; A disassembly of CHRGET, left, and how it's been modified,
; right:
;
; .org  $0073                 .org  $0073
;
; chrget   inc txtptr         nonmono  inc txtptr
;          bne chrgot                  bne chrgot
;          inc txtptr+1                inc txtptr+1
; chrgot   lda                chrgot   lda
; txtptr   .word $0207        txtptr   .word $0207
; pointb   cmp #$3a                    cmp #$3a
;          bcs exit                    sta $d020
;          cmp #$20                    bvc nonmono
;          beq chrget
;          sec
;          sbc #$30
;          sec
;          sbc #$d0
; exit     rts

; CHRGET keeps increasing TXTPTR until it reaches the end of
; whatever BASIC input it's reading -- whether in immediate
; mode or when reading a BASIC program.
;
; NONMONO, however, loops forever and keeps increasing TXTPTR
; until it overflows past $ffff back to $0000.
;
; Because of the LDA TXTPTR, the accumulator is being
; consecutively loaded with each of the bytes in the C64's
; 64kb of accessible RAM and ROM. Then, after the CMP
; (which does nothing but slow the demo down by two cycles
; per loop), the contents of the accumulator are stored
; so as to set the border color.
;
; A large region of the C64's memory has $0 or $1 as the
; low nibble -- at least at power on, with no BASIC program
; loaded. Thus, black or white. Then, another large region
; (including ROM and the zeropage) has more "interesting"
; contents which manifest themselves as colors.

My demo was also taken up as an object of discussion in the first 7 minutes and 30 seconds of this YouTube video, where some of this explanation is provided as the demo runs on a C64c. This is a later model of the Commodore 64, not the system I used in testing it, so the pulsing effect is slightly different. Among other things, my name for the demo, NONMONOCHROME, makes less sense when it’s running on this computer!

It could be an interesting challenge for others to try writing a Commodore 64 demo of only 6 bytes total, or of fewer bytes. For reasons I’d be glad to discuss, it seems to me that injecting the code into some productive context (an existing routine such as CHRGET or elsewhere in memory) is the only workable approach.

C64 Coding Under (Many) Constraints

Screenshot from Tyger Tyger; black and white border, scattering of orange and black characters on blue

Yesterday I wrote a little demoscene production, an intro, called “Tyger Tyger.” It’s a Commodore 64 machine language program with 32 bytes of code and the requisite 2 byte header, found on all C64 PRG files. It only garnered third place out of five entries in the 256b compo at @party 2021, behind two impressive entries that were for a different platform (DOS) and went to the limit of allowable code (eight times as much).

I wrote this intro under a formal constraint (32 bytes of machine language code) and several process constraints. Specifically, I wrote it all yesterday, mostly on an Amtrak train, finishing up the sound once I got to the party itself in Somerville. I also exclusively used a hardware Commodore 64 running Turbo Macro Pro v1.2, rather than cross-assembling the code on a contemporary computer and testing it on an emulator. To be specific, I used an unofficial hardware version of the Commodore 64 called the Ultimate 64, which is a modern FPGA implementation of Commodore’s hardware—not, however, an emulator. I placed a SID chip (a 6581) in the machine, so the one analog component that would otherwise have needed to be emulated was an authentic original.

Writing the code exclusively on the train provided me with some additional challenges, because the power kept cutting out and my system (which has to be plugged into AC power) shut off each time.

I’d never used any version of Turbo Macro Pro before, neither the C64 assembler nor TMPx, the cross-assembler. I figured out the basics of how to use the editor and learned the syntax the assembler used. It’s quite an excellent piece of software—thanks, Style!

And tremendous thanks to Dr. Claw, who put on a safe, responsible demoparty—masks required when not eating, full vaccination required for all. It was a tiny one, yes, but it was a party! The event continues the important social and artistic traditions of the North American demoscene.

Golem and My Other Seven Computer-Generated Books in Print

Dead Alive Press has just published my Golem, much to my delight, and I am launching the book tonight in a few minutes at WordHack, a monthly event run by the NYC community gallery Babycastles.

This seems like a great time to credit the editors and presses I have worked with to publish several of these books, and to let you all know where they can be purchased, should you wish to indulge yourselves:

  • Golem, 2021, Dead Alive’s New Sight series. Thank you, Augusto Corvalan!
  • Hard West Turn, 2018, published by my Bad Quarto thanks to John Jenkins’s work on the Espresso Book Machine at the MIT Press Bookstore.
  • The Truelist, 2017, Counterpath. This book was the first in the Using Electricity series which I edit, and was selected and edited by Tim Roberts—thanks! Both Counterpath publications and my book from Les Figues are distributed by the nonprofit Small Press Distribution.
  • Autopia, 2016, Troll Thread. Thank you, Holly Melgard!
  • 2×6, 2016, Les Figues. This book is a collaboration between me and six others: Serge Bouchardon, Andrew Campana, Natalia Fedorova, Carlos León, Aleksandra Małecka, and Piotr Marecki. Thank you, Teresa Carmody!
  • Megawatt, 2014, published by my Bad Quarto thanks to Jeff Mayersohn’s Espresso Book Machine at the Harvard Book Store.
  • #!, 2014, Counterpath. Thank you, Tim Roberts!
  • World Clock, 2013, published by my Bad Quarto thanks to Jeff Mayersohn’s Espresso Book Machine at the Harvard Book Store.

The code and text to these books are generally free, and can be found on nickm.com, my site. They are presented in print for the enjoyment of those who appreciate book objects, of course!

Generative Unfoldings, Opening April 1, 2021

Generative Unfoldings, 14 images from 14 generative artworks

Generative Unfoldings is an online exhibit of generative art that I’ve curated. The artworks run live in the browser and are entirely free/libre/open-source software. Sarah Rosalena Brady, D. Fox Harrell, Lauren Lee McCarthy, and Parag K. Mital worked with me to select fourteen artworks. The show features:

  • Can the Subaltern Speak? by Behnaz Farahi
  • Concrete by Matt DesLauriers
  • Curse of Dimensionality by Philipp Schmitt
  • Gender Generator by Encoder Rat Decoder Rat
  • Greed by Maja Kalogera
  • Hexells by Alexander Mordvintsev
  • Letter from C by Cho Hye Min
  • Pac Tracer by Andy Wallace
  • P.S.A.A. by Juan Manuel Escalante
  • Seedlings_: From Humus by Qianxun Chen & Mariana Roa Oliva
  • Self Doubting System by Lee Tusman
  • Someone Tell the Boyz by Arwa Mboya
  • Songlines by Ágoston Nagy
  • This Indignant Page: The Politics of the Paratextual by Karen ann Donnachie & Andy Simionato

There is a (Screen) manifestation of Generative Unfoldings, which lets people run the artworks in their browsers. In addition, a (Code) manifestation provides a repository of all of the free/libre/open-source source code for these client-side artworks. This exhibit is a project of MIT’s CAST (Center for Art, Science & Technology) and part of the Unfolding Intelligence symposium. The opening, remember, is April 1, 2021! See the symposium page, where you can register (at no cost) and find information about joining us.

Amazing Quest Q&A

Amazing Quest should be completely open to the interpretation of players, to their appreciation of it, and, if they choose, to their rejection of it.

I refrained from discussing anything about the game during the IF Comp. Now that it’s over, I am glad to answer some questions that have arisen—with the earnest hope that my answers don’t preclude people from coming up with their own interpretations and responses.

These aren’t really frequently asked questions, but they are all actually questions that have been asked at least once. When I quote directly, the quotations are from anonymous feedback from IF Comp players. Whether quoted or in paraphrase, all of these are real questions or responses that I’ve gotten.

Q: Are you trolling? Making a joke? Playing a prank?

A: No.

Q: Why did you enter this thing, which isn’t very much like IF as we understand it, in the IF Comp?

A: Because I hoped some people would enjoy it and be provoked by it (in a positive way) when they encountered this game in the context of the Comp. That may have only happened twelve times, based on the ratings that were greater than 5, but perhaps I provided some interest and joy to these twelve players that outweighed the other players’ confusion and disappointment.

Q: As a computer program, what exactly is Amazing Quest?

A: A quasi-spoiler here, but: it is a 12-line Commodore 64 BASIC program, with each line being of the maximum length that can be typed in: 80 characters, using all keyword abbreviations and other shortcuts. That makes it the maximal Commodore 64 BASIC program that will fit on one screen. On the second half of the last logical line (line 11), which is the last physical line of the screen, I included a sort of informal all-permissive free software license.

Q: Is this an experiment?

A: I don’t find it interesting to produce any IF, other digital games, other digital literature, or other digital art, unless what I’m doing is an experiment. So, of course.

Q: Is this a parody?

A: I don’t consider it to be, since you’re asking me. If you have thoughtfully considered the game and that’s what you take away from it, I respect your view.

Q: What am I supposed to do when playing this?

A: Since you’ve asked me, I’ve tried to explain this quite explicitly in the introduction: “If you allow your imagination to help you elaborate each stop on your journey, and if you truly get into the mindset of the returning wanderer, Amazing Quest will offer you rewards as you play it again and again.” You can read that, and the game, however you want, but I’d say that Amazing Quest prompts you to imagine, from the standpoint of the main character. So, imagine.

Q: “Oh hey, it’s A Space Odyssey!”

A: For me at least, I did consider Amazing Quest to be a serious engagement with Homer’s Odyssey, and that was one of the things that motivated my work on the project. I’m glad that some people saw this aspect of the project. I hope that enriched some people’s experience of it.

Q: “I found the intro and strategy guide more compelling than the game itself.”

A: Personally I would have preferred that all elements of Amazing Quest (the introduction, the strategy guide, the BASIC game itself) had worked together to good effect for you, but I’m glad that you liked something about the project. My development of the short framing texts was meant to relate to the way that early games had similar supporting materials that were essential to the experience. In any case, I am glad someone liked these texts, because it took me a long time to type them up on my Smith Corona Classic 12 without making any mistakes.

Q: “I don’t suspect that you’re in it for the ratings.”

A: Only one of more than 100 games was going to win the Comp. (Well, this year it happens to be two, because of a tie—but you get my point.) I made this game for the IF Comp so that players would encounter Amazing Quest and play it in that context, in case they found it compelling in that context.

Q: “I guess the point was go ‘behind the scenes’ and read the program in BASIC? That isn’t a particularly interesting goal…”

A: It wasn’t interesting to you, but there are many people in the world. One player ported the game to BBC Micro BASIC, so it seems like the nature of this game as a BASIC program was somehow interesting to that person. Even apparently simple projects like this may have many complex aspects to them which aren’t evident at first. And perhaps only a very few people will be interested in particular aspects. So in this case, perhaps this “behind the scenes” aspect is important to the person who ported it to the BBC Micro, and maybe it will be interesting to people who know Commodore 64 BASIC very well. I’m frankly surprised that someone engaged with the BASIC program so thoroughly during the Comp. It easily could have taken a few decades for anyone to really become interested in the game’s code, if that was ever going to happen.

Q: SPOILER, select the following text to read: The player’s input has no effect on the outcome, and that’s stupid.

A: Although this isn’t a question, I’ll mention: If you understand a little more about Commodore 64 BASIC, you’d find that the user’s input does have an effect on the outcome.

Q: SPOILER, select the following text to read: Okay, but whether you answer yes or no has no effect on the outcome.

A: This, while also not a question, is true. So, let’s begin with whether this is fair to players of the game. Does that contradict anything said on the main page (“I must decide as if it all depends on me, trust as if it all depends on the gods”), within the game itself (“The gods grant victory”), in the introduction, or in the strategy guide? And in terms of how this game relates to the Odyssey, in which a character is fated to return home, alone, from the beginning, and is then brought home by the will of the gods—is there any contradiction there? To continue along these lines, even understanding this aspect of the game, if you decided to go on raids at every opportunity, plundering away, would that be any different than if you always declined to raid? Is your intentional choice irrelevant, even if the outcome is “random”? With all of this in mind, consider something more concrete and immediate. Let’s consider people who live in the United States who voted in the 2020 election. Why would they do that? Realistically, an individual’s vote had no more effect than any one particular Y/N response in Amazing Quest. Let’s heap one more thing on here, purely related to what one might call gameplay. If you want to imagine what’s going on—a journey home—why would this particular property of the game in any way stifle your imagination?

Q: Was this project motivated by anything besides the Odyssey?

A: Yes. I’ll be discussing this and other matters related to Amazing Quest at the Seattle area IF meetup, which will take place online on December 13, 2020 at 2pm PST. If you want to join, you’ll need to email the organizers; see the intfiction.org forum for more information. I hope this group doesn’t regret their plan to host the author of the 98th place IF Comp 2020 game! I’m certainly still looking forward to discussing Amazing Quest.

“Peaceful Protesters” but no “Peaceful Police”

About four million Google hits for “peaceful protesters,” only about 55,000 for “peaceful police.” Anyone who has been reading the news will have seen the phrase “peaceful protesters” again and again—and probably will not have seen this other phrase. Does that mean peaceful protesters outnumber peaceful police more than 70 to 1? Or at least that we think and speak as if this is the case?

Linguistics does not support this conclusion. In her 1949 book The Second Sex, Simone de Beauvoir gave us the basis for understanding how maleness is the norm in society and language. The phenomenon here is that of markedness, having a default form and a marked form. “Actor” can be a generic term for anyone who acts, but “actress” is used only for the special, marked case—women. As Edwin L. Battistella discusses in The Logic of Markedness, there are exceptions: “male nurse” is the marked case for this profession, because of “the social fact that nurses are most commonly female.”


So when the news media speaks or writes about “peaceful protesters,” they are using the marked case. It’s understood implicitly that “protesters” are not generally peaceful. The exceptional ones are the peaceful ones, like the small percentage of male nurses. This is quite evidently false, but doesn’t prevent journalists from using the phrase again and again.

Amirite? Let’s check to see approximately how many Google search results there are for “violent protesters.” In my filter bubble, and admitting that these searches cover all English-language discussion of all protesters globally and historically, it’s 445,000 results, a tiny sliver compared to the “peaceful” ones.

I admit that there are other reasons people would use an adjective such as “peaceful.” It can be used for emphasis rather than to indicate the marked case. But if that’s what’s going on, that’s a huge amount of emphasis. And, I would expect such emphasis to often be accompanied by some sort of marker, an adverb such as “just” or “only” or something along those lines, which I seldom see in recent news.

Now, if you don’t buy my argument about markedness and instead think that “peaceful” and “violent” just represent how many protesters we think are peaceful and how many we think are violent, this still makes no sense at all. Watch the videos. Watch the livestreams. If you can, go out on the streets. Not one in ten protesters is violent in any objective sense.

If you do agree that “peaceful protesters” is being used as the marked case, as if these protesters were in a slim minority, that is an extraordinary and bizarre confusion, truly unhinged from reality.

Why not just call the great mass of so-called peaceful protesters “protesters,” explaining—if for some reason it needs to be explained—that we are lawfully exercising our rights to speak and assemble?

Post Hoc, An Online Art Show

Please enjoy Post Hoc, a show I’ve put together with generous contributions from a baker’s dozen artists and eight writers. There was no pre-established theme for Post Hoc, which was prompted by our inability to get to IRL galleries and museums. Artists were simply asked for digital images, any digital image they considered an artwork. (Several works in the show do have other manifestations.) The work in the show is all from 2020. I solicited 1000–1200 character responses to each piece.

Agnieszka Kurant   response by Mary Flanagan

Christian Bök   response by Paul Stephens

Daniel Temkin   response by Craig Dworkin

Derek Beaulieu   response by Amaranth Borsuk

Forsyth Harmon   response by Simon Morris & Valérie Steunou

Lauren Lee McCarthy   response by Daniel Temkin

Lilla LoCurto & Bill Outcault   response by Fox Harrell

Olia Lialina   response by Mary Flanagan

Manfred Mohr   response by Craig Dworkin

Mark Klink   response by Daniel Temkin

Renée Green   response by Paul Stephens

Sly Watts   response by Fox Harrell

Susan Bee   response by Amaranth Borsuk

You can scroll through the entire Post Hoc show as a single page. However, you’ll only see the images at their original size, and be able to read the responses, if you go to each post individually.