WordHack Book Table

This Thursday, May 21, 2020 at 7pm Eastern Time brings another great WordHack!

A regular event at Babycastles here in New York City, this WordHack will be fully assumed into cyberspace, hosted as usual by Todd Anderson but this time with two featured readings (and open mic/open mouse) viewable on Twitch. Yes, this is the link to the Thursday May 21, 2020 WordHack!

There are pages for this event up on Facebook and withfriends.

I’m especially enthusiastic about this one because the two featured readers will be sharing their new, compelling, and extraordinary books of computer-generated poetry. This page is a virtual “book table” linking to where you can buy these books (published by two nonprofit presses) from their nonprofit distributor.

Travesty Generator cover

Lillian-Yvonne Bertram will present Travesty Generator, just published by Noemi Press. The publisher’s page for Travesty Generator has more information about how, as Cathy Park Hong describes, “Bertram uses open-source coding to generate haunting inquiring elegies to Trayvon Martin, and Eric Garner, and Emmett Till” and how the book represents “taking the baton from Harryette Mullen and the Oulipians and dashing with it to late 21st century black futurity.”

Data Poetry cover

Jörg Piringer will present Data Poetry, just published in his own English translation/recreation by Counterpath. The publisher’s page for Data Poetry offers more on how, as Allison Parrish describes it, Jörg’s book is a “wunderkammer of computational poetics” that “not only showcases his thrilling technical virtuosity, but also demonstrates a canny sensitivity to the material of language: how it looks, sounds, behaves and makes us feel.”

Don’t Venmo me! Buy Travesty Generator from Small Press Distribution ($18) and buy Data Poetry from Small Press Distribution ($25).

SPD is well equipped to send books to individuals, in addition to supplying them to bookstores. Purchasing a book helps SPD, the only nonprofit book distributor in the US. It also gives a larger share to the nonprofit publishers (Noemi Press and Counterpath) than if you were to get these books from, for instance, a megacorporation.

Because IRL independent bookstores are closed during the pandemic, SPD, although still operating, is suffering. You can also support SPD directly by donating.

I also suggest buying other books directly from SPD. Here are several that are likely to interest WordHack participants, blatantly including several of my own. The * indicates an author who has been a featured presenter at WordHack/Babycastles; the books next to those asterisks happen to all be computer-generated, too:

Thanks to those who want to dig into these books as avid readers, and thanks to everyone able to support nonprofit arts organizations such as Babycastles, Small Press Distribution, Noemi Press, and Counterpath.

Sonnet Corona

Sonnet Corona, detail from a particular generated poem in the browser

“Sonnet Corona” is a computer-generated sonnet, or if you look at it differently, a sonnet cycle or very extensive crown of sonnets. Click here to read some of the generated sonnets.

The sonnets generated are in monometer. That is, each line consists of a single metrical foot, in this case of exactly two syllables.

They are linked not by the last line of one becoming the first line of the next, but by being generated from the same underlying code: A very short web page with a simple, embedded JavaScript program.

Because there are three options for each line, there are 3¹⁴ = 4,782,969 possible sonnets.
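A minimal sketch of a generator of this kind, in Python rather than the JavaScript of the actual page, with hypothetical stand-in words rather than the poem’s vocabulary:

```python
import random

# Each of the 14 lines picks one of three options, so there are 3**14
# possible sonnets. The two-syllable words here are illustrative stand-ins.
OPTIONS = [["only", "never", "softly"]] * 14

def generate_sonnet(options=OPTIONS):
    return "\n".join(random.choice(line) for line in options)

print(3 ** 14)  # 4782969 possible sonnets
print(generate_sonnet())
```
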

I have released this (as always) as free software, so that anyone may share, study, modify, or make use of it in any way they wish. To be as clear as possible, you should feel free to right-click or Command-click on this link to “Sonnet Corona,” choose “Save link as…,” and then edit the file that you download in a text editor, using this file as a starting point for your own project.

This extra-small project has as its most direct antecedent the much more extensive and elaborate Cent mille milliards de poèmes by Raymond Queneau.

My thanks go to Stephanie Strickland, Christian Bök, and Amaranth Borsuk for discussing a draft of this project with me, thoroughly and on short notice.

Against “Epicenter”

New York City, we are continually told, is now the “epicenter” of the COVID-19 pandemic in the United States. Italy is the world’s “epicenter.” This term is used all the time in the news and was recently deployed by our mayor here in NYC.

I’m following up on a February 15 Language Log post by Mark Liberman about why this term is being used in this way. Rather than asking why people are using the term, I’m going to discuss how this word influences our thinking. “Epicenter” leads us to think about the current global pandemic in some unhelpful ways. Although less exciting, simply saying something like “New York City has the worst outbreak” would actually improve our conceptual understanding of this crisis.

“Epicenter” (on the center) literally means the point on the surface of the earth directly above the focus of a (below-ground) earthquake. It can be used figuratively — Sam did not keep the apartment very tidy; upon entering the kitchen, we found that it was the epicenter — but there’s only the one literal meaning.

One effect of the term is a sort of morphological pun: There’s a COVID-19 “epi-center” because this is an “epi-demic,” a disease which is “on the people.” But on March 11, the World Health Organization said that COVID-19 is worse than an epidemic: it’s a pandemic. Indeed, it is a global pandemic, spreading everywhere. So what might have been a useful pun back in February is now an understatement.

Beyond that, “epicenter” is metaphorical in the sense of directing us to a particular conceptual metaphor (see George Lakoff and Mark Johnson’s Metaphors We Live By, 1980, and Lakoff’s article “The Contemporary Theory of Metaphor,” 1993). This sort of metaphor is not just a rhetorical figure or flourish, not a surface feature of language like the “epi-” pun. It is a fundamental way of understanding a target domain (the pandemic) in terms of a source domain (an earthquake).

To give this complex metaphorical mapping a name, we can call it A HEALTH DISASTER IS A SEISMOLOGICAL DISASTER or more simply A PANDEMIC IS AN EARTHQUAKE. But these are just names; we’d need to explore what is being mapped from our earthquake-schema to our pandemic-schema to really think about what’s going on there.

I will note that this metaphor usefully emphasizes some things about the pandemic: its unexpected onset, the extensive devastation, and the cost to human life. While such mappings are always incomplete, if we try to make a more extensive mapping, we can find ourselves more confused than helped.

Earthquakes are localized to a certain place (unlike a global pandemic) and they occur during a short period of time (unlike a global pandemic). So the metaphor incorrectly suggests that our current disaster is happening once, in one place, and thus we will be able to clean up from this localized, short-term event with appropriate disaster relief.

The most critical aspect of our current crisis is not part of the earthquake metaphor at all. There is no entailment or suggestion from the earthquake metaphor that we are dealing with something that can be transmitted or is communicable. Instead of rallying to an earthquake meeting point, we now need to remain apart from each other to slow the spread of disease.

I’m going to go even deeper in thinking about the “epicenter” concept. Why does a global pandemic (a disease affecting “all the people”) have any sort of center at all?

Metaphor, as it is now understood, is based on ways of thinking that are grounded in bodily experience, including those cognitive structures called image schemas. One of these, detailed by Mark Johnson in The Body in the Mind, 1987, and also discussed by George Lakoff in Women, Fire, and Dangerous Things, 1987, is the CENTER-PERIPHERY image schema. It relates to the body, and our recognition of the heart, for instance, as being central and one of our toes as being more peripheral. We worry less if we’ve cut a toe than if we’ve cut our heart. Even if we lose a toe, we can keep living, which we can’t do without our heart.

So, since this pandemic has one or more CENTERS, does that mean that it’s not so important to worry about the PERIPHERIES of COVID-19? Do we need to be concerned about the center but, if we are somewhere on the margins, is it okay to have parties or get into other situations that may foster transmission of the virus? Is that the way to think about a global pandemic, where outbreaks are occurring everywhere? Obviously, I feel that it’s not best.

The alternative image schemas we can use to develop alternative metaphors are CONTAINMENT and LINEAR SCALE. “Outbreak” is pretty clearly based on the CONTAINMENT image schema. Some places do have outbreaks that are worse and more serious than other places. Hence, LINEAR SCALE. This would mean saying, for instance:

Italy has the “worst outbreak of the global pandemic”
NYC has the “worst US outbreak of the global pandemic”

These statements indicate that COVID-19 is a global outbreak that must be CONTAINED, and the implication is that it must be contained everywhere. It isn’t something that hits one place at one time, and doesn’t have a CENTER people can simply flee from in order to be safe. It also admits that the various outbreaks around the world are on a SCALE and can be worse in some places, as of course they are.

I shouldn’t need to explain that I have no expertise or background in public health; what I know, and the information I trust, comes from reading official public health sources (the WHO and CDC). It may seem silly to some (even if you’ve read this far) that I’ve gone on at such length about a single word that’s in the current discourse. I’ve bothered to write this because I’m a poet and have been studying metaphor (in the sense of conceptual metaphor) for many years. I believe the metaphors we live by are very important.

In George Lakoff and Mark Turner’s More Than Cool Reason, 1989, they make the case that metaphor is not what it was assumed to be in poetry discussions for many centuries. It is completely different, certainly not restricted to literature, and indeed is central to everyday thinking. However, in their view, the role of poets is extremely important: “Poets are artists of the mind.” They argue that we (poets) can help to influence our culture and open up more productive and powerful ways of thinking about matters of crucial importance, via developing and inflecting metaphors. I believe this, and hope that other poets will, too.

Sea and Spar Between 1.0.2

When it rains, it pours, which matters even on the sea.

Thanks to bug reports by Barry Rountree and Jan Grant, via the 2020 Critical Code Studies Working Group (CCSWG), there is now another new version of Sea and Spar Between which includes additional bug fixes affecting the interface as well as the generation of language.

As before, all the files in this version 1.0.2 are available in a zipfile, for those who care to study or modify them.

Sea and Spar Between 1.0.1

Stephanie Strickland and I published the first version of Sea and Spar Between in 2010, in Dear Navigator, a journal no longer online. In 2013 The Winter Anthology republished it. That year we also provided another version of this poetry system for Digital Humanities Quarterly (DHQ), cut to fit the toolspun course, identical in terms of how it functions but including, in comments within the code, what is essentially a paper about the detailed workings of the system. In those comments, we wrote:

The following syllables, which were commonly used as words by either Melville or Dickinson, are combined by the generator into compound words.

However, due to a programming error, that was not the case. In what we will now have to call Sea and Spar Between 1, the line:

syllable.concat(melvilleSyllable);

does not accomplish the purpose of adding the Melville one-syllable words to the variable syllable. It should have been:

syllable = syllable.concat(melvilleSyllable);

I noticed this omission only years later. As a result, the compound or kenning “toolspun” never was actually produced in any existing version of Sea and Spar Between, including the one available here on nickm.com. This was a frustrating situation, but after Stephanie and I discussed it briefly, we decided that we would wait to consider an updated version until this defect was discovered by someone else, such as a critic or translator.
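The shape of this bug is easy to see in miniature. Here is a Python sketch; the variable names only echo the JavaScript, and this is not the project’s code:

```python
# An expression that builds a new list does nothing unless its result is
# assigned back -- the same slip as the unassigned concat() in JavaScript.
syllable = ["sea", "spar"]
melville_syllable = ["tool", "spun"]

syllable + melville_syllable              # result discarded: the bug
assert syllable == ["sea", "spar"]        # nothing was added

syllable = syllable + melville_syllable   # assigning back: the fix
assert syllable == ["sea", "spar", "tool", "spun"]
```
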

It took a while, but a close reading of Sea and Spar Between by Aaron Pinnix, who considered the system’s output rather than its code, has finally brought this to the surface. Pinnix is writing a critique of several ocean-based works in his Fordham dissertation. We express our gratitude to him.

Adding 11 characters to the code (obviously a minor sort of bug fix, from that perspective) makes a significant difference (to us, at least!) in the workings of the system and the text that is produced. It restores our intention to bring Dickinson’s and Melville’s language together in this aspect of text generation. We ask that everyone reading Sea and Spar Between use the current version.

Updated 2020-02-02: Version 1.0.2 is now out, as explained in this post.

We do not have the ability to change the system as it is published in The Winter Anthology or DHQ, so we are presenting Sea and Spar Between 1.0.2 (originally 1.0.1) here on nickm.com. The JavaScript and the “How to Read” page indicate that this version, which replaces the previous ones, is 1.0.2.

Updated 2020-02-02: Version 1.0.2 is the current one now and the one which we endorse. If you wish to study or modify the code in Sea and Spar Between and would like the convenience of downloading a zipfile, please use this version 1.0.2.

Previous versions, not endorsed by us: Version 1 zipfile, and version 1.0.1 zipfile. These would be only of very specialized interest!

Incidentally, there was another mistake in the code that we discovered after the 2010 publication and before we finished the highly commented DHQ version. We decided not to alter this part of the program, as we still approved of the way the system functioned. Those interested are invited to read the comments beginning “While the previous function does produce such lines” in cut to fit the toolspun course.

Nano-NaNoGenMo or #NNNGM

Ah, distinctly I remember it was in the bleak November;
And each separate bit and pixel wrought a novel on GitHub.

April may be the cruelest month, and now the month associated with poetry, but November is the month associated with novel-writing, via NaNoWriMo, National Novel Writing Month. Now, thanks to an offhand comment by Darius Kazemi and the work of Hugo van Kemenade, November is also associated with the computer-generation of novels, broadly speaking. Any computer program, together with its output of 50,000 or more words, qualifies as an entry in NaNoGenMo, National Novel Generation Month.

NaNoGenMo does have a sort of barrier to entry: People often think they have to do something elaborate, despite anyone being explicitly allowed to produce a novel consisting entirely of meows. Those new to NaNoGenMo may look up to, for instance, the amazingly talented Ross Goodwin. In his own attempt to further climate change, he decided to code up an energy-intensive GPT-2 text generator while flying on a commercial jet. You’d think that for his next trick this guy might hop in a car, take a road trip, and generate a novel using an LSTM RNN! Those who look up to such efforts — and it’s hard not to, when they’re conducted at 30,000 feet and also quite clever — might end up thinking that computer-generated novels must use complex code and masses of data.

And yet, there is so much that can be done with simple programs that consume very little energy and can be fully understood by their programmers and others.

Because of this, I have recently announced Nano-NaNoGenMo. On Mastodon and Twitter (using #NNNGM) I have declared that November will also be the month in which people write computer programs of at most 256 characters that generate novels of 50,000 words or more. These can use Project Gutenberg files, as they are named on that site, as input. Or, they can run without using any input.
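To show how low the bar can be, here is a hypothetical entry of the most minimal allowed sort; it is an illustration of the two constraints, not one of the entries discussed below:

```python
# A program well under 256 characters whose output is a novel of at least
# 50,000 words -- the meow novel the NaNoGenMo rules explicitly permit.
program = 'print("meow " * 50000)'
assert len(program) <= 256

novel = "meow " * 50000
assert len(novel.split()) >= 50000
print(len(program), len(novel.split()))
```
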

I have produced three Nano-NaNoGenMo (or #NNNGM) entries for 2019. In addition to being not very taxing computationally, one of these happens to have been written on an extremely energy-efficient electric train. Here they are. I won’t gloss each one, but I will provide a few comments on each, along with the full code for you to look at right in this blog post, and with links to both bash shell script files and the final output.

OB-DCK; or, THE (SELFLESS) WHALE


perl -0pe 's/.?K/**/s;s/MOBY(.)DI/OB$1D/g;s/D.r/Nick Montfort/;s/E W/E (SELFLESS) W/g;s/\b(I ?|me|my|myself|am|us|we|our|ourselves)\b//gi;s/\r\n\r\n/
/g;s/\r\n/ /g;s//\n\n/g;s/ +/ /g;s/(“?) ([,.;:]?)/$1$2/g;s/\nEnd .//s’ 2701-0.txt #NNNGM

WordPress has mangled this code despite it being in a code element; use the following link to obtain a runnable version of it:

OB-DCK; or, THE (SELFLESS) WHALE code

OB-DCK; or, THE (SELFLESS) WHALE, the novel

The program, performing a simple regular expression substitution, removes all first-person pronouns from Moby-Dick. Indeed, OB-DCK is “MOBY-DICK” with “MY” removed from MOBY and “I” from DICK. Chapter 1 begins:

Call Ishmael. Some years ago—never mind how long precisely—having little or no money in purse, and nothing particular to interest on shore, thought would sail about a little and see the watery part of the world. It is a way have of driving off the spleen and regulating the circulation. Whenever find growing grim about the mouth; whenever it is a damp, drizzly November in soul; whenever find involuntarily pausing before coffin warehouses, and bringing up the rear of every funeral meet; and especially whenever hypos get such an upper hand of , that it requires a strong moral principle to prevent from deliberately stepping into the street, and methodically knocking people’s hats off—then, account it high time to get to sea as soon as can. This is substitute for pistol and ball. With a philosophical flourish Cato throws himself upon his sword; quietly take to the ship. There is nothing surprising in this. If they but knew it, almost all men in their degree, some time or other, cherish very nearly the same feelings towards the ocean with .

Because Ishmael is removed as the “I” of the story, on a grammatical level there is (spoiler alert!) no human at all left at the end of book.
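The core transformation can be sketched in Python; the regular expression below paraphrases part of the Perl substitution (it is not a full port of the program), and the sample sentence is just the novel’s famous opening:

```python
import re

# Delete first-person pronouns at word boundaries, case-insensitively,
# then collapse the doubled spaces left behind.
PRONOUNS = r"\b(I|me|my|myself|am|us|we|our|ourselves)\b"

def remove_first_person(text):
    text = re.sub(PRONOUNS, "", text, flags=re.IGNORECASE)
    return re.sub(r" +", " ", text)

print(remove_first_person("Call me Ishmael."))  # Call Ishmael.
```
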

consequence


perl -e 'sub n{(unpack"(A4)*","backbodybookcasedoorfacefacthandheadhomelifenamepartplayroomsidetimeweekwordworkyear")[rand 21]}print"consequence\nNick Montfort\n\na beginning";for(;$i<12500;$i++){print" & so a ".n;if(rand()<.6){print n}}print".\n"' #NNNGM

consequence code

consequence, the novel

Using compounding of the sort found in my computer-generated long poem The Truelist and my “ppg 256-3,” this presents a sequence of things — sometimes formed from a single very common four-letter word, sometimes from two combined — that, it is stated, somehow follow from each other:

a beginning & so a name & so a fact & so a case & so a bookdoor & so a head & so a factwork & so a sidelife & so a door & so a door & so a factback & so a backplay & so a name & so a facebook & so a lifecase & so a partpart & so a hand & so a bookname & so a face & so a homeyear & so a bookfact & so a book & so a hand & so a head & so a headhead & so a book & so a face & so a namename & so a life & so a hand & so a side & so a time & so a yearname & so a backface & so a headface & so a headweek & so a headside & so a bookface & so a bookhome & so a lifedoor & so a bookyear & so a workback & so a room & so a face & so a body & so a faceweek & so a sidecase & so a time & so a body & so a fact […]
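The compounding scheme can be sketched in Python; this approximates the Perl above rather than reproducing it:

```python
import random

# Each noun is one of 21 very common four-letter words, with a 60% chance
# that a second word is fused on to form a compound.
WORDS = ("back body book case door face fact hand head home life "
         "name part play room side time week word work year").split()

def noun():
    w = random.choice(WORDS)
    if random.random() < 0.6:
        w += random.choice(WORDS)
    return w

print("a beginning" + "".join(" & so a " + noun() for _ in range(20)) + ".")
```
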

Too Much Help at Once


python -c "help('topics')" | python -c "import sys;print('Too Much Help at Once\nNick Montfort');[i for i in sorted(''.join(sys.stdin.readlines()[3:]).split()) if print('\n'+i+'\n') or help(i)]" #NNNGM

Too Much Help at Once code

Too Much Help at Once, the novel

The program looks up all the help topics provided within the (usually interactive) help system inside Python itself. Then, it asks for help on everything, in alphabetical order, producing 70k+ words of text, according to the GNU utility wc. The novel that results is, of course, an appropriation of text others have written; it arranges but doesn’t even transform that text. To me, however, it does have some meaning. Too Much Help at Once models one classic mistake that beginning programmers can make: Thinking that it’s somehow useful to read comprehensively about programming, or about a programming language, rather than actually using that programming language and writing some programs. Here’s the very beginning:

Too Much Help at Once
Nick Montfort

ASSERTION

The “assert” statement
**********************

Assert statements are a convenient way to insert debugging assertions
into a program:

assert_stmt ::= "assert" expression ["," expression]

A plot

So far I have noted one other #NNNGM entry, A plot by Milton Läufer, which I am reproducing here in corrected form, according to the author’s note:


perl -e 'sub n{(split/ /,"wedding murder suspicion birth hunt jealousy death party tension banishment trial verdict treason fight crush friendship trip loss")[rand 17]}print"A plot\nMilton Läufer\n\n";for(;$i<12500;$i++){print" and then a ".n}print".\n"'

Related in structure to consequence, but with words of varying length that do not compound, Läufer’s novel winds through not four weddings and a funeral, but about, in expectation, 735 weddings and 735 murders in addition to 735 deaths, leaving us to ponder the meaning of “a crush” when it occurs in different contexts:

and then a wedding and then a murder and then a trip and then a hunt and then a crush and then a trip and then a death and then a murder and then a trip and then a fight and then a treason and then a fight and then a crush and then a fight and then a friendship and then a murder and then a wedding and then a friendship and then a suspicion and then a party and then a treason and then a birth and then a treason and then a tension and then a birth and then a hunt and then a friendship and then a trip and then a wedding and then a birth and then a death and then a death and then a wedding and then a treason and then a suspicion and then a birth and then a jealousy and then a trip and then a jealousy and then a party and then a tension and then a tension and then a trip and then a treason and then a crush and then a death and then a banishment […]

Share, enjoy, and please participate by adding your Nano-NaNoGenMo entries as NaNoGenMo entries (via the GitHub site) and by tooting & tweeting them!

Gomringer’s Untitled Poem [“silencio”], an Unlikely Sonnet

The untitled poem by Eugen Gomringer that we can only call “silencio” is a classic, perhaps the classic, concrete poem. According to Marjorie Perloff’s Unoriginal Genius, the “silencio” version of the poem dates from 1953. In my 1968 edition of The Book of Hours and Constellations I find the German manifestation of this poem (with the word “schweigen”) and the English poem (with the word “silence”), on the same page at the very beginning of the book — but no “silencio.” The place where I do find “silencio” is An Anthology of Concrete Poetry from 1967, edited by Emmett Williams. My copy is the re-issue by Primary Information.

Williams mentions tendencies and tries not to too strongly characterize any particular poets in the anthology when he writes, in the introduction:

The visual element of their poetry [the concrete poets’ poetry] tended to be structural, a consequence of the poem, a “picture” of the lines of force of the work itself, and not merely textural. It was poetry beyond paraphrase … the word, not words, words, words or expressionistic squiggles …

There are several essential points here about the project of concrete poetry and how it differs from, for instance, the shapes of “Easter Wings” and “The Altar” in George Herbert’s The Temple, as well as the way Lewis Carroll presented the image of a mouse’s tail in words that tell the mouse’s tale in Alice’s Adventures in Wonderland. However brilliant these two writers were, in these cases they were using language to make pictures; the concrete poets, beginning with Gomringer, worked to create structures. Their poems are not just verse (lineated language), but made from lines of force. In many cases, as with the unnamed poem I must call “silencio,” an entire concrete poem can be understood to cohere as a word.

There are other interpretations of Gomringer’s poem that situate it in history, but I will give a simple one that situates it within the project of concrete poetry — followed by another that places it in a different and much longer-lived poetic tradition.

The lines of force of this poem are, most obviously, those that allow for the gap in the middle where the ground (the absence of text, the absence of “silencio”) becomes figure. As ink declares silence, or, if we read the text aloud, as our voice declares silence, attentive readers can’t help but notice a truer silence in the middle of the page.

At the next stage, there is silence between each “silencio,” horizontally and vertically. We usually overlook this sort of gap, which appears even when text is not presented on a grid. It too will be represented if we read the poem aloud, however, between each spoken word.

We can go further, although ear and eye would not agree about the silences. There are spaces, and thus silences of a visual sort, between each of the letters in “silencio,” too.

Fascinating, isn’t it, that John Cage’s 4’33” was composed and presented in 1952, preceding this poem? This poem, too, seems to structurally show, through its lines of force, that silence can take center stage.

In any case, without offering more than a brief appreciation, I mean to make it clear that this is a quintessential concrete poem. One can read it out loud, but that does not provide the listener with the effect of apprehending the structure of the poem on the page. The poem is not a picture of anything. It is a structure. And it is not squiggles or simply a bunch of words, even if the single lexeme “silencio” is repeated fourteen times. It is fitting to apprehend and read the whole poem as a word, not a bunch of words.

Accepting this, I would like to offer an interpretation of this poem that may seem perverse, but which I believe shows this poem’s radical versatility: It can be seen in the light of a poetic tradition that long predates concrete poetry. This poem is not only a concrete poem, but also a sonnet. Specifically, I’ll argue that although the repeated word is a Spanish word, it fits into the English-language tradition of the sonnet. Because concrete poetry is a transnational phenomenon and Gomringer writes in English as well as German and Spanish, this disjunction may be less unusual than it otherwise would be.

Consider that the poem consists of fourteen occurrences of “silencio,” which despite their unusual arrangement on the page can be read aloud as fourteen lines. It would be hard not to read them this way.

Because each word is the same, the poem follows the rhyme scheme of a sonnet — any rhyme scheme, including the Petrarchan or Shakespearean in English, including those typical in Spanish.

If some reader finds it impossible for the same line to be repeated fourteen times in a sonnet, I refer this reader to the 2002 “Sonnet” by Terrance Hayes, which consists of fourteen repetitions of the line “We sliced the watermelon into smiles.”

But is it metrical? The word “silencio” pronounced by itself has two metrical feet ( x / | x / ) and is in perfectly regular iambic dimeter. This is also the meter of Elizabeth Bishop’s last poem, “Sonnet,” which begins:

Caught — the bubble
in the spirit level,
a creature divided;
and the compass needle

There’s much more variation in Bishop’s poem, but the metrical regularity of Gomringer’s poem shouldn’t preclude it from being in this particular form. While I don’t have an example of a sonnet with repeated lines (like the one by Hayes) from before 1953, there are earlier sonnets in dimeter, or one, at least: a piece of light verse by Arthur Guiterman, published in The New Yorker on July 7, 1939.

Sonnets can be about anything, although the form does have a heritage. Reading the poem as a sonnet allows us to make a connection to the sonnet tradition if we wish. We can, for instance, ask whether this sonnet has anything to do with love, whether in the most traditional sense of love for a woman or, in John Donne and Herbert’s senses, religious love. Could the silence of this sonnet be that of being understood, and of not needing to say anything aloud?

Seeing this Gomringer poem as a sonnet also allows us to put it into conversation with other one-word texts (those that have several tokens but repeat a single type) that can also be viewed as sonnets, because they have fourteen tokens.

The one I know of, and which fascinates me, is Dance, a typing by Christopher Knowles that I saw contextualized as visual art in his 2015 solo show at the Philadelphia ICA. The page of this work is blank except for a line at the top that repeats the word “DANCE” (in capital letters) fourteen times, with a space between each occurrence. This makes for 83 characters: 5 × 14 = 70 for the word DANCE, plus the 13 spaces that go between each pair of words. While a sheet of paper is typically thought to accommodate 80 typewritten characters across its width, Knowles found that by beginning at the extreme left edge of the page and typing to the extreme right edge, he could fit exactly 83 onto it.
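The arithmetic is easy to check:

```python
# Fourteen DANCEs with a single space between each pair: 5*14 letters
# plus 13 spaces comes to exactly 83 characters.
line = " ".join(["DANCE"] * 14)
assert len(line) == 5 * 14 + 13 == 83
print(len(line))  # 83
```
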

The typing Dance can be read as a sonnet in hemimeter — a term used by George Starbuck for “half-feet,” and associated with light verse. Where “silencio” offers a more static and contemplative structure, I can’t help but imagine Knowles typing DANCE repeatedly, his hands dancing on the typewriter, as he also produced a text that is a score, instructing us to dance. Not so much a structure, it seems to me, but an exhortation and a trace of its making. And, of course, a text that can be read in the sonnet tradition, asking us to consider how dance, repeated, insistent, filling the width of the page completely, relates to love.

A Bit about Alphabit

During Synchrony 2019, on the train from New York City to Montreal, two of us (nom de nom and shifty) wrote a 64 byte Commodore 64 program which ended up in the Old School competition. (It could have also gone into the Nano competition for <=256 byte productions.) Our Alphabit edged out the one other fine entry in Old School, a Sega Genesis production by MopeDude also written on the train.

The small program we wrote is not a conventional or spectacular demo; like almost all of the work by nom de nom, it uses character graphics exclusively. But since we like sizecoding on the Commodore 64, we wanted to explain this small program byte by byte. We hope this explanation will be understandable to interested people who know how to program, even if they may not have much assembly or C64 experience.

To get Alphabit itself, download the program from nickm.com and run it in a C64 emulator or on some hardware Commodore 64. You can see a short video of Alphabit running on Commodore 64 and CRT monitor, for the first few seconds, for purposes of illustration.

              starting here,
              these bytes load
at:     02 08 01 00 00 9E 32 30
$0808   36 31 00 00 00 20 81 FF
$0810   C8 8C 12 D4 8C 14 D4 C8
$0818   8C 20 D0 AD 12 D0 9D F4
$0820   D3 8C 18 D4 D0 F5 8A 8E
$0828   0F D4 AE 1B D4 E0 F0 B0
$0830   F6 9D 90 05 9D 90 D9 AA
$0838   88 D0 E0 E8 E0 1B D0 DB

Load address. Commodore 64 programs (PRG files) have a very simple format: a two-byte load address, least significant byte first, followed by the machine code which will be loaded at that address. So this part of the file says to load at $0802. The BASIC program area begins at $0801, but as explained next, it’s possible to cheat and load the program one byte higher in memory, saving one byte in the PRG file.
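The load-address convention is simple enough to sketch in a few lines of Python (this is an illustration of the file format, not any tool used for the demo):

```python
# First eight bytes of the Alphabit PRG, as in the listing above.
prg = bytes([0x02, 0x08, 0x01, 0x00, 0x00, 0x9E, 0x32, 0x30])

# The two-byte load address comes first, least significant byte first.
load_address = prg[0] | (prg[1] << 8)
print(hex(load_address))  # 0x802

# Everything after those two bytes is loaded starting at that address.
code = prg[2:]
```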

BASIC bootloader, $0802–$080c: This program starts with a tiny BASIC program that will run when the user types RUN and presses ENTER. When run, this program, a bootloader, will execute the main machine code. In this case the program is “0 SYS2061” with the line number represented as 00 00, the BASIC keyword SYS represented by a single byte, 9E, and its argument “2061” represented by ASCII-encoded digits: 32 30 36 31. When run, this starts the machine code program at decimal address 2061, which is $080D, the beginning of the next block of bytes.

Advanced note: Normally a BASIC program would need at least one more byte, because the two bytes at $0801 and $0802 hold the link to the next BASIC line, that is, where the interpreter goes after the first line has finished executing. But for our bootloader, any non-null value will work as this link. Our program is going to run the machine code at $080d (decimal 2061) and then break, so we only need to fulfill one formal requirement: some nonzero value has to be in either $0801 or $0802. For our purposes, whatever is already in $0801 can stay there. That’s what allows this program to load at $0802, saving us one byte.
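To see how those bytes spell out “0 SYS2061,” here is a Python sketch decoding the bootloader from the byte values in the listing above (the byte at $0802 is part of the line link and is skipped):

```python
# BASIC bootloader bytes at $0803-$080a: line number, SYS token,
# the digits of the argument, and the end-of-line zero byte.
boot = bytes([0x00, 0x00, 0x9E, 0x32, 0x30, 0x36, 0x31, 0x00])

line_number = boot[0] | (boot[1] << 8)   # 00 00 -> line 0
sys_token = boot[2]                      # $9E is the token for SYS
target = int(boot[3:7].decode("ascii"))  # the digits "2061"

print(line_number, hex(sys_token), target, hex(target))  # 0 0x9e 2061 0x80d
```

So RUN jumps to $080d, the first byte of machine code.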

On the 6502: This processor provides three “variables”: the accumulator (a general-purpose register, which can be used with arithmetic operations) and the x and y registers (essentially counters, which can be incremented and decremented).

Initialization, $080d–$081a: This sets up two aspects of the demo, sound and graphics. Actually, after voice 3 is initialized, it is used not only to make sound, but also to generate random numbers for putting characters on screen. This is a special facility of the C64’s sound chip; when voice 3 is set to generate noise, one can also retrieve random numbers from the chip.

The initialization proceeds by clearing the screen using the Kernal’s SCINIT routine. When SCINIT finishes, the y register has $84 in it. It turns out that for our purposes the noise waveform register and the sustain/release register can both be set to $85, so instead of using two bytes to load a new value into y (ldy #$85), the program can simply increment y (iny), which takes only one byte. After storing $85 in those two registers, the goal is to set the border color to match the default screen color, dark blue, $06. Actually, any value with 6 as its second hex digit will work, so again the program can increment y to make it $86 and then use this to set the border color. Finally, the y register is going to count down the number of times each letter (A, B, C … Z) will be written onto the screen. Initially, the program puts ‘A’ on screen $86 times (134 decimal); for every subsequent letter, it puts the letter on screen 256 times — but that comes later. The original assembly for this initialization:
    jsr $ff81   ; SCINIT: clear the screen (Kernal routine)
    iny         ; $85 works for the next two...
    sty $d412   ; voice 3 noise waveform
    sty $d414   ; voice 3 SR
    iny         ; $86 works; low nybble needs to be $6
    sty $d020   ; set the border color to dark blue
Each letter loop, first part, $081b–$0826: This loop counts through each of the 26 letters. The top part of the loop has a loop within it in which some of the sound is produced; then there is just a single instruction after that.

Fortunately, the x register already is set up with $01, the screen code of the letter ‘A’, thanks to SCINIT. In this loop, the value of the current raster line (the lowest 8 bits of a 9-bit value, to be precise) is loaded into the accumulator. The next instruction stores that value in a memory location indexed by x; as x increases during the run of the program, this memory location will eventually be mapped to the sound chip registers for voices 1 and 2, starting at $d400, and this will make some sounds. This is what gives some higher-level structure to the sound in the piece, which would otherwise be completely repetitive. After this instruction, however many characters are left to put onto the screen (counting down from 255 to 0) goes into the volume register, which causes the volume to quickly drop and then spike to create a rhythmic effect. With the noise turned on it makes a percussive sound. All of this takes place again and again until that raster line value is 0, which happens twice per frame, 120 times a second.

After all of this, the value in x (which letter, A–Z, is the current one) is transferred into the accumulator, necessary because of how the rest of the outer loop is written. The original assembly for the beginning of the outer loop:
raster:
    lda $d012   ; get raster line (lowest 8 bits)
    sta $d3f4,x ; raster line --> some sound register
    sty $d418   ; # of chars left to write --> volume
    bne raster
    txa
Get random, $0827–$0830: This code does a bit more sound work, using the x register to set the frequency. Since this is the current letter value, it increases throughout the run of the program, and the pitch generally rises. Then, a random value (well, not truly random, but “noisy” and produced by the sound chip’s noise generator) is loaded into the x register, with the program fetching values until one is in the range $00–$ef (decimal 0–239). If the value has to be obtained multiple times, the frequency gets set multiple times, too, adding some glitchiness to the sound. Because the random value is bounded, the program will place the characters in a 40 character × 6 line (240 character) region.
random:
    stx $d40f       ; current letter --> freq
    ldx $d41b       ; get random byte from voice 3
    cpx #240
    bcs random
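This bounding is rejection sampling: values at or above 240 are simply thrown away, which keeps the accepted offsets uniform over the region. A Python sketch of the same logic, with an ordinary pseudorandom generator standing in for the SID’s noise byte (an assumption for illustration, not an emulation):

```python
import random

def accepted_offset(rng):
    """Model of the cpx #240 / bcs random loop: reject bytes >= 240."""
    while True:
        x = rng.randrange(256)   # stand-in for reading $d41b
        if x < 240:
            return x

rng = random.Random(1)
offsets = [accepted_offset(rng) for _ in range(1000)]
assert all(0 <= o <= 239 for o in offsets)

# 240 accepted offsets index a 40-column by 6-line block of the screen.
assert 240 == 40 * 6
```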
Each letter loop, last part, $0831–$083a: In the bottom part of this loop, the characters are put onto the screen by writing to screen memory and color memory. Screen memory starts at $0400, and $0590 is the starting point of our 6-line rectangle in the middle of the screen. The corresponding point in color memory is $d990. Our current character (A–Z) is in the accumulator at this point, while the x register, used to offset from $0590 and $d990, has a random value. After putting the accumulator’s value (as a letter) into screen memory and (as a color) into color memory, the accumulator is transferred back into the x register, a counter. Then the y register (counting down to 0) is decremented. The program keeps doing this whole process, the “each letter loop,” until y reaches 0.
    sta $0590,x  ; jam the current letter on screen
    sta $d990,x  ; make some colors with the value
    tax
    dey
    bne raster
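Since screen memory starts at $0400 and each screen line is 40 characters wide, a quick calculation (in Python, just to illustrate the address arithmetic) shows where $0590 falls:

```python
# Offset of the rectangle's start from the beginning of screen memory.
offset = 0x0590 - 0x0400
row, col = divmod(offset, 40)    # 40 characters per screen line
print(row, col)  # 10 0

# Color RAM starts at $d800 and mirrors screen memory byte for byte,
# so $d990 is the color location for the same cell.
assert 0xD990 - 0xD800 == offset
```

So the six-line band begins at the left edge of line 10 of the 25-line screen, roughly centered vertically.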
Outer loop, $083b–$083f: This is the code for counting from 1 to 26, A to Z. Since the x register stores the current letter, it is incremented here. It is compared with decimal 27; if the register has that value, the program is done and it will fall through to whatever is next in memory … probably $00, which will break the program, although anything might be in memory there. It would have been nice to have an explicit brk as part of this PRG, but hey, this is a 64-byte demo with a BASIC bootloader, written in one day on a train. If the program has more letters to go through, it branches all the way back up to the beginning of the “each letter loop.”
    inx
    cpx #27     ; have we gotten past ‘Z’?
    bne raster
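Taken together, the loops write a predictable number of characters to the screen. Here is a rough Python model of the countdown (an assumption based on the walkthrough above, not a 6502 emulation): y starts at $86, and because dey wraps from 0 to 255, every letter after ‘A’ is placed 256 times.

```python
counts = {}
y = 0x86                           # left in y by the initialization
for letter_code in range(1, 27):   # screen codes 1-26 are A-Z
    placed = 0
    while True:
        placed += 1
        y = (y - 1) & 0xFF         # 8-bit decrement, wrapping at zero
        if y == 0:
            break
    counts[letter_code] = placed

print(counts[1], counts[2], sum(counts.values()))  # 134 256 6534
```

By this count, 6534 characters are jammed into the 240-cell region before the program falls through and breaks.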

Taper #2 Is Out

The second issue of Taper, a literary magazine featuring small-scale computational work, is now online.

The second issue was edited by Sebastian Bartlett, Lillian-Yvonne Bertram, Angela Chang, Judy Heflin, and Rachel Paige Thompson, working collectively. Bad Quarto (my micropress) publishes the journal.

The call for issue #3 is posted. The deadline is February 18 (2019).

Taper #2 features 18 works by Sebastian Bartlett, Kyle Booten, Angela Chang, Augusto Corvalan, Kavi Duvvoori, Esen Espinsa, Leonardo Flores, Judy Heflin, Chris Joseph, Vinicius Marquet, Stuart Moulthrop, Everest Pipkin, Mark Sample, and William Wu. Go take a look!

Hard West Turn at Time Farm

For two weeks only (today through October 23), my limited-edition computer-generated book, Hard West Turn, is available for reading in an installation at Time Farm, underneath the MIT Press Bookstore, 301 Massachusetts Ave, Cambridge, MA.

Time Farm entrance

Hard West Turn awaiting a reader

Hard West Turn open to the title page

Hard West Turn is a computer-generated novel about gun violence in the United States. The copy exhibited is one of three artist’s proofs; only 13 copies (one for each of the original states) were made for sale. The generating program is free software, but the specific copy-edited text of this book has only been made available in print. Hard West Turn will be regenerated annually for limited-edition publication each July 4.

Consult the staff of the MIT Press Bookstore for access to Time Farm. Visitors are asked to remain for an hour without using devices such as phones or computers.

A Web Reply to the Post-Web Generation

At the recent ELO conference in Montréal Leonardo Flores introduced the concept of “3rd Generation” electronic literature. I was at another session during his influential talk, but I heard about the concept from him beforehand and have read about it on Twitter (a 3rd generation context, I believe) and Flores’s blog (more of a 2nd generation context, I believe). One of the aspects of this concept is that the third generation of e-lit writers makes use of existing platforms (Twitter APIs, for instance) rather than developing their own interfaces. Blogging is a bit different from hand-rolled HTML, but one administers one’s own blog.

When Flores & I spoke, I realized that I have what seems like a very similar idea of how to divide electronic literature work today. Not exactly the same, I’m sure, but pretty easily defined and I think with a strong correspondence to this three-generation concept. I describe it like this:

  • Pre-Web
  • Web
  • Post-Web

To understand the way I’m splitting things up, you first have to agree that we live in a post-Web world of networked information today. Let me try to persuade you of that, to begin with.

The Web is now at most an option for digital communication of documents, literature, and art. It’s an option that fewer and fewer people are taking. Floppy disks and CD-ROMs also remain options, although they are even less frequently used. The norm today has more to do with app-based connectivity and less with the open Web. When you tweet, and when you read things on Twitter, you don’t need to use the Web; you can use your phone’s Twitter client. Facebook, Instagram, and Snapchat would be just fine if the Web was taken out behind the shed and never seen again. These are all typically used via apps, with the Web being at most an option for access.

The more companies divert use of their social networks from the Web to their own proprietary apps, the more they are able to shape how their users interact — and their users are their products, that which they offer to advertisers. So, why not keep moving these users, these products, into the better-controlled conduits of app-based communication?

Yes, I happen to be writing a blog entry right now — one which I don’t expect anyone to comment on, like they used to in the good old days. There is much more discussion of things I blog about on Twitter than in the comment section of my blog; this is evidence that we are in the post-Web era. People can still do Web (and even pre-Web) electronic literature and art projects. As Jodi put it in an interview this year, “You can still make websites these days.” This doesn’t change that we reached peak Web years ago. We live now in a post-Web era where some are still doing Web work, just as some are still doing pre-Web sorts of work.

In my view, the pre-Web works are ones in HyperCard and the original Mac and Windows Storyspace, of course. (It may limit your audience, but you can still make work in these formats, if you like!) Some early pieces of mine, such as The Help File (written in the standard Windows help system) and my parser-based interactive fiction, written in Inform 6, are also pre-Web. You can distribute parser-based IF on the Web, and you can play it in the browser, but it was being distributed on an FTP site, the IF Archive, before the Web became the prevalent means of distribution. (The IF Archive now has a fancy new Web interface.) Before the IF Archive, interactive fiction was sold on floppy disk. I consider the significant number of people making parser-based interactive fiction today to be doing pre-Web electronic literature work that happens to be available on the Web, or sometimes in app form.

Also worth noting is that Rob Wittig’s Blue Company and Scott Rettberg’s Kind of Blue are best considered pre-Web works by my reckoning, as email, the form used for them, was in wide use before the Web came along. (HTML is used in these email projects for formatting and to incorporate illustrations, so the Web does have some involvement, but the projects are still mainly email projects.) The Unknown, on the other hand, is definitely an electronic literature work of the Web.

Twitterbots, as long as they last, are great examples of post-Web electronic literature, of course.

With this for preface, I have to say that I don’t completely agree with Flores’s characterization of the books in the Using Electricity series. It could be because my pre-Web/Web/post-Web concept doesn’t map onto his 1st/2nd/3rd generation idea exactly. It could also be that it doesn’t exactly make sense to name printed books, or for that matter installations in gallery spaces, as pre-Web/Web/post-Web. This type of division makes the most sense for work one accesses on one’s own computer, whether it got there via a network, a floppy disk, a CD-ROM, or some other way. But if we wanted to see where the affinities lie, I would have to indicate mostly pre-Web and Web connections; I think there is only one post-Web Using Electricity book that has been released or is coming out soon:

  1. The Truelist (Nick Montfort) is more of a pre-Web project, kin to early combinatorial poetry but taken to a book-length, exhaustive extreme.

  2. Mexica (Rafael Pérez y Pérez) is more of a pre-Web project based on a “Good Old-Fashioned AI” (GOFAI) system.

  3. Articulations (Allison Parrish) is based on a large store of textual data, Project Gutenberg, shaped into verse with two different sorts of vector-space analyses, phonetic and syntactical. While Project Gutenberg predates the Web by almost two decades, it became the large-scale resource that it is today in the Web era. So, this would be a pre-Web or Web project.

  4. Encomials (Ranjit Bhatnagar), coming in September, relies on Twitter data, and indeed the firehose of it, so is a post-Web/3rd generation project.

  5. Machine Unlearning (Li Zilles), coming in September, is directly based on machine learning on data from the open Web. This is a Web-generation project which wouldn’t have come to fruition in the walled gardens of the post-Web.

  6. A Noise Such as a Man Might Make (Milton Läufer), coming in September, uses a classic algorithm from early in the 20th Century — one you could read about in Scientific American in the 1980s, and see working on USENET — to conflate two novels. It seems like a pretty clear pre-Web project to me.

  7. Ringing the Changes (Stephanie Strickland), coming in 2019, uses the combinatorics of change ringing and a reasonably small body of documents, although larger than Läufer’s two books. So, again, it would be pre-Web.

Having described the “generational” tendencies of these computer-generated books, I’ll close by mentioning one of the implications of the three-part generational model, as I see it, for what we used to call “hypertext.” The pre-Web allowed for hypertexts that resided on one computer, while the Web made it much more easily possible to update a piece of hypertext writing, collaborate with others remotely, release it over time, and link out to external sites.

Now, what has happened to Hypertext in the post-Web world? Just to stick to Twitter, for a moment: You can still put links into tweets, but corporate enclosure of communications means that the wild wild wild linking of the Web tends to be more constrained. Links in tweets look like often-cryptic partial URLs instead of looking like text, as they do in pre-Web and Web hypertexts. You essentially get to make a Web citation or reference, not build a hypertext, by tweeting. And hypertext links have gotten more abstruse in this third, post-Web generation! When you’re on Twitter, you’re supposed to be consuming that linear feed — automatically produced for you in the same way that birds feed their young — not clicking away of your own volition to see what the Web has to offer and exploring a network of media.

The creative bots of Twitter (while they last) do subvert the standard orientation of the platform in interesting ways. But even good old-fashioned hypertext is reined in by post-Web systems. If there’s no bright post-post-Web available, I’m willing to keep making a blog post now and then, and am glad to keep making Web projects — some of which people can use as sort of free/libre/open-source 3rd-generation platforms, if they like.