The Coding and Execution of the Author

Nick Montfort

[Published in print in The Cybertext Yearbook 2002-2003, eds. Markku Eskelinen & Raine Koskimaa, pp. 201-217, Research Centre for Contemporary Culture, University of Jyväskylä, Finland, 2003. This text may not correspond exactly with the printed text; the latter should be considered definitive.]

One seldom-discussed cybertextual typology is offered by Espen Aarseth in chapter 6 of Cybertext, "The Cyborg Author: Problems of Automated Poetics." As someone who writes using computers—and who writes entire works whose course is influenced by this use of computers—I find that this neglected topic in cybertextual studies demands my attention not only as a theorist and a critic but as an author. Am I crediting my computer properly when I attribute the authorship of works that my computer helped to create? Should I give myself and my computer a "cyborg name" (like a "DJ name") for just this purpose? When I write or use a new program, or replace my computer with a faster one, am I a new cyborg and thus a different author? Should my computer have a say in the publishing and promotion of works that we authored together? And should other important and inspirational mechanisms—my CD player, for instance, and my bookshelves—get cut in on the action as well?

The phrase "cyborg author" may not have a long history, but it was used as early as 1994, in a paper by David Wall. He conceptualized the World Wide Web as a cyborg author. Wall discussed the Web more as a publishing system (or "author of community") than as an author in the sense in which the term is usually used. While the concept of the cyborg author seems difficult to discuss in any formal sense, there are clearly reasons to be interested in the authorship of texts by humans and computers working together. The two difficulties that immediately present themselves regarding the cyborg author concept are the nature of the cyborg (and, more broadly, the new sorts of relationships humans and computers might have with one another as works are authored) and the nature of authorship. I will look at these briefly and also give a short account of my own experience writing 2002: A Palindrome Story in 2002 Words with William Gillespie and with the assistance of a suite of computer programs. Then, I will turn to consider more critically a recent set of poems, Static Void: Fifty-Nine Sonnets, and a Fragment, which was created by two human authors using an open source computer program they devised. I will close by trying to offer, not a new typology for human-computer co-authorship, but a model for this co-authorial process, one which is more sensitive to the actual practice of electronic literary composition and is particularly informed by the work of poets using procedures. The idea of a computer co-author, and the formal nature of the computer, certainly calls for a formal idea of co-authorship. While such a description cannot capture all the nuances of the co-authorial process, it can point out features of particular interest and help us understand the roles of the different participants more fully.

Cyborgs, Simians, and 100 Typewriters

The concept of the "cyborg" as it has been used in the more specific discussion of cyborg authorship seems chimerical in many ways—as we might expect from a cyborg—and so I will admit up front that where I am really heading is into a discussion of collaborative human-computer authorship, which can also be called human-computer co-authorship. A drive-by of the cyborg is worthwhile on the way, however.

The term "cyborg" was coined by Manfred E. Clynes, who used it in an article he wrote with Nathan S. Kline for Astronautics in September 1960. The cyborg, which the two described as an "exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously" (30-31), was imagined by Clynes and Kline for the express purpose of space travel. The idea was that people should make themselves into cyborgs rather than trying to carry every bit of an earth-like environment along with them on trips into space. Almost all the cyborg enhancements imagined in that original paper involved mechanisms for injecting drugs automatically in response to some change that was detected. Kline was a pharmacologist, and the first creature to be labeled a cyborg was a white lab mouse trailing an apparatus that continually pushed drugs into it. While the continual consumption of pharmaceuticals is essential to certain types of creative writing (and, I suspect, to certain types of literary creation for the computer), this is hardly what most of us have in mind when "cyborg author" is mentioned.

Donna Haraway's discussion of the cyborg as a boundary-blurring, forward-looking figure and N. Katherine Hayles's view of the cyborg as a posthuman entity dominate the academic consideration of the cyborg today. Such views are not looked upon kindly by musician and scientist Clynes, who said in a 1995 interview that "parenthetically, the idea of a cyborg in no way implies an it. It's a he or a she. It is either a male or female cyborg; it's not an it. It's an absurd mistake" (48). Clynes, who called the movie Terminator a "travesty" of his "real scientific concept," noted that the purpose of the cyborg was to

make it possible to exist, qua man, as man, not changing his nature, his human nature that evolved here. Not to change that but to simply allow him to make use of his faculties, without having to waste his energies on adjusting the living functions necessary for the maintenance of life. (47)

Later in the interview, Clynes seemed to contradict his insistence that human nature would not change for the cyborg: "Homo sapiens, when he puts on a pair of glasses, has already changed. When he rides a bicycle he virtually has become a cyborg" (49). The idea that wearing glasses and riding bicycles makes us into cyborgs—an idea quite consistent with cybernetics and with how the boundary of our awareness must change for these internalized devices to be useful to us—certainly complicates the idea of cyborg authorship. And it gets worse, as one author wrote soon after the cyborg concept coalesced:

A man with a wooden leg is a cyborg. So is a man in an iron lung. More loosely, a steam-shovel operator or an airplane pilot is a cyborg. As I type this page I am a cybernetic organism, just as you are when you take the pen in hand to sign a check. (Halacy 1965, 13)

Clearly "cyborg" blurs too many boundaries (or draws too large of a boundary) for those concerned with authorship and computers. But whatever the difficulties with the cyborg concept, and leaving aside for now the question of whether (or rather, how) cyborg existence might change human nature, there is one interesting feature of the cyborg. In the archetypal case one becomes a cyborg consciously in order to explore a new environment. This is a piece of the cyborg worth taking along into the discussion of human-computer co-authorship.

Is There an Author in This Classpath?

Aarseth points out a few ways in which the notion of "author" has recently been complicated, noting, for instance, that "today's complex media productions are seldom, if ever, run by a single 'man behind the curtain'" (165). He does not take up the rather different way in which the concept has been problematized by Barthes in "The Death of the Author" and by the New Historicists. In this view, culture "writes" authors (even the supposedly lone author not working on a complex multimedia production) in many ways, and it is the context of a certain individual's writing, not just the particular individual (or cyborg) with the pen or keyboard, that shapes a text. Indeed, the capital-A Author as a pure voice apart from culture and history is no longer a tenable concept.

Italo Calvino anticipated Barthes's main point about the death of the author and the birth of the reader—whom Barthes called "the space on which all the quotations that make up a writing are inscribed without any of them being lost" (148)—in a speech in 1967, saying "[o]nce we have dismantled and reassembled the process of literary composition, the decisive moment of literary life will be that of reading" (15). He added, "What will vanish is the figure of the author" (16). Calvino noted (giving the epigraph to Cybertext) that literature would only have a "poetic result" due to "the particular effect of one of these permutations on a man endowed with a consciousness and an unconscious, that is, an empirical and historical man. It will be the shock that occurs only if the writing machine is surrounded by the hidden ghosts of the individual and of his society." Based on this, an appropriate role for a human in human-computer co-authorship would certainly be "shock testing," since computer authors cannot be expected (certainly not at this stage) to be surrounded by such individual and cultural ghosts and to react in the same way. The human should be that author who also reads, reflects, and revises. Calvino and his fellow members of the OuLiPo pioneered authorship using formal constraints and procedures. The particular project of co-authorship involving humans and digital computers is one that Calvino contributed to as he wrote his story "The Burning of the Abominable House." It was also taken up by the ALAMO, a group founded by the OuLiPo's Paul Braffort and Jacques Roubaud in 1981. One of the OuLiPo's most famous procedures for the alteration of the text will be taken up later.

Despite Calvino's assurance that the combinatorial game of literature could easily be played by a computer, some wonder whether a computer can truly be an author. Since questions of authorship are important ones for the courts, perhaps it is understandable that one United States legal scholar has specifically taken up the question at hand in a paper entitled "Can a Computer be an Author? Copyright Aspects of Artificial Intelligence" (Butler 1982). According to an 1884 U.S. court case, an author is "he to whom anything owes its origin; originator; maker; one who completes a work of science or literature." Authorship, in U.S. law, must involve the expression of an idea. Yet someone who modifies a text can be an author. An author must make some original contribution, but this standard of originality is very low—only truly trivial variations would be considered insufficient for a determination of authorship (729). Thus, an editor who corrects a text is legally an author of the resulting text. Butler considered both the automatic creation of texts and the automatic creation of software by computer programs. This covers a case not highlighted in Aarseth's typology: a computer and human working together may generate not just texts but cybertexts. Thus, a cyborg author might write a computer program that then generates text, or might write a computer program that writes another computer program that generates text. Without getting too dizzy about these possibilities, the important thing is simply to realize that when we discuss human-computer co-authorship we are not restricted to the case of the authorship of texts.

The law would seem to hold that computers cannot be authors since their being authors would require that they have certain legal responsibilities and status. In considering whether a computer or "man-machine hybrid" might legally be designated as an author, Butler asked, "How could a machine be a real party in interest in a lawsuit?" (739) Machines are already parties in interest in lawsuits every day: such machines are called "corporations" and have been given the legal status of individuals. In fact, it is difficult to imagine that modern copyright law could possibly exist for the benefit of human beings rather than for the enrichment of such machines. Butler admits that "[p]ublic policy considerations support the existence of trusts and corporations" but he claims (without naming any specific problems) that the problems with giving computers the status of authors "outweigh any advantages of this alternative by a wide margin" (739-40). If we consider that making a non-trivial modification of a text is authorship, it certainly seems that (if anyone wanted to provide for the other implications) computers could legally be named as authors.

When considering a particular situation of human-computer co-authorship, the most interesting aspect of the legal discussion is that modifying a text, and not actually generating symbols for the first time, suffices to make one an author. This idea can fit very nicely into a semiotic concept of human-computer co-authorship, in which the production and manipulation of signifiers (rather than the expression of an idea) is the main concern.

Deep Speed and 2002

In November 2000 William Gillespie and I began writing 2002: A Palindrome Story in 2002 Words. We anticipated from the beginning that we would write with a computer co-author, whom we named "Deep Speed." In fact, we did employ computer software as we wrote, and the computer had a larger role in our work than simply facilitating email exchanges between us (we only met to write in the same location for a few days) and serving to do the usual word processing. 2002 was published in a limited edition for New Year's Day, 2002; it was published on the Web on 20-02-2002; and it was printed as a trade book, designed by Ingrid Ankerson and featuring illustrations by Shelley Jackson, in October 2002. When Gillespie and I were finished with the text, neither of us felt that Deep Speed had really been a co-author. I will try to describe briefly how we wrote 2002, why it became clear to us that Deep Speed was not a co-author, and what might have made Deep Speed a true co-author. Although Gillespie and I have spoken and corresponded about this topic, I can of course only claim to represent my own views here. I do not know, for instance, if Gillespie would really be willing to entertain the notion of a computer co-author if the computer had participated in a different way in our process.

Deep Speed comprised several Web-based perl programs written by Gillespie, several command-line perl programs that I wrote (including pvp, the Palindrome Verification Program, and numerous ad hoc programs that I used to help me in working with particular fragments), a text file called WORDS.TXT, WordNet, and a version of the American Heritage Dictionary that had surprisingly flexible search capabilities. (We also consulted the OED and other online dictionaries, of course.) At the very beginning of our work another individual wrote a program that searched for reversible English text on any Web page that one might specify. Use of this program brought perhaps two short texts to our attention, and they were worked into the draft, but the program was only available to us for a short time.

We wrote 2002 by drafting "reversible" fragments (English-language texts which, when reversed letter-for-letter, could be repunctuated into other English-language texts), emailing these to each other, revising them while maintaining their reversibility, and then repeating the process. A particular text one of us wrote might seem nonsensical to the other, so much so that it did not seem it could be revised into a sensible text, and in this case we might not bother rewriting it. We decided that certain events should happen and we then wrote (or tried to write) reversible texts that described them happening. We also more or less discovered certain very appropriate events in texts that we wrote. Gillespie devised several types of fine food and drink that could be placed on both sides of the palindrome, such as "Regal Lager," "Retro Porter," and "wet stew;" I contributed "broiled deli orb" and "Red Ice Cider." Such short phrases often inspired fragments and episodes. We also discussed what characters would be involved and what would happen to them, and we talked and emailed about the overall tone and style of the palindrome and what our goals were on higher levels. At a certain point we arranged the fragments we had into a single text and took turns revising that. Throughout the process, we used our verification programs to ensure that we were actually writing reversible texts—repairing any problems we found—and we did searches of computer dictionaries. I also wrote programs to find all the palindromic words in a large English word list and to search for other words that were particularly helpful at certain points.
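The verification step is simple to state. Here is a minimal sketch in Python of the core check a tool like pvp performs (our actual programs were written in perl and are not reproduced here), assuming the usual convention that only letters count and that case, spacing, and punctuation are free:

  import re

  def normalize(text):
      # Only letters count under the palindrome constraint; case,
      # spacing, and punctuation are free to vary.
      return re.sub(r'[^a-z]', '', text.lower())

  def is_palindrome(text):
      s = normalize(text)
      return s == s[::-1]

  def reversal(fragment):
      # The letter-for-letter reversal of a fragment: raw material to be
      # repunctuated into English for the other side of the story.
      return normalize(fragment)[::-1]

  # A few of the reversible delicacies mentioned above:
  for phrase in ("Regal Lager", "Retro Porter", "wet stew",
                 "broiled deli orb", "Red Ice Cider"):
      assert is_palindrome(phrase)

Verifying the assembled draft is the same check applied to the whole text at once, which is what allowed us to repair problems at any point in the process.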

It was probably easy for William and me to decide that Deep Speed was not a co-author because both of us did actually have a co-author to whom Deep Speed could be compared. Deep Speed was not offering new ideas about what characters would be involved, what events would happen in the narrative, how the incidents would be arranged, what sorts of borderline English syntax were acceptable and what sorts were not, or even how to punctuate the text. Deep Speed was not rewriting fragments of eyebrow-raising nonsense to move them toward the edge of meaning or gently suggesting that certain reversible nonsense should really be left out. These were the tasks I considered the most difficult. Although we might use Deep Speed to determine whether a certain phrase could be reversed to a series of English words, our software did not generate text which we might rewrite or directly incorporate. Making numerous first-draft textual contributions (offering fragments to revise and include) would have gone much further toward computer co-authorship. Weighing in on other sorts of authorial discussions, and determining what was sensible and what was not, was something I would have also expected from a co-author, although I did not ever expect Deep Speed to perform these tasks, even if it could have served as something of an oracle at times.

With restrictive rather than generative procedures this point is less clear, but it is worth noting: if there was a non-human co-author of 2002, it was almost certainly the palindrome constraint itself, which led us to certain discoveries about the names and natures of characters and how the story should progress. By bounding the set of possible texts, only those which read the same forwards and backwards being permitted, the palindrome determined a far smaller set of possible 2002-word stories than in the general case; from this set of stories Gillespie and I chose one.

Gnoetry and Static Void

"What would be the style of a literary automaton? I believe that its true vocation would be for classicism. The test of a poetic-electronic machine would be its ability to produce traditional works, poems with closed metrical forms ..."
—Italo Calvino (12-13)

Static Void: Fifty-Nine Sonnets, and a Fragment is authored, the title page indicates, by "Jon Trowbridge, Eric Elshtain, and the machine." This makes Static Void a rare work in which the computer is actually credited as an author. The book was written using the pre-release open-source software Gnoetry, which was written by the two human authors. Gnoetry took as input thirteen Project Gutenberg texts: the King James Bible and works by Conrad, Dante, Dickens, Dreiser, Joyce, Mary Shelley, Stevenson, Thoreau, Tolstoy, Thorstein Veblen, and H.G. Wells. The words "static void," incidentally, are used in Java to declare that a method belongs to the class itself rather than to any particular object, and that it returns no value.

It is important to note several shortcomings of Static Void. It is doggerel. It is made of English words, but the lines are so strained syntactically that they can seldom be imagined as felicitous English. The prosody is just as bad. There is no enjambment at all in any of the poems; each line ends with punctuation and is its own sentence. (This is an acknowledged limitation of the current Gnoetry code; the authors presumably hope that a future version will not have this limitation.) Despite the rigor that pervades the poems, some lines do not scan as pentameter—e.g., "Oh, I need no man, neither raising up?" (line 11, 45), which of course also is not sensible or grammatical—and this is due to another noted defect in Gnoetry. The unspectacular rhyme is schematic, with the occasional defect of the sort one might expect: e.g., the word "record" appears so that syntactically it must be a noun, with lexical stress on the first syllable, but it is in a place where the rhyme scheme requires it to be pronounced as the verb "record," with different lexical stress. Some rather obvious additional rules (except in very unusual cases, a quatrain should rhyme abab where a≠b) could have been applied to prevent some of the failings, although the syntactical and semantic problems seem more profound. Reading the book is like actually getting to listen to HAL from 2001 sing "Daisy"; one may be amazed and pleased, but not for any aesthetic reason.
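The abab rule just mentioned is simple enough to state mechanically. Here is a sketch of such a filter in Python, assuming some rhyme_class function is available that maps a word to its rhyme class (a pronouncing-dictionary lookup could approximate one; the helper is hypothetical):

  def quatrain_rhymes_abab(lines, rhyme_class):
      # Require an abab quatrain in which the a and b classes differ,
      # ruling out schemes like aaaa that only rhyme trivially.
      last_words = [line.rstrip(' .,;:!?"').split()[-1] for line in lines]
      a1, b1, a2, b2 = (rhyme_class(w) for w in last_words)
      return a1 == a2 and b1 == b2 and a1 != b1

A generate-and-test loop could simply discard candidate quatrains that fail such a filter, though, as noted, the syntactical and semantic problems would remain.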

Here, for example, is sonnet 52:

The engine was a very narrow sluice.
The while among the foliage overhead.
The street there as the custom to induce.
They had allowed the chorus girls ahead.
The water, sighed and opened with a loud.
His sorrow, yet he managed did me treads.
Yes, he rejoined the voice of it, allowed.
Extorted with a curse upon the shreds.
Said anything he prattles still, observes.
I never bargained to begin his crook.
That'd do the higher mental state deserves.
As giving them away, death overtook.
I often wondered if he was derived.
She think themselves so long before deprived.

How, then, should one bother to read such a book? It clearly would be unrewarding to read the contents as poems fit for human consumption, or even as parodies of the sonnet form. And if one believes that none of the standards that apply to human poetry should apply to poems of this sort, perhaps it would be better to offer the book to one's toaster instead of reading it oneself. In speaking of a different sense of literature, Ted Nelson declared in his 1981 Literary Machines that, despite the seemingly quirky way in which fields of research advance, "literature is debugged." With this in mind, and as a way of trying to improve open source software, I will consider Static Void as the output from a process which ultimately may result in interesting literature but which has yet to be debugged. In fact, the book seems to be offered more as output to aid in debugging than as a book of poems. It seems likely that Trowbridge and Elshtain deliberately left the poems Gnoetry produced unedited so that they would give an accurate impression of the type of output other users might expect from their version 0.1 program. Seen in this light, there was good reason for the authors to present—and there is good reason for others to consider—these outputs.

One thing about Static Void which may not be evident from consideration of a single poem is that a voice, or the beginnings of one, manifests itself over the course of the sonnets. Can this be just a remnant of some authorial voice from the source texts? ("I often wondered if he was derived.") Since the different sonnets are composed using numerous different source texts, for consistent features of a voice to manifest themselves requires something new that comes from the combination of these texts and the particular way in which they are being combined. The syntactical problems are varied but one notices consistencies throughout in the way parts of speech are impossibly shunted into place; that consistency begins to invite interest. Perhaps this is the same crash-fetish interest we have in seeing ELIZA/DOCTOR break down as she continues her questioning, but perhaps these systematic ruptures help us to actually see language itself (not just the failing program) in a different way, as early text-altering programs like TRAVESTY may have done. It is difficult to watch English being broken upon the rack of the sonnet by a computer program—and the interrogation conducted in this book does seem, at times, to have simply caused the early death of the prisoner—but the idea itself holds out hope that something intriguing may be tortured out of language by such tactics. To use a less gruesome metaphor, the arrangements of words here can be seen as an alien terrain, or at least a hint of one possible new territory that, cyborg-like, human-computer co-authors might explore.

On a small scale, in terms of generating interesting and valid English, the current process does manage to work at times. Taken out of their contexts, there are good lines, some of which express ideas not in the source texts:

The stranger overcame a hemisphere. (line 13, 24; line 3, 32)

I saw a figure swimming upward rapt. (line 13, 34)

Do with the language movement with a rod. (line 1, 35)

Here is a pair of lines that surprise:

Turn up against the holy day in flesh.
A hundred gleaming windows, their cafes. (lines 9-10, 36)

The first of these turns from abstract to concrete, the second from generic to specific. One is asked by the first line to turn "in flesh" (a seemingly tautological suggestion, but perhaps it means "do not turn metaphorically, but literally"?) against the holy day. The architectural image that follows seems apt; it provides a glorious secular scene and might even be read as a suggestion that one go to eat on a day of fasting. Here the machine voice seems to be speaking poetry, yet with an alluring accent, in a way human poets do not speak. True, mining for such lines is, as Charles O. Hartman writes, "treating the computer as a retarded or psychotic human brain" that might provide "flashes (however far apart) of ordinary or extraordinary lucidity" (82). But if some lines do work, it can only help to identify why they do.

The final difficulty is that no whole quatrain, much less a whole sonnet, holds together semantically at all. If computers are trained on statistical features below the sentence and line level, using models that consider only a small window of context, how can we expect any sense to emerge at the level above the line? Rhyme is not reason. But there is something specific to suggest, based on this difficulty: A program that generated syntactical English could be trained on features above the sentence level; additional ways of generating an overall semantic structure (perhaps even ways that are not statistical) within which the existing statistical methods could be used might lead to the creation of poems that are stronger overall.
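The limitation is easy to see in any model with a small window. The following bigram sketch in Python illustrates the general problem; it is not Gnoetry's actual algorithm, and the toy corpus is invented:

  import random
  from collections import defaultdict

  def train(words):
      # Remember each word's observed successors; nothing longer than
      # a two-word window survives training.
      model = defaultdict(list)
      for w1, w2 in zip(words, words[1:]):
          model[w1].append(w2)
      return model

  def babble(model, start, length):
      out = [start]
      while len(out) < length and model[out[-1]]:
          out.append(random.choice(model[out[-1]]))
      return " ".join(out)

  corpus = "the water sighed and opened and the voice of the water rose".split()
  print(babble(train(corpus), "the", 8))

Every adjacent pair in the output is attested in the training text, so each two-word window looks like English; nothing in the model even represents the sense of a whole line, let alone a quatrain.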

A Post-Cyborg Model for Human-Computer Co-Authorship

I will mention two other sorts of human-computer work as I begin the discussion of how to model and understand human-computer co-authorship. First of all, there is grep poetry of the sort produced by Loss Pequeño Glazier and others, as described by Glazier in his book Digital Poetics (96-103). The process involved here is fairly simple: one runs the program "grep" on a text file, specifying a pattern. This searches the file for occurrences of the pattern (the name abbreviates the ed command g/re/p: globally search for a regular expression and print) and returns only those lines that contain the pattern. This is, as Glazier writes, "a formal method or program for parsing or altering machine-readable text according to a fairly basic procedure" (99). Of course a grep could be carried out by a person rather than an automatic, electronic computer. In fact one geek might request that another look through a printed list to find some information by asking "could you vgrep [visually grep] the movie time for me?" It is the formal procedure that makes grep a "computer" in the sense we are concerned with here. Thus, we can also include a famous technique of the OuLiPo, the n+7 substitution rule offered by Jean Lescure for transforming a text: replace every noun with the seventh noun ahead of it in the dictionary. (The OuLiPo also made innovations in using constraint; one particular example is Georges Perec's "Le Grand Palindrome," which was an inspiration for 2002. But this way of altering a text systematically, rather than limiting what can be written, relates to computer authorship in particularly interesting ways.) It actually does not matter whether a person or a computer, with approximately human-like ability to distinguish parts of speech, carries out the n+7 procedure. A human author using the procedure still is co-authoring with a computer, even if the procedure is not carried out automatically by an electronic machine. The human author's contribution to this n+7 authorship process is the selection of the source text.
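As a concrete illustration, here is a toy rendering of the n+7 procedure in Python (my own sketch; the is_noun test is left as an assumed helper, since distinguishing parts of speech is precisely the step a human performs easily and a machine only approximates):

  def n_plus_7(words, nouns, is_noun, n=7):
      # Replace each noun with the noun n places later in an
      # alphabetized dictionary noun list, wrapping at the end.
      position = {w: i for i, w in enumerate(nouns)}
      return [nouns[(position[w] + n) % len(nouns)]
              if is_noun(w) and w in position else w
              for w in words]

Which dictionary supplies the noun list matters along with the source text: different dictionaries yield different results, another sense in which the procedure itself participates in authorship.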

Here is Aarseth's categorization of several works in his table of "Cyborg-Author Combinations" (135), along with my own categorization of Glazier's grep process, the use of the n+7 procedure, and the writing of Static Void as I understand it:

  Examples                                     Preprocessing  Coprocessing  Postprocessing

  Scene from Manhattan Murder Mystery                X             X
  Tale-Spin                                          X
  Schank's version of Tale-Spin                      X                           X
  Racter                                             X             X
  The Policeman's Beard (stories and poems)          X                           X
  The Policeman's Beard (dialogue)                   X             X             X

  Glazier's greps                                    X
  n+7                                                X
  Static Void                                        X                           X

Preprocessing, "in which the machine is programmed, configured, and loaded by the human" (135), is a feature of every work here and, it seems, of every possible work. Formal procedures, even if they might be discovered in nature, have to be selected by humans so that they can be used to generate texts. This typology, then, actually divides works into only four categories, and it does not capture anything about how the roles of human and computer may coincide or differ during the two stages. Tale-Spin, which generates every sentence from scratch, is in the same category as Glazier's greps, which selectively subtract from human-written texts.

It seems an improvement to consider human-computer authorship as a process in which the human and computer may both make moves and which has an outcome. (This type of process model can be seen as a game in the sense in which the term is used in game theory, although since we are not considering—yet—optimizing the outcome in terms of the players' utilities, we will leave aside the applications of game theory beyond the level of definition.) That the computer has been programmed and configured by a person is taken for granted; the first move worth consideration is the first output of a draft text. The final state of this text (representing the work that is in some way published) is the outcome of the game. There are three basic types of moves: a player may generate draft text (G), remove draft text (R), or provide the other player with some instructions or intermediate text (I). Although generating some text and removing some other text could be represented with a G move followed by R (or vice versa), for the sake of clarity I will also note the move of doing both, altering the draft text (A).

The greps can be described quite well in this scheme, in three moves:

Glazier     G   (Provides the source text.)
Glazier     I   (Gives the computer the regular expression to match.)
Computer    R   (Non-matching lines are removed.)

Note that it does not matter that someone else may have written the source text originally. That matter is outside the system for this particular authorial game; we are only considering the relationship between Glazier and grep. As grep usually works from the command line, a regular expression is specified after a file has been prepared to use as the source text. However, it is possible to fix a regular expression first (say, by creating an alias that has that regular expression in it) and then select a source text (by then getting a file ready and running the alias on it). To model this, we would exchange the first and second steps.

We can be more specific by indicating that at each move a co-author may be restricted to operating on a certain level. In this first consideration of a model of this sort, as a crude approximation, we will ignore the existence of such aspects as prosody, diction, and sense and simply consider the lexical level. Alterations may be made to:

  L Letters
  W Words
  S Lines or sentences
  C Chapters or larger units (whole story, poem)

L indicates that individual letters as well as punctuation marks may be changed (for instance, to alter capitalization or correct a misspelling). W indicates that whole words are potentially being added or removed. S indicates the same for whole sentences. Although as sketched out here these are not actually orthogonal (one might create a whole new sentence by adding words and then changing the punctuation), for this first approximation I hope it will suffice to say that an author may generate or remove text from the draft at any of these levels during a given turn. It might seem natural to say that an author who can make a change at a high level like S can also make a change at the L and W levels. However, this does not reflect the nature of co-authorship. For instance, allowing grep to remove lines (S) does not imply that it can remove a letter or a word here and there if it likes; the program can only remove lines. Similarly, a conversational program like ELIZA/DOCTOR can only communicate at level S, in utterances, however short; ELIZA/DOCTOR cannot produce less than a line. It is possible to allow changes at multiple levels: R-L and R-C together indicate that removal of individual letters or of whole chapters (but not of words and sentences) is allowed. For clarity, and since the operations might be allowed in any order, this can be written R-LC.

These particular divisions between levels are, if not exactly arbitrary, certainly not the only possible ones. Looking beyond lexical divisions, one could have a "phrase" level between word and sentence/line, have a "foot" level for metrical works, or separate out punctuation and capitalization from actually changing letters. Indeed, more comprehensive frameworks for the categorization of linguistic objects in literature have been offered that include semantic, syntactical, and sound-based levels (Bénabou 1986, Gillespie 1996-2002). The purely lexical levels offered here have the advantage of all being sensible and simply defined (based on whitespace and simple tokenization rules) for any string of characters, making the initial discussion easier. The system should still be extensible to other sorts of granularities, however. I also have not considered during this first description of this scheme how the levels might apply to providing instructions; the I move will be left without any modifiers for the present, even though it could involve anything from providing a large corpus of text for statistical training to the simple setting of a true/false switch.

Despite its limitations this scheme does allow us to describe Glazier's greps more precisely as:

  Glazier     G-LWSC
  Glazier     I
  Computer    R-S
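Such traces are also easy to write down and manipulate mechanically. Here is a minimal encoding sketch in Python (my own illustrative notation, following the scheme above):

  from dataclasses import dataclass

  LEVEL_ORDER = "LWSC"  # Letters, Words, Sentences/lines, Chapters

  @dataclass
  class Move:
      player: str
      kind: str         # 'G' generate, 'R' remove, 'I' instruct, 'A' alter
      levels: str = ""  # subset of LEVEL_ORDER; empty for an I move

      def __str__(self):
          suffix = ("-" + "".join(l for l in LEVEL_ORDER if l in self.levels)
                    if self.levels else "")
          return f"{self.player:<12}{self.kind}{suffix}"

  # Glazier's greps, exactly as in the trace above:
  grep_trace = [Move("Glazier", "G", "LWSC"),
                Move("Glazier", "I"),
                Move("Computer", "R", "S")]

  for move in grep_trace:
      print(move)

Printing the list reproduces the three-line trace; exchanging the first two moves models the alias-first variation described above.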

Omitting the curious case of the mixup in Manhattan Murder Mystery, which perhaps involved some randomization but which does not seem well-enough defined as a formal procedure for consideration here, the other items Aarseth considered can all be more precisely and helpfully described with this scheme:

  Tale-Spin   G-LWSC

  Schank's version of Tale-Spin:
     Tale-Spin   G-LWSC
     Schank      R-C

  Racter:
     Racter      G-LWS
     Human       G-LWS
     [repeat both moves n times]

  The Policeman's Beard (stories and poems):
     Racter      G-LWSC
     Chamberlain R-SC

  The Policeman's Beard (dialogue):
     Racter      G-LWS
     Chamberlain G-LWS
     [repeat both moves n times]
     Chamberlain R-C

For the n+7 procedure:

  Human       G-LWSC
  n+7         A-W

There is a possible additional step, since the human may decide to select only the best sections from the text that remains (a closing R move by the human).

For Static Void, presumably the process was something like:

  Gnoetry     G-LWSC
  Humans      I 
  [repeat both]
  Humans      R-C

In some cases, when the procedure that was used is not documented, the categorization a critic or theorist offers can only be a guess, but all actual processes can be described this way. Various objections might be raised; for instance, who is to say what is a draft text and what is an instruction, since a draft text that one player offers might be read, considered as guidance, and then deleted in its entirety? The scheme does not capture the extent of changes that are made, either. "Human A-LWS" could refer to a few minor changes here and there or what is essentially a complete rewrite. In fact even the extent of the lexical changes could not tell us how much the semantics of a text have changed. This is a defect of considering only the lexical level, which is easiest to deal with but limited in terms of its ability to explain things like which author might be most responsible for different aspects of a work.

Despite the shortcomings of this model, there still seems to be a great deal of descriptive power in it, and it points out some interesting features of human-computer collaborations. It distinguishes text-generating co-authors (or those who select text from elsewhere for processing) from those who only delete text. Also, one can use this model to note one type of move that is lacking in all of the human-computer systems considered so far: except for the human deletions that end most of these games, the computer has the last word. Hartman, a preeminent computer collaborator, found (after restricting himself for a while to performing only such deletions at the end of the process) that he could treat the computer's output as a first draft and alter it as he saw fit, rather than simply playing the role of "grep" himself and making deletions at the line or sentence level (83-87). In writing, for instance, "Seventy-Six Assertions and Sixty-Three Questions," the last move made was:

  Hartman     A-LW

Yet the computer's role in the generation of text was certainly not trivial; its participation helped Hartman (as in the case of the original cyborg) think about language in new ways and explore new territories.

There is something of a bonus: this perspective on human-computer co-authorship not only applies to a single human and a computer. It also applies to human-human co-authorship, and it can be used to accommodate an arbitrarily large number of authors. I am sure it would be impossible for Gillespie and me to list all the "moves" of the authorship process for 2002, and one would have to consider that our remote collaboration involved diverging and converging games running at times in parallel, but a trace of the whole authorship process would show, even using a purely lexical model like this one, that Deep Speed did not play the same sort of role that Gillespie and I did. It would also show that we two human authors contributed about equally. Various types of editorial and less equal co-authorial relationships can also be modeled in this way. My hope is that by employing this model of co-authorship—and, in the future, by employing a more refined model that looks beyond the lexical level—we can better understand in what ways computers and humans can work well together and how people author texts together. The problem of human-computer co-authorship is part of the general question of co-authorship, a question that remains open. I hope my bizarre angle of approach to it will not be the only one.

Works Cited
