The HoMT workshop at the University of Pennsylvania is a place for presenting work in progress, and this is such work. In the text below, I have omitted references, and mention of "the handout" doesn't mean anything here, except that I have linked to things on the handout that exist on the Web. If you'd like to correspond about the topic and correct or inform me about the use of print-based interfaces, please contact me: nickm at this domain.
Update, 1 March 2004: I made several changes, thanks to comments from Tom Van Vleck, whose work I cite in my talk. Update, 20 August 2004: Further work on this topic has resulted in "Continuous Paper: Print Interfaces and Early Computer Writing," a talk given at ISEA.
My topic today is what some call "electronic writing," although "computer writing" is also a reasonable term for it. "Electronic writing" makes this sound a bit like a quadraphonic hi-fi, while "computer writing" is something you might expect to find in PC Magazine — the helpful advice column about defragmenting your hard disk and such. But whatever term we pick or avoid, what I mean to discuss is the use of the general-purpose computer for the specific purpose of composing and editing texts.
I'm not going to dwell on the use of the computer as a typesetting system, or on the importation of print texts into the computer, the digital conversion of books into e-books. I'm more interested in technologies that are designed for the composition of texts and in ways of writing that, although they are certainly based on earlier writing technologies, have at least a few striking, novel elements to them. For instance, the computer has given rise to interactive textual experiences — chatterbots, also called conversational characters, are one early example; interactive fiction is another — where the system provides for a textual exchange: I can read the output, type something in response, and find that the computer program replies in a sometimes provocative, sometimes meaningful way. I'll offer some details on this later, but programs like this, when they provide literary experiences, fall into an interesting new category, "electronic literature." If we're to understand them very deeply, which I hope we will, I think we should understand both their formal workings (approaching them as computer scientists, aware of how computer programs work) and their material history — how these text machines were physically realized and how people actually programmed them and interacted with them. I consider electronic literature to lie in the category "new media," which encompasses all creative and communicative uses of the computer, textual, visual, musical, or otherwise. Just to be clear, I don't use the term "new media" to refer to things like emerging media, media in transition, multimedia, intermedia, or other interesting categories that are sometimes referred to by this name, but rather, to indicate computer-based works specifically, and to mean about the same thing that Janet Murray calls "the digital medium."
One reason that the material history of texts is of importance here is that many theories of new media, and many perspectives on computer writing, assume that the screen is not just an important part of today's computing experience, but an essential part of creative and communicative computing in all cases, a fixture, perhaps even a basis, for new media. I find this curious, since the screen is a relative newcomer and early interaction with computers happened largely on paper — on paper tape, punched cards, and on print terminals and teletypewriters, such as those of the brand name Teletype, which had a scroll-like supply of continuous paper for printing output and input both. (It is the Teletype as a computer interface that I'll focus on today.) By looking back to early new media and examining the role of paper — the pun, I hate to say it, is intentional — I aim to show how history contradicts the "screen essentialist" assumption about computing that many new media theorists have made.
The early use of the computer for writing generally did not involve a screen at all, so arguments that assume the screen is essential to new media will have to be discarded or revised, probably in one of two ways. Either we will have to decide that the interface, and the material experience of computing, and of writing on the computer, does not really matter at all — a conclusion that is probably unpalatable not only for this group but also for the whole discipline of human-computer interaction — or we will have to look carefully at how earlier interfaces and earlier materials — not things like phosphors and glass, but things like ink and paper — shaped the notions of computing in the last century, how they helped writers and scientists to figure out, for instance, how a device called an Electronic Numerical Integrator And Calculator could give rise to systems that would process words as well as numbers. And at that point, when we should be getting to the exciting conclusion, I'll stop pretending to have any real answers and will instead raise several questions that I hope we can discuss.
In fact, I'm not a scholar of the Teletype or of early human-computer interaction, so in going back to the early era of computing to discuss these topics, most of what I can do is to point out some omissions that I think are particularly serious; not only can I not provide conclusions about how we should revise our views of new media, I can't really provide, as yet, a thorough explanation of what human-computer interaction was like back when word processing, humanities computing, and early electronic literature were being developed. In Twisty Little Passages, my recent book on interactive fiction, I was much more concerned with the formal nature of interactive fiction and the social and intellectual contexts in which it arose, and while I think that was a reasonable place to start an investigation of interactive fiction, I would like to try to begin a discussion today about how early interfaces like the Teletype may have been significant. What I can do at this point is share a few observations and a bit of what I have discovered in prodding around, talking to others, and reading about human-computer interaction before the screen, or at least before the screen became widespread.
But first I want to talk about how many theorists and critics of new media write of the screen as essential to all new media production and experience. I'd like to start with the writing of Sherry Turkle, who brought a psychoanalytic perspective and a much more ethnographic approach to computing than had previously been seen. In her first book on new media, The Second Self, she treated video games very thoughtfully and actually talked to children and adults who played them, at a time when the discussion of video games was even less thoughtful than it is today. But I want to talk about a system, developed at MIT, that Turkle discusses at length in her second book on new media.
This system is called Eliza, and it ran a script called Doctor that became very famous. Joseph Weizenbaum programmed this system in a language called MAD, using libraries he had written, called SLIP, in the mid-1960s, publishing the first article about it in 1966. Rather than describe the system, I'll point you to the handout, where I've provided a famous excerpt from an Eliza/Doctor transcript, one that Weizenbaum published in his 1966 paper. This shows what a "dialogue" with Eliza is like. (Some source code from a re-implementation of Eliza is also provided; this is meant to stand in for the sort of thing Weizenbaum was writing.) The system impersonated a Rogerian psychotherapist, and while it could certainly be shoved into failure modes that would reveal it as a computer program, it could play its role pretty well if the user was willing to play along. Janet Murray has said that Eliza/Doctor provided the first real "aha" moment for the digital medium, when people allowed themselves to believe that the medium was representing reality; it was a moment analogous to when the Lumière Brothers screened their film of a train entering the station.
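To give a sense of the formal workings at issue here, the following is a minimal Python sketch of the keyword-and-transformation technique Eliza uses. It is not Weizenbaum's MAD-SLIP code, and the rules and reply templates below are illustrative inventions of mine, not drawn from the Doctor script; the point is only the mechanism of matching a keyword pattern and echoing the user's phrase back with pronouns reversed.

```python
import re

# Illustrative first-person/second-person swaps; Weizenbaum's actual
# transformation rules were more elaborate.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Illustrative keyword rules: (pattern, reply template). Replies are
# uppercase, as Eliza's were.
RULES = [
    (re.compile(r"i need (.*)", re.I), "WHY DO YOU NEED {0}?"),
    (re.compile(r"i am (.*)", re.I), "HOW LONG HAVE YOU BEEN {0}?"),
    (re.compile(r"my (.*)", re.I), "TELL ME MORE ABOUT YOUR {0}."),
]
DEFAULT = "PLEASE GO ON."

def reflect(fragment):
    # Swap person so the echoed phrase reads as addressed to the user.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    # Try each keyword rule in order; fall back to a content-free prompt,
    # as Eliza did when no keyword matched.
    for pattern, template in RULES:
        m = pattern.match(statement.strip())
        if m:
            return template.format(reflect(m.group(1)).upper())
    return DEFAULT
```

Even this toy version shows why the illusion could hold: the program never needs to understand anything, only to turn the user's own words back into a plausible therapist's question.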
It's safe to say that a lot of paper was involved in the creation of the system, and, certainly, the early interactors — the people who conversed with Eliza/Doctor — used print terminals to do so. When I was at UC Santa Barbara and showed the version of Eliza that is included on the New Media Reader CD, a woman in the audience pointed out that an important part of her experience of Eliza, via Teletype, was lost in this modern implementation. On the slow-moving Teletype, you could read the beginning of Eliza's statement and have time to guess what the conclusion of that utterance would be, as is the case in human conversation. The versions of Eliza that are available today, although they may be formally authentic, are not authentic to the original physical interface, and — even if we must allow for our modern programs to use screens rather than Teletypes — they do not faithfully emulate important aspects of the early interface.
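That lost temporal quality is easy to restore in software. Here is a small sketch, in Python, of pacing output at roughly the speed of an ASR-33 Teletype, which printed about ten characters per second; the function name and default rate are my own choices, not taken from any existing Eliza implementation.

```python
import sys
import time

def teletype_print(text, cps=10):
    """Print text one character at a time, paced like a print terminal.

    cps: characters per second; 10 approximates an ASR-33 Teletype.
    """
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()  # show each character as it is "struck"
        time.sleep(1.0 / cps)
    sys.stdout.write("\n")
```

Run at ten characters per second, a reply unfolds over several seconds, and the reader does exactly what the woman in Santa Barbara described: anticipates the end of the utterance while it is still being typed.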
In a thorough and incisive book, Sherry Turkle does not mention that Eliza/Doctor was originally experienced on remote typewriters. In fact, although she records that the program was completed in 1966, she pretends that Eliza/Doctor used a screen for its interface. In a footnote, she writes "ELIZA communicated using only uppercase letters. This typographic convention both aped the teletype machine and reassuringly signaled that a computer, not a person, was speaking." Perhaps Turkle intended to refer to a later version of Eliza/Doctor that ran on a personal computer, but as the statement stands, there is a presuppositional failure. The interface to Eliza probably didn't ape a Teletype, but was a Teletype. However, the platform that Eliza/Doctor ran on, the IBM 7094 running the Compatible Time-Sharing System [see The Multicians page about the 7094 for more information], apparently did support upper and lower case, which was very unusual for the mid-1960s. It is possible, although I don't know this, that the use of all-uppercase letters was not a typographical convention but was done for compatibility, so that users who didn't have special terminals that supported mixed case could still access the program. But I have no idea if such devices were attached to MIT's IBM 7094 or if they could have worked at all with the system. Another possibility is that the language Weizenbaum used, MAD with his SLIP libraries, did not support mixed case, although the platform was capable of using it. Maybe Weizenbaum started working on Eliza before mixed case was supported and didn't want to convert everything to mixed case when that capability was added. As I was finishing work on this talk today, I received a forwarded email in which one participant typed a reply entirely in uppercase in order to distinguish his reply from the original writing — the person who wrote in uppercase typed at the top "below in caps..." to signal this.
I think it's safe to say he wasn't trying to imitate a Teletype, but just to be clear about what was a reply and what was the original text. Weizenbaum's article prints Eliza/Doctor replies in all caps and the person's text in mixed case for this very reason, so the all-uppercase utterances may have been programmed that way just for the sake of clarity. Or it's possible that Eliza's uppercase replies were in fact due to a typographical convention that mimicked the Teletype — it's just that the little I know about the mid-1960s context of computing suggests that there may have been other reasons.
[Tom Van Vleck emailed me shortly after my talk to provide a definitive explanation: The 7094 running CTSS did support mixed case, "if you programmed using special modes and libraries. ELIZA did not use them. All Eliza data was stored in single case, in the BCD character set. The computer typed these letters out in upper case. On M33 and M35 Teletypes, input typing would appear in uppercase too, since there was no lowercase. On Selectric terminals such as the IBM 1050 and IBM 2741, user input would appear as lowercase, and system response as uppercase. MAD's basic character string data type was (caseless) BCD. Using the upper/lowercase libraries and '12-bit mode,' one could handle input that was in 12-bit BCD. This would have been a fairly tedious change to ELIZA. ... I think mixed case output was seen as a 'decoration' in those days.. it made data files twice as big and had other mode switching problems with CTSS, and I think it would have actually detracted from the point Joe was trying to make about computer interaction. ... Understand that the term 'Teletype' is a metaphor. Most Teletypes were phased out at MIT by the time Eliza was written. Joe had an IBM 1050 terminal in his office. (The 'tty' string still persists in Unix because the first terminals to work on Multics were Teletype Model 37 terminals, the first available devices to use the ASCII character set, and Unix inherited this name from Multics.) DEC machines such as the PDP-10 used Teletypes in many places during the early 70s, because they were cheap; but mostly they would have been Model 35s, a very different experience from the Model 33 you used as an illustration."]
However the mystery is resolved in the curious case of the uppercase letters, Turkle's failure to discuss the Teletype interface and other print-based interfaces is particularly striking in light of the title of her book, Life on the Screen! If we're talking about our life with Eliza, it really isn't a life on the screen at all — it's a life on the scroll, and our inappropriate controlling metaphor, or at least our attention-grabbing title, may end up really misleading us. But there has been little discussion of the print-and-paper heritage of computing. Although Lev Manovich devotes an eight-page section of his book The Language of New Media to the genealogy of the screen, he mentions teletypewriters only in passing, in a discussion of character coding schemes. Manovich doesn't even mention that teletypewriters were used as a computer interface! Of course, this might be because Manovich approaches new media from cinema, but a leading literary scholar seems to have a similarly screen-centered view. At the very end of the prologue to her How We Became Posthuman, Katherine Hayles introduces the important figure of the "flickering signifier" in this way: "As you gaze at the flickering signifiers scrolling down the computer screens, no matter what identifications you assign to the embodied entities that you cannot see, you have already become posthuman." In an interview on the Iowa Review Web, Hayles expands on the idea of the "flickering signifier:"
When I coined the phrase "flickering signifier," I had in mind a reconfigured relation between the signifier and signified than that which had been previously articulated in critical and literary theory. As I have argued elsewhere, the signifier as conceptualized by Saussure and others was conceived as unitary in its composition and flat in its structure. It had no internal structure, whether seen as oral articulation or written mark, that could properly enter into the discourse of semiotics.
When signifiers appear on the computer screen, however, they are only the top layer of a complex system of interrelated processes. Marks on screen may manifest themselves as simple inscriptions to a user, but properly understood they are the visible, tangible results of coding instructions executed by the machine in a series of interrelated processes, from a high-level programming language like Java all the way down to assembly language and binary code.
I hoped to convey this processual quality by the gerund "flickering," to distinguish the screenic image from the flat durable mark of print or the blast of air associated with oral speech. The signifier on screen is, as you know, a light image produced by a scanning electron beam. The screen image is deeply layered rather than flat, constantly replenished rather than durable, and highly mutable depending on processes mobilized by the layered code, as for example when a writer uses Flash to create animation or layers that move. These qualities are not merely ornamental but enter profoundly into what the marks signif[y] and, more importantly, how they signify. We need a theory of semiotics that can account for all the qualities connoted by "flickering."
Again, among many valuable ideas — that the manifestation of certain signifiers in computer writing is a representation of deeper layers of code, for instance — the controlling metaphor comes from a particular display technology, the CRT (cathode ray tube) screen with its scanning electron beam, as if this display technology were the essence of computing. But it wasn't used in the 1960s and wasn't widely used in the 1970s, and by the 2010s the CRT may have pretty much fallen out of use; CRTs are already being displaced as more and more people adopt flat-screen LCD displays, such as the one I use all the time, as their main interface. I would suggest that if we want a theory of materiality that accounts for the layered representation of signs, of the sort that is present in Eliza/Doctor (as you can see from the transcripts and code that I have handed out) and in Adventure (as I'll discuss in a moment), we shouldn't base it on a display technology that is particular to a bit of the late 20th century. This type of layered representation doesn't depend on the screen; it was present back in the 1960s, when the only flickering parts of the interface were printing mechanisms.
I think there are a few reasons for this fixation on the screen. There were few people writing about new media before the screen became ubiquitous, and those who did were often concerned with the formal nature of the computer systems they wrote about, not the material experience of them. Also, although in the late 1960s I would guess that 99% of computer users did almost all of their work through paper media, there had been a handful of influential and spectacular screen-based systems developed, for instance: Spacewar, the first modern video game, developed at MIT in 1962; Ivan Sutherland's Sketchpad, also developed at MIT in 1962; Doug Engelbart's NLS (oNLine System), developed at SRI and shown in the "mother of all demos" in 1968; and Grail, developed at the RAND Corporation in 1969. But these were the high-budget exceptions to the rule; Eliza had plenty of cousins that were developed by programmers using Teletypes and that were available to users via Teletype.
Before I get into a few details of how Teletypes worked, I'll mention one of these cousins of Eliza, an important program called Adventure, developed initially by Will Crowther of BBN on a PDP-10, using an ASR-33 Teletype, and later augmented by Don Woods at Stanford. The dates are uncertain, but it seems Adventure was developed in 1975 and 1976. Whatever the exact dates were, it was the first work of interactive fiction. An all-text experience, it offers a simulated cave environment. By typing commands such as "GET KEYS," "EAST," and "UNLOCK GRATE" — commands to a simulated explorer who is in this simulated environment — the person interacting can discover what secrets the cave holds. To experience or interact with or play Adventure in the early days, people used either a terminal of some sort (probably a print-based terminal, if they chose this more expensive option) or a Teletype. In the handout, I have included a brief excerpt from a transcript of an Adventure interaction. Right after that I've included some of the source code to Adventure, written in Fortran. This is early source code, the earliest code that is available, but it isn't truly the "original" that Will Crowther wrote; the very first version that he developed, before it was modified by Don Woods, is no longer available. The transcript is generated by a 1994 Fortran version of Adventure that is a modification of the original Crowther and Woods code; the original code won't compile except on a PDP-10. One important modification is the use of mixed case; the original used uppercase exclusively. The way this particular program deals with input is much more authentic than is the case with later ports and versions of Adventure, however. Print-based interfaces were the standard way of accessing Adventure: I've included on the handout a photograph of a user, Angus Macdonald, playing Adventure in the mid-1970s on the home terminal of Tom Van Vleck in Waltham, Massachusetts. [Taken from this page.]
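The commands quoted above follow Adventure's characteristic one- or two-word verb/noun form. The following is a small Python sketch of that command shape, not Crowther's Fortran and not his actual vocabulary tables, just an illustration of how such input can be reduced to a (verb, noun) pair, with bare compass directions treated as movement.

```python
# Illustrative vocabulary; Adventure's actual word lists were much larger
# and were stored in its Fortran data files.
VERBS = {"GET", "DROP", "UNLOCK", "GO"}
NOUNS = {"KEYS", "GRATE", "LAMP", "EAST"}
DIRECTIONS = {"EAST", "WEST", "NORTH", "SOUTH"}

def parse(command):
    """Reduce a typed command to a (verb, noun) pair, or None if unknown."""
    # Early versions accepted uppercase only; normalize the same way.
    words = command.upper().split()
    if len(words) == 1 and words[0] in DIRECTIONS:
        return ("GO", words[0])  # a bare direction means movement
    if len(words) == 2 and words[0] in VERBS and words[1] in NOUNS:
        return (words[0], words[1])
    return None  # the game would respond by asking the player to rephrase
```

On a Teletype, this terseness mattered doubly: two-word commands were quick to type on a heavy mechanical keyboard, and the program's replies were short enough to print at ten characters per second.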
Now I'd like to mention a few specifics about Teletypes and similar paper-based interfaces. A Teletype is a remote typewriter, an almost entirely mechanical device. It was developed very early in the 1900s, certainly long before computing, and became widespread before it was used as a computer interface. The Teletype Corporation has a nice booklet about the history of the Teletype called The Teletype Story, which has been scanned in and is online. The book is from 1958, and if computer interface is mentioned as an application at all (I haven't read the whole booklet) it certainly isn't mentioned prominently. Teletypes were the workhorse interface for computers through the early 1970s, however. Here's an image of three terminals: the VT55 Decscope, the LA30 Decwriter, and the ASR-33 Teletype. Screen-based displays like the one on the left weren't produced by DEC until 1970 and were very expensive when they did make it to market. Print-based terminals were fairly costly also, but some executives and programmers used the portable models and carried them home after work each day. The most common interface for interactive programming, debugging, and writing was certainly the Teletype in the early days. Here I'm showing a photograph of two programmers in front of a PDP-11 minicomputer, using a Teletype as they work. This would be a typical scene in the early 1970s. The photograph had to have been taken in 1970 or later since the PDP-11 was not released until June 1970. There is a screen in this photo: over on the far right, in the middle, barely visible, is a VT01A display. This had a Tektronix 611 storage display, which I believe was used more as an oscilloscope would be used and not for the display of text. At any rate, the programmers aren't using it here.
And in case you have the impression that Teletypes were some sort of low-class interface that top programmers wouldn't use, I'll mention that the two individuals pictured here are Dennis Ritchie (the one standing up) and Ken Thompson, the two principal creators of the Unix operating system.
I've already mentioned that Teletypes were important interfaces for mainframe computers (such as the IBM 7094 that Weizenbaum used) and minicomputers (such as the PDP-10 that Crowther used and the PDP-11 that Ritchie and Thompson used). It's also worth noting that they were used very early on in the home computer era, as microcomputers became available. When you got your Micro Instrumentation and Telemetry Systems Altair 8800 in 1975, you did not get a keyboard or display screen, so the few very early adopters of personal computers had to use some sort of terminal, usually a Teletype, as an interface. When a new company, in fact, decided to write software for the Altair 8800, they wrote a scaled-down version of the BASIC programming language, and they used the Teletype as their interface when they created this program. This company was Microsoft, and the programmers were Paul Allen, Monte Davidoff, and Bill Gates. Gates had learned to program computers using a Teletype. The three wrote their version of BASIC on an Altair simulator that ran on a Harvard PDP-10, and when they were ready to demo it they punched it onto paper tape using a Teletype. (Teletype machines also had paper tape readers and punches, so you could write a program and save it in machine-readable format; you can see this pictured in the Teletype picture on the handout, although I truncated that part of the description.) There are some other interesting stories there, but the main point is that interactive print-based interfaces were important even in the early history of personal computing.
I just have a few more bits of media to show you at this point, and then several questions that I hope we can discuss. First, here's a video of an ASR-33 Teletype being used. (ASR stands for "Automatic Send and Receive," by the way.) ... I also want to show you some Teletype output. I don't have any examples of actual Teletype output to hand around, but you can see on the cover of the DEC PDP-8 book Introduction to Programming, the book from which the description of the workings of the Teletype is taken, that Teletype output is used as a design element. It is essentially synonymous with programming and any other form of early computer writing, or at least very closely tied to it. I've also included an example of something that might look, at a glance, like Teletype output, but (according to Mitch Marcus in the Computer and Information Science department) is not. The two paragraphs of all-uppercase text in the handout there are probably from a high-speed line printer, and the bounce in the letters, the way they don't align properly, is characteristic of these devices for more rapid output. Finally, I just wanted to point out one important piece of writing that was composed, early on, using a word processor. This is Terry Winograd's Ph.D. dissertation on his famous system for natural language understanding, SHRDLU. The dissertation was published as a book and as an entire issue of a journal in 1972, but this text, from the acknowledgements, is taken from the January 1971 edition of the document as an MIT technical report. The final image I want to mention is a picture of Geoff Lepper, in between his parents, playing a text-based game on an Execuport in September 1974. [Taken from this page.] This is what early human-computer interaction was like.
So, let me trail off into a few questions about computer writing that I think won't be answered from a purely computer science perspective, but which I hope will be answered in dialogue with those who know about earlier technologies of writing and their place in culture and literature:
But let me end here, and see if you have thoughts on these or other questions.
(Tom Van Vleck pointed me to another great story at The Multicians that bears on these questions, about a programmer who checked off each line of code on the printout as he reviewed it.)