February 14 is Valentine’s Day for many; this year, it’s also Ash Wednesday for Western Christians, both Orthodox and unorthodox. Universally, it is Harry Mathews’s birthday. Harry, who would have been 94 today, was an amazing experimental writer. He’s known to many as the first American to join the Oulipo.

Given the occasion, I thought I’d write a blog post, which I do very rarely these days, to discuss my poetics — or, because mine is a poetics of concision, my “poetix.” Using that word saves one byte. The term may also suggest an underground poetix, as with comix, and this is great.

Why write poems of any sort?

Personally, I’m an explorer of language, using constraint, form, and computation to make poems that surface aspects of language. As unusual qualities emerge, everything that language is entangled with also rises up: Wars, invasions, colonialism, commerce and other sorts of exchange between language communities, and the development of specialized vocabularies, for instance.

While other poets have very different answers, which very often include personal expression, this is mine. Even if I’m an unusual conceptualist, and more specifically a computationalist, I believe my poetics have aspects that are widely shared by others. I’m still interested in composing poems that give readers pleasure, for instance, and that awaken new thoughts and feelings.

Why write computational poems?

Computation, along with language, is an essential medium of my art. Just as painters work with paint, I work with computation.

This allows me to investigate the intersection of computing, language, and poetry, much as composing poems allows me to explore language.

Why write tiny computational poems?

Often, although not always, I write small poems. I’ve even organized my computational poetry page by size.

Writing very small-scale computational poems allows me to learn more about computing and its intersection with language and poetry. Not computing in the abstract, but computing as embodied in particular platforms, which are intentionally designed and have platform imaginaries and communities of use and practice surrounding them.

For instance, consider the modern-day Web browser. Browsers can do pretty much anything that computers can. They’re general-purpose computing platforms and can run Unity games, mine Bitcoin, present climate change models, incorporate the effects of Bitcoin mining into climate change models, and so on and so on. But it takes a lot of code for browsers to do complex things. By paring down the code, limiting myself to using only a tiny bit, I’m working to see what is most native for the browser, what this computational platform can most *essentially* accomplish.

Is the browser best suited to let us configure a linked universe of documents? It’s easy to hook pages together, yes, although now, social media sites prohibit linking outside their walled gardens. Does it support prose better than anything else, even as flashy images and videos assail us? Well, the Web is predisposed to that: One essential HTML element is the paragraph, after all. When boiled down as much as possible, there might be something else that HTML5 and the browser are really equipped to accomplish. What if one of the browser’s most essential capabilities is that of … a poetry machine?

One can ask the same questions of other platforms. I co-authored a book about the Atari VCS (aka Atari 2600), and while one can develop all sorts of things for it (a BASIC interpreter, artgames, demos, etc.), and I think it’s an amazing platform for creative computing, I’m pretty sure it’s not inherently a poetry machine. The Atari VCS doesn’t even have built-in characters, a font in which to display text. On the other hand, the Commodore 64 allows programmers to easily manipulate characters; change the colors of them individually; make them move around the screen; replace the built-in font with one of their own; and mix letters, numbers, and punctuation with a set of other glyphs specific to Commodore. This system can do lots of other things — it’s a great music machine, for instance. But visual poetry, potentially with characters presented on a grid, is also a core capability of the platform, and very tiny programs can enact such poetry.

I’ve written at greater length about this in “A Platform Poetics / Computational Art, Material and Formal Specificities, and 101 BASIC POEMS.” In that article, I focus on a specific, ongoing project that involves the Commodore 64 and Apple II. More generally, these are the reasons I continue to pursue the composition of very small computational poems on several different platforms.

A Web Reply to the Post-Web Generation

At the recent ELO conference in Montréal, Leonardo Flores introduced the concept of “3rd Generation” electronic literature. I was at another session during his influential talk, but I heard about the concept from him beforehand and have read about it on Twitter (a 3rd generation context, I believe) and Flores’s blog (more of a 2nd generation context, I believe). One of the aspects of this concept is that the third generation of e-lit writers makes use of existing platforms (Twitter APIs, for instance) rather than developing their own interfaces. Blogging is a bit different from hand-rolled HTML, but one administers one’s own blog.

When Flores & I spoke, I realized that I have what seems like a very similar idea of how to divide electronic literature work today. Not exactly the same, I’m sure, but pretty easily defined and I think with a strong correspondence to this three-generation concept. I describe it like this:

  • Pre-Web
  • Web
  • Post-Web

To understand the way I’m splitting things up, you first have to agree that we live in a post-Web world of networked information today. Let me try to persuade you of that, to begin with.

The Web is now at most an option for digital communication of documents, literature, and art. It’s an option that fewer and fewer people are taking. Floppy disks and CD-ROMs also remain options, although they are even less frequently used. The norm today has more to do with app-based connectivity and less with the open Web. When you tweet, and when you read things on Twitter, you don’t need to use the Web; you can use your phone’s Twitter client. Facebook, Instagram, and Snapchat would be just fine if the Web was taken out behind the shed and never seen again. These are all typically used via apps, with the Web being at most an option for access.

The more companies divert use of their social networks from the Web to their own proprietary apps, the more they are able to shape how their users interact — and their users are their products, that which they offer to advertisers. So, why not keep moving these users, these products, into the better-controlled conduits of app-based communication?

Yes, I happen to be writing a blog entry right now — one which I don’t expect anyone to comment on, like they used to in the good old days. There is much more discussion of things I blog about on Twitter than in the comment section of my blog; this is evidence that we are in the post-Web era. People can still do Web (and even pre-Web) electronic literature and art projects. As Jodi put it in an interview this year, “You can still make websites these days.” This doesn’t change that we reached peak Web years ago. We live now in a post-Web era where some are still doing Web work, just as some are still doing pre-Web sorts of work.

In my view, the pre-Web works are ones in HyperCard and the original Mac and Windows Storyspace, of course. (It may limit your audience, but you can still make work in these formats, if you like!) Some early pieces of mine, such as The Help File (written in the standard Windows help system) and my parser-based interactive fiction, written in Inform 6, are also pre-Web. You can distribute parser-based IF on the Web, and you can play it in the browser, but it was being distributed on an FTP site, the IF Archive, before the Web became the prevalent means of distribution. (The IF Archive now has a fancy new Web interface.) Before the IF Archive, interactive fiction was sold on floppy disk. I consider the significant number of people making parser-based interactive fiction today to still be doing pre-Web electronic literature work that happens to be available on the Web or sometimes in app form.

Also worth noting is that Rob Wittig’s Blue Company and Scott Rettberg’s Kind of Blue are best considered pre-Web works by my reckoning, as email, the form used for them, was in wide use before the Web came along. (HTML is used in these email projects for formatting and to incorporate illustrations, so the Web does have some involvement, but the projects are still mainly email projects.) The Unknown, on the other hand, is definitely an electronic literature work of the Web.

Twitterbots, as long as they last, are great examples of post-Web electronic literature, of course.

With this for preface, I have to say that I don’t completely agree with Flores’s characterization of the books in the Using Electricity series. It could be because my pre-Web/Web/post-Web concept doesn’t map onto his 1st/2nd/3rd generation idea exactly. It could also be that it doesn’t exactly make sense to name printed books, or for that matter installations in gallery spaces, as pre-Web/Web/post-Web. This type of division makes the most sense for work one accesses on one’s own computer, whether it got there via a network, a floppy disk, a CD-ROM, or some other way. But if we wanted to see where the affinities lie, I would have to indicate mostly pre-Web and Web connections; I think there is only one post-Web Using Electricity book that has been released or is coming out soon:

  1. The Truelist (Nick Montfort) is more of a pre-Web project, kin to early combinatorial poetry but taken to a book-length, exhaustive extreme.

  2. Mexica (Rafael Pérez y Pérez) is more of a pre-Web project based on a “Good Old-Fashioned AI” (GOFAI) system.

  3. Articulations (Allison Parrish) is based on a large store of textual data, Project Gutenberg, shaped into verse with two different sorts of vector-space analyses, phonetic and syntactical. While Project Gutenberg predates the Web by almost two decades, it became the large-scale resource that it is today in the Web era. So, this would be a pre-Web or Web project.

  4. Encomials (Ranjit Bhatnagar), coming in September, relies on Twitter data, and indeed the firehose of it, so is a post-Web/3rd generation project.

  5. Machine Unlearning (Li Zilles), coming in September, is directly based on machine learning on data from the open Web. This is a Web-generation project which wouldn’t have come to fruition in the walled gardens of the post-Web.

  6. A Noise Such as a Man Might Make (Milton Läufer), coming in September, uses a classic algorithm from early in the 20th century — one you could read about in Scientific American in the 1980s, and see working on USENET — to conflate two novels. It seems like a pretty clear pre-Web project to me.

  7. Ringing the Changes (Stephanie Strickland), coming in 2019, uses the combinatorics of change ringing and a reasonably small body of documents, although larger than Läufer’s two books. So, again, it would be pre-Web.

Having described the “generational” tendencies of these computer-generated books, I’ll close by mentioning one of the implications of the three-part generational model, as I see it, for what we used to call “hypertext.” The pre-Web allowed for hypertexts that resided on one computer, while the Web made it much more easily possible to update a piece of hypertext writing, collaborate with others remotely, release it over time, and link out to external sites.

Now, what has happened to hypertext in the post-Web world? Just to stick to Twitter, for a moment: You can still put links into tweets, but corporate enclosure of communications means that the wild wild wild linking of the Web tends to be more constrained. Links in tweets look like often-cryptic partial URLs instead of looking like text, as they do in pre-Web and Web hypertexts. You essentially get to make a Web citation or reference, not build a hypertext, by tweeting. And hypertext links have gotten more abstruse in this third, post-Web generation! When you’re on Twitter, you’re supposed to be consuming that linear feed — automatically produced for you in the same way that birds feed their young — not clicking away of your own volition to see what the Web has to offer and exploring a network of media.

The creative bots of Twitter (while they last) do subvert the standard orientation of the platform in interesting ways. But even good old-fashioned hypertext is reined in by post-Web systems. If there’s no bright post-post-Web available, I’m willing to keep making a blog post now and then, and am glad to keep making Web projects — some of which people can use as sort of free/libre/open-source 3rd-generation platforms, if they like.

Paging Babel

About 12 hours ago I was reading “The New Art of Making Books” by Ulises Carrión, a text I’d read before but which I hadn’t fully considered and engaged with. As I thought about Carrión’s writing, I felt compelled to put together a short piece on the Web. That took the form of a Web page containing a rapidly-moving concrete poem. The work I devised is called “Una página de Babel.”

Screen capture of Babel

Many will surely note that it is based on Jorge Luis Borges’s “La biblioteca de Babel” (The Library of Babel). And, I hope people are aware of some of the other interesting digital projects based on this story. I have seen one from years ago on CD-ROM; one that is very nice, and available on the Web, is Jeremiah Johnson’s BABEL. There’s also the exquisite Library of Babel by Jonathan Basile.

My piece does not try to closely and literally implement the library that Borges described, although it does have a page that is formally like the ones in Borges’s library: 80 characters wide, 40 lines long. Given this austere rectangular regularity, I assumed a typewriter-like monospace font.

The devotion of “Una página” to what the text describes stops there; instead of using the 23-letter alphabet that Borges sketches to populate this 80×40 grid, I use the unigram probabilities of letters in the story itself, in the Spanish text of “La biblioteca de Babel.” So, for instance, the lowercase letter a occurs a bit less than 8.4% of the time, and this is the probability with which it is produced on the page. The same holds for spaces, for the letter ñ, and for all other glyphs; they appear on the page at random, with the same probability that they do in Borges’s story. Because each letter is picked independently at random, the result does not bear much relationship to Spanish or any other human language, in which the occurrence of a glyph usually has something to do with the glyph before it (and before that, and so on).
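In outline, the sampling works like this — a minimal JavaScript sketch, not the released code of “Una página,” with an illustrative source string and function names of my own rather than anything from the actual piece:

```javascript
// Build a unigram model: the probability of each glyph is just
// its relative frequency in a source text. The real piece uses
// the Spanish text of "La biblioteca de Babel"; here the source
// string is only a placeholder.
function unigramModel(text) {
  const counts = {};
  for (const ch of text) counts[ch] = (counts[ch] || 0) + 1;
  const glyphs = Object.keys(counts);
  const probs = glyphs.map(g => counts[g] / text.length);
  return { glyphs, probs };
}

// Draw one glyph at random, weighted by its probability.
function sampleGlyph({ glyphs, probs }) {
  let r = Math.random();
  for (let i = 0; i < glyphs.length; i++) {
    r -= probs[i];
    if (r <= 0) return glyphs[i];
  }
  return glyphs[glyphs.length - 1]; // guard against rounding error
}

// Fill an 80x40 page, each cell sampled independently.
function page(model, cols = 80, rows = 40) {
  const lines = [];
  for (let y = 0; y < rows; y++) {
    let line = "";
    for (let x = 0; x < cols; x++) line += sampleGlyph(model);
    lines.push(line);
  }
  return lines.join("\n");
}

const model = unigramModel("el universo (que otros llaman la biblioteca) ");
console.log(page(model));
```

Because every cell is drawn independently, generating the page twice yields two entirely unrelated pages, and no glyph’s appearance depends on its neighbors — which is exactly why the result resembles no human language.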

“Una página” is also non-interactive. One can zoom, screenshot, copy and paste, and so on, but the program itself does not accept user input.

I sketched the program in Python before developing it in JavaScript, and when I was done with the HTML page that includes the JavaScript program, I thought I’d make a Python version, too. But when I did, I was disappointed; the Python program isn’t a page, and doesn’t produce a page, and so doesn’t seem to me to fit the concept, which has to be that of a page. Thus, I’m not going to release the Python program. The JavaScript version is the right one, in this case.

Tumblrs of the Everyday

I collaborate with Flourish Klink on two very specific Tumblr blogs, which are both open for submissions.


streetcrts.tumblr.com features photos of CRT televisions (or monitors) that have been placed on the street to allow others to take them away, or to allow them to be removed as trash.


xavierpauchard.tumblr.com chronicles, in photos, the legacy of industrial/furniture designer Xavier Pauchard, who, without formal training, designed steel furniture early in the 20th century that seems to be in about 1/5 of all New York restaurants, bars, and coffeehouses, and in many, many other places worldwide. Pauchard does not, as of this writing, even have an English-language Wikipedia entry.

If you spot street CRTs, or the furniture of Pauchard, please submit photos to these two sites.

You Have Been Offered ‘More Tongue’

I just put a new poetry generator up. This one was released in inchoate form at @party, the Boston-area demoparty. I’ve now finished it, writing a 2kb HTML page that employs JavaScript to generate nonsense poems that *I*, at least, find rather amusing.

More Tongue (paused)

‘More Tongue’ is available in an expanded version (functioning the same but with uncompressed code and more meaningful variable and function names) which I suggest for just about everyone, since I encourage everyone to study and modify the code, for fun, for art, and so on. If you want to see the 2k version working, that’s there too.

I could have compacted this below 2kb, although I rather doubt I’d have gotten it to 1kb without some major shift in the way the program works. I can see a few inefficiencies in how I put the program together, and while I did turn to some compression resources I didn’t use the famed Minify. I was happy, though, with what the 2kb page does.

I’ll be reading from this in about an hour at Babycastles’s WordHack event, here in Manhattan, during the open mic. Hope to see some of you there.

If the Internet Did Exist

If the Internet did exist, we’d have to uninvent it: “It seemed that in their minds, the Internet did not exist; only Facebook.”

Those poor people in developing countries don’t know about the Internet, only Facebook.

Of course Babycastles, my main link to poetry & digital media in NYC, keeps a calendar of events only on Facebook, not on a plain Web page.

I’ve found it very difficult to find (open, public) poetry events in NYC because many are announced only on Facebook.

I’m at an LA poetry festival now. Didn’t know about my friends’ (public) offsite readings; they are Facebook-only.

So, really the joke’s on me for thinking that the Internet still exists and not being on Facebook.

Thanks to the 15 of you who will read this on Twitter. It would have been 3, also deluded that the Internet exists, if I’d only blogged it.

Des Imagistes Lost & Found

Des Imagistes, first Web edition

I’m glad to share the first Web edition of Des Imagistes, which is now back on the Web.

I assigned a class to collaborate on an editorial project back in 2008, one intended to provide practical experience with the Web and literary editing while also resulting in a useful contribution. I handed them a copy of the first US edition of Des Imagistes, the first Imagist anthology, edited by Ezra Pound and published in 1914.

Jason Begy, Audubon Dougherty, Madeleine Clare Elish, Florence Gallez, Madeline Flourish Klink, Hillary Kolos, Michelle Moon Lee, Elliot Pinkus, Nick Seaver, and Sheila Murphy Seles, the Fall 2008 workshop class, did a great job. The project was prompted, and indeed assigned, by me, but it’s the work of that group, not my work. The class put a great deal of editorial care into the project and also attended to principles of flexible, appropriate Web design. The cento they assembled and used for an alternate table of contents made for a nice main page, inviting attention to the text rather than to some sort of illustration. I’m not saying it would have been exactly my approach, but what they did is explained clearly and works well.

I told the class that the licensing of their project was up to them. They chose a CC BY-NC-SA license, more restrictive than I would have selected, given that the material was in the public domain to begin with, but a reasoned choice. They were similarly asked to decide about the hosting of the work. They just had to present what they’d done in class, answer questions about it, and let me look at and interact with it. While I would be glad to place a copy on my site, nickm.com, it was up to them as to whether they would take me up on the offer. They placed their work online on its own domain, which they acquired and for which they set up hosting.

After announcing this edition, readers, scholars, and teachers of Imagist poetry commented and thanked the class for its work. But as I bemoaned last October, Des Imagistes was no longer online a few years later. I asked around for files, but asking former students to submit an assignment six years later turns out to be a poor part of a preservation strategy.

Now, working with Erik Stayton (a research assistant in the Trope Tank and a member of the CMS master’s class of 2015), I’ve recovered the site from the Internet Archive. The pages were downloaded manually, in adherence to the robots.txt file on archive.org, the Internet Archive’s additions to the pages were removed, and something very close to the original site was assembled and uploaded.

Some lessons, I suppose, are that it’s not particularly the case that a group of students doing a groundbreaking project will manage to keep their work online. As much as I like reciprocal and equitable ways of working together, the non-hierarchical nature of this project probably didn’t help when it came to keeping it available; no one was officially in charge, accepting credit and blame. Except, of course, that I should have been in charge of keeping this around after it was done and after that course was complete. I should have asked for the files and (while obeying the license terms) put the project on my site – and for that matter, other places online.

Would you like to have a copy of the Des Imagistes site for your personal use or to place online somewhere, non-commercially? Here’s a zipfile of the whole site; you will also want to get the larger PDF of the book, which should be placed in the des_imagistes directory.



The Facepalm at the End of the Mind

I can no longer keep myself from commenting on the Facebook “emotional manipulation” study. Alas. Here are several points.

  • Do you want your money back?
  • Don’t we only know about this study done on 689,003 people because it was written up and reported on in a prestigious journal?
  • Could it be that other studies might have been done, or might be going on right now, or might happen in the future, and we might know nothing about them because their results will be kept as proprietary information?
  • Why didn’t something this massive and egregious ever happen on the Web – you know, the open Web that isn’t run by a single corporation?
  • Don’t we have, or didn’t we used to have, news feeds on the Web, like the Facebook news feed that the company manipulated?
  • Such as RSS feeds?
  • Using a free standard, which anyone in world can set up in their own way without adhering to a single company’s policy?
  • Don’t we, or didn’t we, subscribe to these RSS feeds with feed readers, such as Liferea?
  • Wouldn’t it be harder for a person or company to manipulate a news reader that subscribes to feeds on the open Web and is running on a person’s own computer?
  • Particularly if this news reader is free software and you can build it from source that you and everyone else in the world can inspect?
  • Could it be that the “users,” as we like to call them, are the ones who really made a fundamental mistake here, rather than Facebook?
  • You know how Facebook is, well, a company, a for-profit corporation?
  • So, it’s actually supposed to harvest data from users as efficiently as possible and exploit that data to make more money, up to the limit of what the law allows?
  • Can’t companies be sued by their shareholders if they don’t act to maximize profits?
  • Could it be that Facebook is, in everything it does, trying to harvest information from, exploit the data of, and learn how to profit from the behavior of those people called “users,” whom Facebook legally and officially owes absolutely nothing?
  • Remember the World Wide Web?
  • Remember blogs?
  • What happened to these blogs, including the one that I was part of that helped to shape the emerging field of digital media?
  • Was the recent zombie craze formulated to help metaphorically describe what has happened to blogs?
  • Why do I still blog?
  • Have you noticed that I get a comment on my blog about every other month?
  • What does it mean that I can announce the publication of a book that I worked on for years, and after more than two weeks, this post hasn’t garnered a single comment?
  • Remember how, after overcoming a few (diminishing) technical barriers, anyone could write about whatever topics – personal, political, academic, technical, aesthetic – and could host a forum, a blog, in which anyone else in the world, as long as they were online, could respond?
  • Remember how attitudes toward technology, changing methods of media consumption and transformation, and other important discourses were shaped by people having public conversations on the Web in blogs?
  • Why do I get thousands of spam comments on my blog each month, sent in complete disregard for the things that are posted here?
  • These spam comments might be sent by organized crime botnets, in part, but since some of them are commercial, might they be sent by or on behalf of companies?
  • Companies trying to maximize their profit, indifferent to anything except what they can get away with?
  • Why do we think that we can fix Facebook?
  • Why did people who communicate and learn together, people who had the world, leave it, en masse, for a shopping mall?

There’s a party — Perverbs.

I persist in my quest to develop extremely simple, easily modifiable programs that produce compelling textual output.

My latest project is Modern Perverbs. In a world where nothing is as it seems … two phrases … combine … to make a perverb. That’s about all there is to it. If phrase N is picked from the first list, some phrase that isn’t number N will be picked from the second, to ensure maximum perverbiality. The first phrase also carries the punctuation mark that will be used at the very end. This one is a good bit simpler than even my very simple “exploded sentence” project, Lede.
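In code, the selection scheme looks something like this — a sketch with placeholder half-proverbs, not the actual lists or the page’s own script; as described above, each entry in the first list carries the punctuation mark that ends the perverb:

```javascript
// Placeholder phrase lists; each first half is paired with the
// final punctuation mark, as in "Modern Perverbs". The lists in
// the actual piece are different.
const firsts = [
  ["A rolling stone", "."],
  ["A watched pot", "!"],
  ["The early bird", "?"]
];
const seconds = ["gathers no moss", "never boils", "catches the worm"];

function perverb() {
  // Pick half-proverb i from the first list, then a half from the
  // second list that is *not* number i, so the two halves never
  // reassemble the original saying.
  const i = Math.floor(Math.random() * firsts.length);
  let j = Math.floor(Math.random() * (seconds.length - 1));
  if (j >= i) j++; // skip index i, guaranteeing a mismatch
  const [phrase, punct] = firsts[i];
  return phrase + " " + seconds[j] + punct;
}

console.log(perverb());
```

Skipping index i (rather than re-rolling until j differs) makes the mismatch certain in a single draw, which keeps the program as small as the form demands.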

(I learned of perverbs and their power, I should note, thanks to Selected Declarations of Dependence by Harry Mathews.)

In case, for some reason, you fear the legal repercussions of ripping off my HTML and JavaScript and editing it, Modern Perverbs is explicitly licensed as free software. Save the page on your desktop as plain HTML (not “complete”), open it in an editor, and have a field day.

Sounds, User-Input Phrases, and Monkeys in “Taroko Gorge”

Check out “Wandering through Taroko Gorge,” a participatory, audio-enabled remix.

As James T. Burling stated on the “projects” page of MAD THEORY:

>In this combination of presentation and poetry reading, I’ll present a remix of Nick Monfort’s javascript poetry generator, “Taroko Gorge.” My remix added a musical component using a computers oscilloscope function, and more importantly allows participant-observers to type in answers to prompts which are then added to the poem in real-time. The poem will be available throughout the day, gradually adding all inputs to its total sum. I’ll discuss the process of decoding html and javascript as a non-coder, describe some of my theories on participatory performance using computer interfaces, and raise questions about agency in performance and how a digital artifact can function as a poetic event.

Jill Walker Rettberg, this Monday’s Purple Blurb

Purple Blurb

MIT, room 14E-310

Monday 5/5, 5:30pm

Free and open to the public, no reservation required

Jill Walker Rettberg

“Seeing Ourselves Through Technology: How We Use Selfies, Blogs and Wearable Devices to Understand Ourselves”

Jill Walker Rettberg

This Monday (2014-05-05) the Purple Blurb series of Spring 2014 presentations will conclude with a talk by Jill Walker Rettberg on a pervasive but still not well-understood phenomenon, the types of digital writing, tracking, photography, and media production of other sorts that people do about themselves. Her examples will be drawn from her own work as well as from photobooths, older self-portraits, and entries from others’ diaries.

Jill Walker Rettberg is Professor of Digital Culture at the University of Bergen in Norway. Her research centers on how we tell stories online, and she has published on electronic literature, digital art, blogging, games and selfies. She has written a research blog, jilltxt.net, since October 2000, and co-wrote the first academic paper on blogs in 2002. Her book _Blogging_ was published in a second edition in 2014. In 2008 she co-edited an anthology of scholarly articles on _World of Warcraft._ Jill is currently writing a book on technologically mediated self-representations, from blogs and selfies to automated diaries and visualisations of data from wearable devices.

More about Purple Blurb