After revisiting several texts I collected over the past year (Aarseth’s Cybertext, Gitelman’s Always Already New, Slack and Wise’s Culture + Technology, and Goldsmith’s Wasting Time on the Internet), I can feel myself closing in on a more cohesive topic for my PhD thesis. The idea occupies a point of overlap shared by Cultural Studies, Literary Analysis, Science, Technology, and Society (STS), and Philosophy. It’s an odd creature.

I have been thinking about a research direction based on a web of interconnectivity, in the tradition of Actor-Network Theory, involving four realms of works (genre-fied or not) from the 20th and 21st centuries. For now, these realms involve only what I’ll call ‘static’ texts (not the ‘static text’ object familiar from Flash development). I’ll elaborate on what I mean:

I define static texts as works that, while they may not be linear, are distinct from digital, interactive narratives such as adventure games in that they do not require digital media in order to be read. In other words, while computers may have had a hand in their genesis, a computer is not necessary to read the finished work. A book may be distributed by its author as a free PDF, for example, but reading a printed copy instead will not significantly alter or diminish the experience (setting aside, of course, the effects of materiality upon the nuances of reading). A print copy of a Choose Your Own Adventure book from the ’80s would be static. An interactive story written in Twine would not.

The following four categories are loosely based on Espen J. Aarseth’s division of ‘cyborg’ texts into three categories according to the degree of human involvement in a text’s generation and revision. They constitute my attempt to trace the involvement of ‘algorithmic thinking’, or machine-like processes, in writing, from long before computers or sophisticated algorithms existed up to the present day, when cybertextuality has become commonplace among contemporary writers. I’m sure further research will reveal that algorithmic thinking in the human composition of texts goes back farther than I’m aware, but this seems a solid starting point. The four categories are:

  1. Texts based on constraints or generative formulas, produced by both the OULIPO and conceptual writing movements. These works resemble texts assembled by algorithms, but they were written by humans without the aid of computer algorithms (or, in some cases, of computers at all). They may incorporate found material, but are not composed entirely of ‘unoriginal’ work. Examples include Raymond Queneau’s Cent mille milliards de poèmes (A Hundred Thousand Billion Poems), the works of Georges Perec, and almost all early OULIPO works (a minimal computational sketch of Queneau’s procedure follows this list).
  2. Works of ‘citation poetics’, composed entirely of appropriated text, also written without the aid of computer algorithms and involving no rewriting of the appropriated material, such as the works of Bern Porter, Heimrad Bäcker, Vanessa Place, and Kenneth Goldsmith, as well as Walter Benjamin’s Arcades Project.
  3. Works produced in direct and acknowledged collaboration with computer-driven algorithmic ‘entities’, such as Erin Moure’s Pillage Laud, Bill Kennedy and Darren Wershler’s Apostrophe, and William Chamberlain and Racter’s The Policeman’s Beard Is Half Constructed.
  4. Works produced by algorithms with as little human involvement as possible: markedly less than the ‘cybertexts’ of the previous category, which require significant human action not only in initiating the algorithm but also in composing and/or editing the text at some point. Once set in motion, the algorithms that write these texts require only the instruction to do so, and the entire finished text is generated.
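
To make the ‘algorithmic thinking’ of the first category concrete: Queneau’s book is, in effect, a combinatorial program that the reader executes by flipping paper strips. Here is a minimal sketch in Python of that same procedure (the line fragments are placeholders of my own, not Queneau’s text):

```python
import random

# Queneau's Cent mille milliards de poèmes offers 10 source sonnets of
# 14 lines each; any line may replace the corresponding line of any
# other sonnet, yielding 10**14 possible poems. Placeholder fragments
# stand in for the real lines here.
SONNETS = [[f"sonnet {s}, line {l}" for l in range(14)] for s in range(10)]

def queneau_poem(rng=random):
    """Assemble one of the 10**14 possible sonnets: for each of the 14
    line positions, pick that line from one of the 10 source sonnets."""
    return "\n".join(rng.choice(SONNETS)[position] for position in range(14))

print(queneau_poem())
print(f"\npossible poems: {10 ** 14:,}")
```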

I should note that I’m not sure where Craig Dworkin’s Parse or the most recent works of Robert Fitterman would fit, since I’m not certain whether those authors used algorithms to order their work, or to what extent algorithms were involved in its genesis and editing, if at all. (Using a search engine would technically count, but I am referring here to an algorithm that both ‘fishes for’ the constituent parts of a text and assembles them as it sees fit, editing included.) But I digress.

One example of the kind of writing in the fourth category is the algorithmically driven ‘bots’ of Philip M. Parker, which can assemble non-existent books at a user’s request: an act of write-on-demand. These bots paraphrase and rearrange existing material thoroughly enough to avoid copyright infringement. The books can be downloaded as PDFs or printed on demand. Again, they do not require a computer to be read, yet they are generated entirely by algorithms. In this sense, a printed copy might masquerade as human writing until one sees who (or what) the author really is. And Parker is not the only one enlisting such entities.
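
Parker’s patented system is proprietary, so the following is only a guess at the general shape of write-on-demand: a template whose slots are filled from a structured data source, with some synonym rotation to vary the surface text. Everything in this toy sketch (the facts, the synonym list, the function name) is hypothetical:

```python
import random

# Hypothetical data source: a real system might query a database of
# facts per subject; these entries are invented for illustration.
FACTS = {
    "tariffs": ["are taxes levied on imports", "alter trade flows"],
    "acne": ["affects the skin", "is most common in adolescents"],
}

# Rotating synonyms varies the surface text of each generated 'book',
# one crude gesture toward avoiding verbatim copying.
SYNONYMS = ["study", "survey", "overview", "outline"]

def write_on_demand(subject):
    """Generate a titled micro-'book' for any subject in the data source."""
    title = f"A {random.choice(SYNONYMS).title()} of {subject.title()}"
    body = ". ".join(f"{subject.title()} {fact}" for fact in FACTS[subject])
    return f"{title}\n\n{body}."

print(write_on_demand("tariffs"))
```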

There’s a pun involving ghostwriting and the notion of the ghost-in-the-machine somewhere in all this.

Bot writing raises several questions. How did the first three realms of textual generation influence the fourth, and vice versa? If the subject of a text can simply be named, and the entire text (title included) generated by bots, what does this mean for freelance or contract writers? How accurate are these machine-generated texts, and how well does their copyright avoidance hold up under scrutiny? How fast are they evolving? What resemblance do they bear to the far cruder and more infamous “buy an essay on X topic” websites that have undoubtedly been the downfall of many an undergraduate who thought they could fool their professor? This last question in particular raises issues of copyright, which I will certainly address if I write my thesis on this topic.

There are numerous articles about the proliferation and increasing tonal flexibility of such bots, and I am still sifting through them. Online articles from Wired, ReadWrite, The Verge, Associated Press, SingularityHUB, ExtremeTech, The Daily Good, Educause Review, American Journalism Review, and various blogs are just a few examples.

I would love to requisition a book about robot authors from a robot author. Perhaps I’ll do just that…

Getting back on track: in Cybertext, Aarseth argues that computers are better used to create worlds with which the reader/player/user engages, such as adventure games, open-world narrative environments, and the like, than to generate imitations of conventional literature. He seems to imply that the only texts we might feed a writing algorithm are Shakespearean sonnets, ancient Greek plays, or Harlequin romance novels. While human vernacular is still heavily shaped by the tropes, syntax, and grammar that have evolved from earlier written materials (don’t forget languages other than English!), I’m curious about the limitations of these bots.

What if we fed such an algorithm the least conventional, most robotic works we have so far contextualized as literature? What if we fed it material that could itself have been produced by an algorithm (such as Oulipian writing)? What would it produce as a result? Would the tone of a work we perceive as ‘robotic’ seem the same to a writing algorithm? What might such a program do if fed jargon, or numerals, or images?
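
One crude way to ‘feed’ a writing algorithm a body of literature is a Markov chain, which learns which word tends to follow which and regenerates text with the same local statistics; fed a corpus of constraint-based writing, it would echo that corpus’s surface texture. A minimal sketch, with a placeholder corpus standing in for the Oulipian material:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=20, rng=random):
    """Walk the chain, falling back to a random key at dead ends."""
    word, output = start, [start]
    for _ in range(length - 1):
        word = rng.choice(chain[word]) if chain[word] else rng.choice(list(chain))
        output.append(word)
    return " ".join(output)

# Placeholder corpus; imagine Perec or Queneau here instead.
corpus = "the machine writes the text and the text writes the machine".split()
print(generate(build_chain(corpus), "the"))
```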

Programs such as Google’s DeepDream generator, the @PrimitivePic Twitter bot, and the University of Tübingen’s painting program are some examples of what happens when such systems are fed images. Every so often, something they produce has a “non-human” feel to it, something more alien than a typical collage or pastiche of obviously human-made elements.

Already, bots are trawling the net for data that looks nothing like writing, even though that data is being produced by humans constantly. Such bots may produce output that is read by no one but other bots, but the fact that we have largely ignored their potential to produce anything we might call writing bears addressing. What Lisa Gitelman calls “habitual contexts of display” (Gitelman 126) influence the way we consume and assemble texts of all kinds. How do texts generated with little to no human involvement figure into such a discussion of textual generation, presentation, and reception?

This PhD research direction involves examining the points where digital and ‘pre-digital’ conceptual writing, digital media, and advanced writing algorithms intersect, using Actor-Network Theory as a scaffolding. Parker’s bots bring to mind Borges’s Library of Babel: they are like soulless, inhuman monks operating tirelessly at Parker’s (and his users’) behest, generating not the gibberish texts of Jonathan Basile’s dizzying, algorithm-driven website, but readable volumes with coherent subjects and logical semantic progressions.

In effect, Parker’s bots and Basile’s site represent opposing incarnations of Borges’s library: one discerning and controlled by its users, the other bound only to the constraints Borges set forth, resulting in output that is mostly gibberish. Could Parker’s bots one day communicate with Basile’s website to hunt more efficiently for non-existent texts? The possibility is at once exciting and somewhat disturbing.
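
It helps to recall why Basile’s site works at all: its library is computed rather than stored. An invertible function maps every possible page to a unique address, so ‘searching’ for a text just means running the function backwards. A toy version of that idea, with a drastically shrunken page size and an affine scramble standing in for Basile’s actual (and different) scheme:

```python
# Toy Library of Babel: every possible PAGE_LEN-character string over
# ALPHABET exists at exactly one address, and the mapping is invertible,
# so 'searching' for a text is just computing its address.
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."   # 29 symbols, like Basile's
PAGE_LEN = 8                                  # Basile's pages run to 3200
N = len(ALPHABET) ** PAGE_LEN                 # size of the page space
A, B = 1_103_515_245, 12_345                  # affine scramble constants
A_INV = pow(A, -1, N)                         # modular inverse (Python 3.8+)

def address_of(page):
    """Invert the library: compute where a given page 'already is'."""
    num = 0
    for ch in page:
        num = num * len(ALPHABET) + ALPHABET.index(ch)
    return (num * A + B) % N

def page_at(address):
    """Compute the page stored at an address; nothing is looked up."""
    num = ((address - B) * A_INV) % N
    chars = []
    for _ in range(PAGE_LEN):
        num, r = divmod(num, len(ALPHABET))
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

addr = address_of("borges  ")
assert page_at(addr) == "borges  "
print(addr)
```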

Works Cited:

Gitelman, Lisa. Always Already New: Media History and the Data of Culture. Cambridge, MA: MIT Press, 2006. Print.
