Advice Concerning the Increase in AI-Assisted Writing

Edward Schiappa, Professor of Rhetoric
Nick Montfort, Professor of Digital Media
10 January 2023

[In response to a request, this is an HTML version of a PDF memo for our colleagues at MIT]

There has been a noticeable increase in student use of AI assistance for writing recently. Instructors have expressed concerns and have been seeking guidance on how to deal with systems such as GPT-3, which is the basis for the very recent ChatGPT. The following thoughts on this topic are advisory from the two of us: They have no official standing within even our department, and certainly not within the Institute. Nonetheless, we hope you find them useful.

Newly available systems go well beyond grammar and style checking to produce nontrivial amounts of text. There are potentially positive uses of these systems; for instance, to stimulate thinking or to provide a variety of ideas about how to continue, from which students may choose. In some cases, however, the generated text can be used without a great deal of critical thought to constitute almost all of a typical college essay. Our four main suggestions for those teaching a writing subject are as follows:

  1. explore these technologies yourself and read what has been written about them in peer-reviewed and other publications,
  2. understand how these systems relate to your learning goals,
  3. construct your assignments to align with learning goals and the availability of these systems, and
  4. include an explicit policy regarding AI/LLM assistance in your syllabus.

Exploring AI and LLMs

LLMs (Large Language Models) such as those in the GPT series have many uses, for instance in machine translation and speech recognition, but their main implications for writing education have to do with natural language generation. A language model is a probability distribution over sequences of words; ones that are “large” have been trained on massive corpora of texts. This allows the model to complete many sorts of sentences in cohesive, highly plausible ways that are sometimes semantically correct. An LLM can determine, for instance, that the most probable completion of the word sequence “World War I was triggered by” is “the assassination of Archduke Franz Ferdinand” and can continue from there. While impressive in many ways, these models also have several limitations. We are not currently seeking to provide a detailed critique of LLMs, but advise that instructors read about the capabilities and limitations of AI and LLMs.
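To make the idea of completion concrete, here is a minimal sketch of prompting a small, openly available model from Python. It assumes the Hugging Face transformers library and the public GPT-2 model (a much smaller relative of GPT-3); the sampled continuation will differ on every run and may well be factually wrong, which is part of the point.

```python
# A minimal sketch of prompt completion, assuming the Hugging Face
# "transformers" library and the openly available GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "World War I was triggered by"
# Sample up to 20 additional tokens; output varies per run and may be false.
completion = generator(prompt, max_new_tokens=20, do_sample=True)
print(completion[0]["generated_text"])
```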

To understand more about such systems, it is worth spending some time with those that are freely available. The one attracting the most attention is ChatGPT. The TextSynth Playground also provides access to several free/open-source LLMs, including the formidable GPT-NeoX-20B. ChatGPT uses other AI technologies and is presented in the form of a chatbot, while GPT-NeoX-20B is a pure LLM that allows users to change parameters in addition to providing prompts.

While we will not provide a full bibliography, there is considerable peer-reviewed literature on LLMs and their implications. We suggest “GPT-3: What’s it good for?” by Robert Dale and “GPT-3: Its Nature, Scope, Limits, and Consequences” by Luciano Floridi & Massimo Chiriatti. These papers are from 2020 and refer to GPT-3; their insights about LLMs remain relevant. Because ChatGPT was released in late November 2022, the peer-reviewed research about it is scant. One recent article, “Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education,” offers a short human-authored introduction and conclusion, presenting sample text generated by ChatGPT between these.

Understanding the Relationship of AI and LLMs to Learning Goals

The advisability of any technology or writing practice depends on context, including the pedagogical goals of each class.

It may be that the use of a system like ChatGPT is not only acceptable to you but is integrated into the subject, and should be required. One of us taught a course dealing with digital media and writing last semester in which students were assigned to computer-generate a paper using such a freely-available LLM. Students were also assigned to reflect on their experience afterwards, briefly, in their own writing. The group discussed its process and insights in class, learning about the abilities and limitations of these models. The assignment also prompted students to think about human writing in new ways.

There are, however, reasons to question the practice of AI and LLM text generation in college writing courses.

First, if the use of such systems is not agreed upon and acknowledged, the practice is analogous to plagiarism. Students will be presenting writing as their own that they did not produce. To be sure, there are practices of ghost writing and of appropriation writing (including parody) which, despite their similarity to plagiarism, are considered acceptable in particular contexts. But in an educational context, when writing of this sort is not authorized or acknowledged, it does not advance learning goals and makes the evaluation of student achievement difficult or impossible.

Second, and relatedly, current AI and LLM technologies provide assistance that is opaque. Even a grammar checker will explain the grammatical principle that is being violated. A writing instructor should offer much better explanatory help to a student. But current AI systems just provide a continuation of a prompt.

Third, the Institute’s Communication Requirement was created (in part) in response to alumni reporting that writing and speaking skills were essential for their professional success, and that they did not feel their undergraduate education adequately prepared them to be effective communicators.[1] It may be, in the fullness of time, that learning how to use AI/LLM technologies to assist writing will be an important or even essential skill. But we are not at this juncture yet, and the core rhetorical skills involved in written and oral communication—invention, style, grammar, reasoning and argument construction, and research—are ones every MIT undergraduate still needs to learn.

For these reasons, we suggest that you begin by considering what the objectives are for your subject. If you are aiming to help students critique digital media systems or understand the implications of new technologies for education, you may find the use of AI and LLMs not only acceptable but important. If your subject is Communication Intensive, however, an important goal of your course is to develop and enhance your students’ independent writing and speaking ability. For most CI subjects, therefore, the use of AI-assisted writing should at best be carefully considered. It is conceivable that at some point it will become standard practice to teach most or all students how to write with AI assistance, but in our opinion we have not reached that point. The cognitive and communicative skills taught in CI subjects require that students do their own writing, at least at the beginning of 2023.

Constructing Assignments in Light of Learning Goals and AI/LLMs

Assigning students to use AI and LLMs is the more straightforward case, so we focus here on practical steps that can be taken to minimize use when these systems do not align with learning goals. In general, the more detailed and specific a writing assignment, the better, as the prose generated by ChatGPT (for example) tends to be fairly generic and plain.

Furthermore, instructors are encouraged to consult MIT’s writing and communication resources for specific advice on how current assignments can be used while minimizing the opportunities for student use of AI assistance. These resources include the Writing, Rhetoric, and Professional Communication program, the MIT Writing & Communication Center, and the English Language Studies program. It is our understanding that MIT’s Teaching + Learning Lab will be offering advice and resources as well.

Other possible approaches include:

• In-class writing assignments
• Reaction papers that require a personalized response or discussion
• Research papers requiring quotations and evidence from appropriate sources
• Oral presentations based on notes rather than a script
• Assignments requiring a response to current events, e.g., from the past week

The last of these approaches is possible because LLMs are trained using a fixed corpus and there is a cutoff date for the inclusion of documents.

Providing an Explicit Policy

Announcing a policy clearly is important in every case. If your policy involves the prohibition of AI/LLM assistance, we suggest you have an honest and open conversation with students about it. It is appropriate to explain why AI assistance is counter to the pedagogical goals of the subject. Some instructors may want to go into more detail by exploring apps like ChatGPT in class and illustrating to students the relative strengths and weaknesses of AI-generated text.

In any case: What this policy should be depends on the kinds and topics of writing in your subject. If you do not wish your students to use AI-assisted writing technologies, you should say so explicitly in your syllabus. If the use of this assistance is allowable within bounds, or even required because students are studying these technologies, that should be stated as well.

In the case of prohibition, you could simply state: “The use of AI software or apps to write or paraphrase text for your paper is not allowed.” Stronger wording could be: “The use of AI software or apps to write or paraphrase text for your paper constitutes plagiarism, as you did not author these words or ideas.”

There are automated methods (such as Turnitin and GPTZero) that can search student papers for AI-generated text and indicate, according to their own models of language, how probable it is that some text was generated rather than human-written. We do not, however, know of any precedent for disciplinary measures (such as failing an assignment) being instituted based on probabilistic evidence from such automated methods.
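To illustrate the principle such detectors rely on, here is a rough sketch that scores how predictable a passage is under a language model. It assumes the Hugging Face transformers library, PyTorch, and the public GPT-2 model; it shows only the general idea of a probabilistic score, is not how Turnitin or GPTZero actually work, and such a score is not reliable evidence about any individual text.

```python
# Illustrative only: score a passage's predictability under GPT-2.
# Assumes the "transformers" and "torch" packages; not an actual detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Mean negative log-likelihood per token, exponentiated.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Lower perplexity means the model finds the text more predictable;
# detectors treat this, roughly, as weak evidence of machine generation.
print(perplexity("The assassination of Archduke Franz Ferdinand triggered the war."))
```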

Conclusion

The use of AI/LLM text generation is here to stay. Those of us involved in writing instruction will need to be thoughtful about how it impacts our pedagogy. We are confident that the Institute’s Subcommittee on the Communication Requirement, the Writing, Rhetoric, and Professional Communication program, the MIT Writing & Communication Center, and the English Language Studies program will provide resources and counsel when appropriate. We also believe students are here at MIT to learn, and will be willing to follow thoughtfully advanced policies so they can learn to become better communicators. To that end, we hope that what we have offered here will help to open an ongoing conversation.[2]


[1] L. Perelman, “Data Driven Change Is Easy; Assessing and Maintaining It Is the Hard Part,” Across the Disciplines 6.2 (Fall 2009). Available: http://wac.colostate.edu/atd/assessment/perelman.cfm. L. Perelman, “Creating a Communication-Intensive Undergraduate Curriculum in Science and Engineering for the 21st Century: A Case Study in Design and Process.” In Liberal Education in 21st Century Engineering, Eds. H. Luegenbiehl, K. Neeley, and David Ollis. New York: Peter Lang, 2004, pp. 77-94.

[2] Our thanks to Eric Klopfer for conversing with us concerning an earlier draft of this memo.

Concise Computational Literature is Now Online in Taper

I’m pleased to announce the release of the first issue of Taper, along with the call for works for issue #2.

Taper is a DIY literary magazine that hosts very short computational literary works — in the first issue, sonic, visual, animated, and generated poetry that is no more than 1KB, excluding comments and the standard header that all pages share. In the second issue, this constraint will be relaxed to 2KB.
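For a sense of how such a constraint might be checked, here is a rough sketch that counts a work’s bytes after naively stripping JavaScript-style comments. The stripping is illustrative only (it would also mangle // inside strings or URLs), and the magazine’s actual rules and shared header are defined in its own call for works.

```python
# Rough, illustrative check of a 1KB (1024-byte) limit, excluding comments.
# The comment stripping is naive; Taper's real rules live in its call for works.
import re

def bytes_excluding_comments(source: str) -> int:
    without_block = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)
    without_line = re.sub(r"//[^\n]*", "", without_block)
    return len(without_line.encode("utf-8"))

poem = "let t=0;setInterval(()=>{document.body.textContent='~'.repeat(t++%40)},99)"
print(bytes_excluding_comments(poem), bytes_excluding_comments(poem) <= 1024)
```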

The first issue has nine poems by six authors, which were selected by an editorial collective of four. Here is how this work looked when showcased today at our exhibit in the Trope Tank:

“Weights and Measures” and “for the pool players at the Golden Shovel,” Lillian-Yvonne Bertram
“193” and “ArcMaze,” Sebastian Bartlett
“Alpha Riddims,” Pierre Tchetgen and “Rise,” Angela Chang
“US” and “Field,” Nick Montfort
“God,” Milton Läufer

This issue is tiny and contains only a few projects, but we think they are of very high quality and interestingly diverse. This first issue of Taper also lays the groundwork for fairly easy production of future issues.

The next issue will have two new editorial collective members, but not me, as I focus on my role as publisher of this magazine through my very small press, Bad Quarto.

The Gathering Cloud

The Gathering Cloud, J. R. Carpenter, 2017. (I was given a review copy of this book.)

J.R. Carpenter’s book is an accomplishment, not just in terms of the core project, but also by virtue of how the codex is put together. The introduction is by Jussi Parikka, the after-poem by Lisa Robertson. While social media and ethereal imaginations of the network keep us from being lonely as a cloud these days, they obscure the material nature of computing, the cost of linking us in terms of wire and heat. Carpenter’s computer-generated Generation[s] was concerned with the computational production of text; The Gathering Cloud also engages with the generation of power. This book and the corresponding digital performance, for instance at the recent ELO Festival in Porto, yield up the rich results of research, cast in accomplished verse. As with Carpenter’s other work that is rooted in zines and the handmade Web, it is personal rather than didactic. Deftly, despite the gravity of the topic, the book still affects the reader with a gesture, not a deluge of facts — more by waving than drowning.

My @party Talk on Computer-Generated Books

I just gave a talk at the local demoparty, @party. While I haven’t written out notes and it wasn’t recorded, here are the slides. The talk was “Book Productions: The Latest in Computer-Generated Literary Art,” and included some discussion of how computer-generated literary books related to demoscene productions.

Sliders

Sliders front cover, with battlements

My minimal book Sliders has been published by my press, Bad Quarto. The book contains 32 poems, some of which are only one word long. In a break from tradition, they are not computer-generated.

Currently Sliders is only available for sale at the MIT Press Bookstore, 301 Massachusetts Ave, Cambridge, Mass.

Sliders back cover, with blurbs

Trope Tank Writer in Residence

The Trope Tank is accepting applications for a writer in residence during academic year 2016-2017.

The Trope Tank, 3 August 2016

Our mission is developing new poetic practices and new understandings of digital media by focusing on the material, formal, and historical aspects of computation and language. More can be discovered about the Trope Tank here:

http://nickm.com/trope_tank/

The main projects of the Trope Tank for 2016-2017 are Renderings and Heftings, as I’ve described for a forthcoming article in _Convolutions 4_:

> The **Renderings** project is an effort to locate computational
> literature in languages other than English — poetry and other
> text generators, combinatorial poems, interactive fiction, and
> interactive visual poetry, for example — and translate this work
> to English. Along the way, it is necessary to port some of this
> work to the Web, or emulate it, or re-implement it, both in
> the source language and in English. This provides the original
> language community better access to a functioning version
> of the original work, some of which originates in computer
> magazines from several decades ago, some of which is from
> even earlier. The translations give the English-language
> community some perspective on the global creative work that has
> been undertaken with language and computation, helping
> to remedy the typical view of this area, which is almost always
> strongly English-centered.

> **Heftings,** on the other hand, is not about translation into
> English; the project is able to include translation between any
> pair of languages (along with the translation of work that is
> originally multilingual). Nor does it focus on digital and computational
> work. Instead, Heftings is about “impossible translation” of all
> sorts — for instance, of minimal, highly constrained,
> densely allusive, and concrete/visual poems. The idea is that
> even if the translation of such works is impossible, attempts at
> translation, made while working collaboratively and in conversation
> with others, can lead to insights. The Heftings project
> seeks to encourage translation attempts, many such attempts
> per source text, and to facilitate discussion of these. There is no
> concept that one of these attempts will be determined to be the
> best and will be settled upon as the right answer to the question
> of translation.

The Trope Tank’s work goes beyond these main projects. It includes developing creative projects, individually and collaboratively; teaching about computing, videogaming, and the material history of the text in formal and informal ways; and research into related areas. Those in the Trope Tank have also curated and produced exhibits and brought some of the lab’s resources to the public at other venues. The lab hosts monthly meetings of the People’s Republic of Interactive Fiction and occasional workshops.

There are no fees or costs associated with the residency; there is also no stipend or other financial support provided as part of the appointment. A writer in residence has 24-hour access to and use of the Trope Tank, including space to work, power and network connection, and use of materials and equipment. As a member of the MIT community, a writer in residence can access the campus and check out books from the MIT Libraries. We encourage our writer in residence to attend research and creative discussions and join us in project work and other collaborations, but this is not expressed with a particular requirement to be in the Trope Tank some amount of time per week.

To apply, email me, Nick Montfort, at nickm@nickm.com with short answers (in no case to exceed 250 words each) to the following questions:

– What work have you done that relates to computation, language and literature, and the mission of the lab? Include URLs when appropriate; there is no need to include the URLs when counting words.

– How would you make use of your time in the Trope Tank? You do not have to offer a detailed outline of a particular project, but explain in some way how it would be useful to you to have access to the materials, equipment, and people here.

– What is your relationship, if any, to literary translation, and do you see yourself contributing to Renderings, Heftings, or both? If so, how?

– What connections could you potentially make between communities of practice and other groups you know, either in the Boston area or beyond, and the existing Trope Tank community within MIT?

Include a CV/resume in PDF format as an attachment.

Applications will be considered beginning on August 15; applicants are encouraged to apply by noon on that day.

We value diverse backgrounds, experiences, and thinking, and encourage applications by members of groups that are underrepresented at MIT.

Language Hacking at SXSW Interactive

We had a great panel at SXSW Interactive on March 11, exploring several radical ways in which language and computing are intersecting. It was “Hacking Language: Bots, IF and Esolangs.” I moderated; the main speakers were Allison Parrish, a.k.a. @aparrish; Daniel Temkin, DBA @rottytooth; and Emily Short, alias @emshort.

I kicked things off by showing some simple combinatorial text generators, including the modifiable “Stochastic Texts” from my Memory Slam reimplementation and my super-simple startup name generator, Upstart. No slides from me, just links and a bit of quick modification to show how easily one can work with literary language and a Web generator.
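For readers who have not seen one, here is a minimal sketch of the kind of combinatorial generator I mean: pick a word from each list and join them. The word lists are invented for illustration and are not taken from “Stochastic Texts” or Upstart.

```python
# A tiny combinatorial text generator: one random choice per slot.
# The word lists are invented for illustration only.
import random

adjectives = ["quick", "silent", "gathering", "stochastic"]
nouns = ["cloud", "pavement", "archive", "signal"]
verbs = ["drifts", "settles", "repeats", "scatters"]

def line() -> str:
    return f"The {random.choice(adjectives)} {random.choice(nouns)} {random.choice(verbs)}."

for _ in range(4):
    print(line())
```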

Allison Parrish, top bot maker, spoke about how the most interesting Twitter bots, rather than being spammy and harmful or full of delightful utility, are enacting a critique of the banal corporate system that Twitter has carefully been shaped into by its makers (and compliant users). Allison showed her own and others’ work; the theoretical basis for her discussion was Iain Borden’s “Another Pavement, Another Beach: Skateboarding and the Performative Critique of Architecture.” Read over Allison’s slides (with notes) to see the argument as she makes it:

Twitter Bots and the Performative Critique of Procedural Writing

Daniel Temkin introduced the group to esoteric programming languages, including several that he created and a few classics. He brought copies of a chapbook for people in the audience, too. We got a view of this programming-language creation activity generally – why people devise these projects, what they tell us about computing, and what they tell us about language – and learned about Temkin’s own practice as an esolang developer. Take a look at Daniel’s slides and notes for the devious details:

Esolangs: A Guide to "Useless" Programming Languages

Finally, interactive fiction author Emily Short reviewed some of the classic problems of interactive fiction and how consideration has moved from the level of naïve physics to models of social worlds – again, with reference to her own IF development and that of others. One example she presented early on was the challenge of responding to the IF command “look at my feet.” Although my first interactive fiction, Winchester’s Nightmare (1999), was not very remarkable generally, I’m pleased to note that it does at least offer a reasonable reply to this command:

Winchester's Nightmare excerpt

That was done by creating numerous objects of class “BodyPart” (or some similar name) which just generate error messages. I’m not sure it was a tremendous breakthrough, but I think there is something to the idea of gently encouraging the interactor to play within particular boundaries.
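In outline, the idea looks something like the following sketch. It is not the actual code from Winchester’s Nightmare (which was written in a dedicated IF system, not Python), and the class name and reply text here are stand-ins for illustration.

```python
# Sketch of the general pattern: body-part objects exist only to intercept
# "examine" with a fixed, reasonable reply. Names and text are illustrative.
class BodyPart:
    def __init__(self, name: str):
        self.name = name

    def examine(self) -> str:
        return "You needn't concern yourself with your own anatomy just now."

feet = BodyPart("feet")
print(feet.examine())  # the same canned reply for any body part
```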

Emily’s slides (offering many other insights) may be posted in a bit – she is still traveling. I’ll link them here, if so.

Update! Emily’s slides are now online — please take a look.

I had a trio of questions for each pair of presenters, and we had time for questions from the audience, too. The three main presenters each had really great, compact presentations that gave a critical survey of these insurgent areas, and we managed to see a bit of how they speak to each other, too. This session, and getting to talk with these three during and outside of it, certainly made SXSW Interactive worth the trip for me.

There’s an audio recording of the event that’s available, too.

Remarker #1 Is Out

Remarker #1

This month I published a zine in the form of a bookmark. It’s available by asking me for a copy, asking a contributor for a copy, or going to my local radical bookstore, Bluestockings, at 172 Allen Street, New York, NY. If you wish to find Remarker there you must, alas, look under the register among the freebies (and advertisements), not among the “grown up” zines. The upside is that Remarker is free.

Trope Tank Writer in Residence, Spring 2015

Andrew Plotkin, Writer in Residence at the Trope Tank for Spring 2015

This Spring, Andrew Plotkin (a.k.a. Zarf) is the Trope Tank’s writer in residence. Andy will be at the Trope Tank weekly to work on one or more of his inestimable projects — as a game-maker, programmer, and platform developer, he has been working furiously for many years. (His home page is modest in this respect; see also his latest game, Hadean Lands.)

A “Trope Report” on Stickers

Not literally on stickers, no. This technical report from the Trope Tank is “Stickers as a Literature-Distribution Platform,” and is by Piotr Marecki. It’s just been released as TROPE-14-02 and is very likely to be the last report of 2014. Here’s the abstract:

Contemporary experimental writing often directs its attention to its writing space, its medium, the material on which it is presented. Very often this medium is meaningful and becomes part of the work – the printed text transferred to another media context (for instance, into a traditional book) would become incomprehensible. Literature distributed on stickers is a form of writing that is divided into small fragments of text (a type of constrained writing), physically scattered in different locations. One of the newest challenges in literature is books with augmented reality (AR), which examine the relation between the physical (the medium) and virtual interaction. Sticker literature is a rather simple analog form of augmented reality literature. The stickers have QR codes or web addresses printed on them, so the viewer who reads/sees a random sticker in public space can further explore the text online. The viewer can read other parts of the text in photographs (the photograph being another medium) of other stickers placed in different locations. The author will discuss the use of stickers throughout literary history, beginning with the 20th-century French Situationists, through different textual strategies applied by visual artists, and ending with literary forms such as the sticker novel Implementation (2004) by Nick Montfort and Scott Rettberg or Stoberskiade (2013). The author will try to explain why writers decide to use this form, how the text is distributed and received, and how city space is used in such projects.

Renderings (phase 1) Published

For the past six months I’ve been working with six collaborators,

– Patsy Baudoin
– Andrew Campana
– Qianxun (Sally) Chen
– Aleksandra Małecka
– Piotr Marecki
– Erik Stayton

to translate e-lit, and for the most part computational literature works such as poetry generators, into English from other languages.

Cura: The Renderings project, phase 1

After a great deal of work that extends from searching for other-language pieces, through technical and computing development that includes porting, and into the more usual issues associated with literary translation, the first phase of the Renderings project (13 works translated from 6 languages) has just been published in Fordham University’s literary journal, Cura.

Please take a look and spread the word. Those of us rooted in English do not have much opportunity to experience the worldwide computational work with language that is happening. Our project is an attempt to rectify that.

This Thursday! In Stereo!

I will be reading from and discussing three recent books this Thursday at 7pm at the Harvard Book Store here in sunny Cambridge, Massachusetts. These are:

#!
Counterpath Press, Denver
a book of programs & poems (pronounced “shebang”)

World Clock
Bad Quarto, Cambridge
a computer-generated novel

10 PRINT CHR$(205.5+RND(1)); : GOTO 10
MIT Press, Cambridge
a collaboration with nine others that I organized, now out in paperback

These all express how programming can be used for poetic purposes, and how new aesthetic possibilities can arise with the help of computing. Also, some portions of these (which I’ll read from) are quite pleasing to read aloud and to hear.

I would love it if you are able to join me on Thursday.

My Boston-Area Events This Fall

Yes, the first event is today, the date of this post…

September 12, Friday, 6pm-8pm

Boston Cyberarts Gallery, 141 Green Street, Jamaica Plain, MA
“Collision21: More Human” exhibit opens – it’s up through October 26.
“From the Tables of My Memorie” by Montfort, an interactive video installation, is included.


September 18, Thursday, 7pm-8pm

Harvard Book Store, 1256 Massachusetts Ave, Cambridge, MA
Montfort reads from #!, World Clock, and the new paperback 10 PRINT
http://www.harvard.com/event/nick_montfort/


September 24, Wednesday, 7:30pm

Boston Cyberarts Gallery, 141 Green Street, Jamaica Plain, MA
Montfort joins a panel of artists in “Collision21: More Human” for this Art Technology New England discussion.
http://atne.org/events/sept-24th-collision21-more-human/


October 22, Wednesday, 6:30pm-7:30pm

The Atrium of MIT’s Building E15 (“Old Media Lab”/Wiesner Building)
Montfort reads from #! at the List Visual Arts Center
http://counterpathpress.org/nick-montfort


November 15, Saturday, 9am-3pm

MIT (specific location TBA)
Urban Poetry Lateral Studio, a master class by Montfort for MIT’s SA+P
http://sap.mit.edu/event/urban-poetry-lateral-studio


December 4, Thursday, 5pm-7pm

MIT’s 66-110
“Making Computing Strange,” a forum with:
  Lev Manovich (Software Takes Command, The Language of New Media)
  Fox Harrell (Phantasmal Media)
  moderated by Nick Montfort
The forum will examine the ways in which computational models can be used in cultural contexts for everything from analyzing media to imagining new ways to represent ourselves.
http://web.mit.edu/comm-forum/forums/makingcomputing.html

Texto Digital Seeks Papers (in Many Languages)

A correspondent in Brazil sends news of a new call for papers in the journal Texto Digital. The recent issues have been almost entirely in Portuguese, but the journal is reaching out and seeking submissions in several languages. I think you can tell from the title (even if your Portuguese is a bit rusty) that this publication focuses on some very Post Position (and Grand Texto Auto) sorts of topics. Here’s the call:


Texto Digital is a peer-reviewed electronic academic journal, published twice annually in June and December by the Center for Research in Informatics, Literature and Linguistics – NuPILL (http://www.nupill.org/), linked to the Postgraduate Program in Literature, the Department of Vernacular Language and Literatures and the Center of Communication and Expression at the Federal University of Santa Catarina (UFSC), Brazil.

Texto Digital publishes original articles in Portuguese, English, Spanish, French, Italian and Catalan which discuss several theoretical implications related to the texts created/inserted in electronic and digital media.

Interdisciplinary by nature and range, as implied in its title with the words “text” and “digital”, the journal embraces the fields of Literature, Linguistics, Education, Arts, Computing and others, in their relation to the digital medium, yet without privileging any specific critical approach or methodology.

In addition to the Articles section, Texto Digital has specific sections devoted to publishing digital works of art, as well as interviews with recognized researchers and/or digital artists.

Once submitted, all articles that meet the general scope of the journal and its guidelines will be considered for peer-reviewed publication, even in the case of issues that favor a particular subject matter.

CALL FOR PAPERS – TEXTO DIGITAL

Texto Digital, the electronic journal published by the Center for Research in Informatics, Literature and Linguistics (NuPILL) at the Federal University of Santa Catarina (UFSC), Brazil, informs that submissions for articles are open until October 15th, 2014.

We accept papers that analyse the relationships of digital media with one or more of the following subjects: literature, teaching processes (reading and writing in particular, but not restricted to these), language studies, and the arts in general. Accepted papers will be published in our December issue (n. 2/2014).

Submissions to the journal have been open on a continuous-flow basis since September 1st, 2014, for academic papers that fit its scope. Our publication standards and guidelines are available at https://periodicos.ufsc.br/index.php/textodigital/about/submissions#authorGuidelines. Only papers in accordance with these criteria will be accepted.