GUEST FORUM: Adapting College Writing for the Age of Large Language Models such as ChatGPT: Some Next Steps for Educators


(Updated 4/17/2023 to include new links)

By Anna Mills and Lauren M. E. Goodlad (licensed CC BY-NC 4.0)

Large language models (LLMs) such as ChatGPT are sophisticated statistical models that predict probable word sequences in response to a prompt even though they do not “understand” language in any human-like sense. Through intensive mining, modeling, and memorization of vast stores of language data “scraped” from the internet, these text generators deliver a few paragraphs at a time that resemble writing authored by humans. This synthetic text is not directly “plagiarized” from any original, and it is usually grammatically and syntactically well-crafted.
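For readers who want a concrete sense of what “predicting probable word sequences” means, here is a toy sketch of our own in Python. It is not how an actual LLM is built (those rely on vast neural networks trained on enormous corpora), but it illustrates the core idea: tally which words follow which in prior text, then “predict” the most frequent continuation.

```python
# Toy illustration of "predict the next word from statistics of prior text."
# Real LLMs use deep neural networks, not simple bigram counts, but the goal
# is the same: produce a probable continuation, not an "understood" one.
from collections import Counter, defaultdict

corpus = (
    "the essay argues that writing is thinking and writing is revision "
    "and revision is thinking again"
).split()

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_probable_next(word):
    """Return the statistically most common next word, or None if unseen."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_probable_next("writing"))  # "is" -- the most frequent follower
print(most_probable_next("is"))       # "thinking" (seen twice vs. "revision" once)
```

Scaling this idea up to billions of parameters and terabytes of scraped text is what produces synthetic prose that reads fluently without any grounding in understanding.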


From an academic integrity perspective, this means that “AI”-generated writing  

1) is not easily identifiable as such to the unpracticed eye;  

2) does not conform to “plagiarism” as that term is typically understood by teachers and students; and  

3) encourages students to think of writing as task-specific labor disconnected from learning and the application of critical thinking. 

Many teachers who assign writing are thus understandably concerned that students will use ChatGPT or other text generators to skip the learning and thinking around which their writing assignments are designed. 

In the future, the producers of language models may offer tools for identifying texts generated by their systems. Such tools may arise independently, like a recent app that claims to identify ChatGPT’s outputs. It is also possible that government regulators and other policy-making bodies will become involved in overseeing the use of LLMs in educational settings. New York City schools have already banned ChatGPT, and many experts argue that the abuse of LLMs could extend far beyond the impact on students.  

As a teacher and textbook author, one of us (Anna Mills) has been collecting multiple perspectives on the topic for the Writing Across the Curriculum Clearinghouse. The other of us (Lauren Goodlad) is the chair of the Critical AI @ Rutgers initiative and the editor of Critical AI. Though both of us feel strongly that unsupervised use of LLMs for student assignments is detrimental to learning, we believe that, in the short run, a combination of the practices described below will effectively discourage students from submitting machine-generated writing as their own. At the very least, any student determined to use text generation will encounter significant obstacles.

In the long run, we believe, teachers need to help students develop a critical awareness of generative machine models: how they work; why their content is often biased, false, or simplistic; and what their social, intellectual, and environmental implications might be. But that kind of preparation takes time, not least because journalism on this topic is often clickbait-driven, and “AI” discourse tends to be jargony, hype-laden, and conflated with science fiction. (We offer a few solid links below.) In the meantime, the following practices should help to protect academic integrity and student learning. At least some of these practices might also enrich your teaching.  

Common practices that can be updated in the current context

[Image: Portrait of Christine de Pizan writing, British Library]
  • Encourage intrinsic motivation. Most educators already strive to make their assignments engaging, but it’s worth emphasizing that students who feel connected to their writing will be less interested in outsourcing their work to an automated process.  
  • Highlight how the writing process helps students learn. Make explicit that the goal of writing is neither a product nor a grade but, rather, a process that empowers critical thinking. Writing, reading, and research are entwined activities that help people to communicate more clearly, develop original thinking, evaluate claims, and form judgments.
  • Update academic integrity policies to make them explicit about the use of automated writing tools. Academic integrity policies and honor codes should specify what, if any, use of automated writing assistance is appropriate (teachers may wish to consult departmental or institutional policies).  
  • Ask students to affirm that their submissions are their own work and not that of another person or of any automated system. This practice has long been used to deter plagiarism and can be adapted to include text generation. One of us asks students to add the following statement, along with their initials, when they turn in written work: “I certify that this assignment represents my own work. I have not used any unauthorized or unacknowledged assistance or sources in completing it, including free or commercial systems or services offered on the internet.” 

Practices we recommend

  • Let students know that detectors for identifying “AI”-generated text exist and are improving. While these tools should not be relied upon as a “silver bullet,” students should know that detection is possible. However, teachers who use these tools as a check on potential policy violations should bear in mind that detectors may produce false positives and false negatives. Students should also understand that this technology is rapidly evolving: future detectors may be able to retroactively identify auto-generated prose from the past. No one should present auto-generated writing as their own on the expectation that the deception is undiscoverable. 
  • Assign prompts that state-of-the-art systems such as ChatGPT are not good at. The tasks below are either impossible for ChatGPT to perform reliably or require the student to supervise and edit in ways that demand significant expertise, rhetorical skill, and time. As such, students who are simply eager to earn a good grade with minimal effort will likely find such assignments difficult to automate. These requirements may also make assignments more robust in other ways.
      • Requirement for verifiable sources and quotations. ChatGPT currently fabricates sources and quotations (though it may occasionally hit on a real author or title). GPT-3 is even more prone to this “hallucination.” Students using either model would need to find and input sources and quotations themselves. 
      • Analysis of specifics from images, audio, or videos. Students would need to describe these kinds of media in detail in order to generate automated outputs about them. 
      • Analysis of longer texts (too long to fit within the limited prompt windows of automated systems). Although tools exist for summarizing longer texts, using them adds another obstacle to easy automation. Such programs may also introduce errors that make automated text easier to detect.  
      • Analysis that draws on class discussion. This criterion requires the student to input notes from class discussion, which takes time and effort. 
      • Analysis of recent events not included in the system’s training data. Students would need to do their own research and then feed that information into the automated system.
      • Assignments that articulate nuanced relationships between ideas. Such assignments could entail comparing two passages that students themselves choose from two assigned texts. Students might be asked to explain a) why they chose these particular passages; b) how the chosen passages illuminate the whole of the texts from which they were excerpted; and then c) how the two passages compare according to instructions that bear on the course themes or content. LLMs usually cannot do a good job of explaining how a particular passage from a longer text illuminates the whole of that text. Moreover, ChatGPT’s outputs on comparison and contrast are often superficial. Typically the system breaks a logical comparison into bite-size pieces, conveys shallow information about each of those pieces, and then formulaically “compares” and “contrasts” in a noticeably superficial or repetitive way. 
  • Assign in-class writing as a supplement to or launching point for take-home assignments. Students may be more likely to complete an assignment without automated assistance if they’ve gotten started through in-class writing. (Note: in-class writing, whether digital or handwritten, may have downsides for students with anxiety or certain disabilities.) 

Additional practices you might wish to undertake 

  • Assign steps in the writing process and/or reflection on that process. Many instructors already include these practices in their approach to teaching writing. Note that ChatGPT can produce outputs that take the form of  “brainstorms,” outlines, and drafts. It can also provide commentary in the style of peer review or self-analysis. Nonetheless, students would need to coordinate multiple submissions of automated work in order to complete this type of assignment with a text generator.  
  • Hold individual conferences on student writing or ask students to submit audio/video reflections on their writing. As we talk with students about their writing, or listen to them talk about it, we get a better sense of their thinking. By encouraging student engagement and building relationships, these activities could discourage reliance on automated tools. 
  • If you are curious about the technology, test your own writing assignments. Once you sign up for an account, it is straightforward to test an assignment using ChatGPT by feeding in the instructions and other required information (such as a short text). Users of these models can learn to improve their outputs by prompting the model to add or revise. (Anna has compiled a set of examples).
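For teachers comfortable with a little scripting, the same kind of test can also be run programmatically rather than through the chat interface. The sketch below is one possible approach using OpenAI’s official Python client (v1.x); it assumes the openai package is installed, an API key is stored in the OPENAI_API_KEY environment variable, and the model name shown is only an example.

```python
# A minimal sketch (ours) for testing a writing prompt against a language model
# from a script. Assumes the `openai` Python package (v1.x) is installed and an
# OPENAI_API_KEY environment variable is set; the model name is only an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assignment_prompt = (
    "Compare the two passages below and explain how each one illuminates "
    "the longer text from which it was excerpted.\n\n"
    "[paste the assignment instructions and passages here]"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name; substitute whatever you use
    messages=[{"role": "user", "content": assignment_prompt}],
)

# Read the output with the weaknesses discussed above in mind: fabricated
# sources, superficial comparison, and claims untethered to class discussion.
print(response.choices[0].message.content)
```

Running several of your own prompts through a script like this can make it easier to see which assignment designs yield the shallow, source-free responses described above and which genuinely resist automation.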

Practices you might wish to undertake once you learn more about language models

  • Teach students about text generators and “AI.” Students are more likely to misuse text generators if they trust them too much. The term “Artificial Intelligence” (“AI”) has become a marketing tool for hyping products. For all their impressiveness, these systems are not intelligent in the conventional sense of that term. They are elaborate statistical models that rely on massive troves of data, often scraped indiscriminately from the web and used without knowledge or consent. No matter how seemingly magical, text generators do not understand language (or anything else) the way that humans do. Because they reproduce patterns in that scraped data, LLMs often mimic the harmful prejudices, misconceptions, and biases found on the internet. (See the resources below for some preliminary introductions to the subject.) 
  • Show students examples of inaccuracies, biases, and logical and stylistic problems in automated outputs. We can build students’ cognitive abilities by modeling and encouraging this kind of critique. Given that social media and the internet are full of bogus accounts using synthetic text, alerting students to the intrinsic problems of such writing could be beneficial. (See the “ChatGPT/LLM Errors Tracker,” maintained by Gary Marcus and Ernest Davis.) Note that teaching students about LLMs in this way is a different practice from using ChatGPT as a means of teaching students how to write (a practice that some are recommending but which we regard as having limited value). Since ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content, it seems like a poor foundation for building students’ skills and may circumvent their independent thinking. The tool seems more beneficial for those who already have a lot of experience writing, not for those learning how to develop ideas, organize thinking, support propositions with evidence, conduct independent research, and so on. 
  • Join in discussions with fellow teachers, policymakers, and industry to explore regulations and tools that will help educators support student learning in the age of “AI” text generators.

Practices we do not recommend 

  • Requiring handwritten submissions. Writing by hand is difficult for many students, especially students with certain disabilities and students who use voice-to-text software. To be sure, some studies show that students remember things better when they take notes with pen and paper rather than laptops or other devices. Writing by hand might be offered as an option for some assignments.
  • Adopting surveillance tools. Some advocate software that records students’ entire writing process, systems that have been shown to be biased and inaccurate. Such for-profit surveillance tools are highly intrusive and (like language models themselves) potentially exploitative. They may also be ineffective and inequitable.
  • Adopting systems trained to recognize specific student writing. In theory it is possible to determine the authorship of a text by training a system to recognize an individual student’s patterns of word choice and syntax. Like the surveillance systems above, such technology is prone to unreliability, exploitation, and abuse.  

Further resources on text generators

(Updated 4/17/2023 to include new links)

Note: Good journalism on language models is surprisingly hard to find since the technology is so new and the hype is ubiquitous. Here are a few reliable short pieces.    

The academic article below is now a classic:

A sample of resources on “AI” more generally (Updated 1/19)

To share your ideas or offer advice, please feel free to comment below (the comments are moderated) or write to one or both authors. 

Anna Mills: armills@marin.edu

Lauren Goodlad: lauren.goodlad@rutgers.edu

In addition, the Modern Language Association and the Conference on College Composition and Communication have convened a task force on this topic (Anna is a member). Anyone who would like to contact the task force can write to Paula Krebs (MLA director) or Holly Hassel (CCCC director).

3 comments

  1. Great points. You and others might also find of interest my Forbes column coverage on AI Ethics and AI Law related to generative AI and ChatGPT, including this piece about student use and teachers: https://www.forbes.com/sites/lanceeliot/2022/12/18/enraged-worries-that-generative-ai-chatgpt-spurs-students-to-vastly-cheat-when-writing-essays-spawns-spellbound-attention-for-ai-ethics-and-ai-law/

    Another piece covers why the claims of being able to use a specialized AI app to detect human versus ChatGPT output are misleading and lamentably going to cause trouble: https://www.forbes.com/sites/lanceeliot/2023/01/12/debunking-those-bonehead-claims-about-being-able-to-use-special-purpose-ai-to-readily-spot-generative-ai-chatgpt-produced-essays-plus-caustic-call-outs-by-ai-ethics-and-ai-law/

  2. This is very helpful. I think the practice of reviewing multiple drafts for an assignment, as well as conferencing with students in and out of class during a writing task, can help. Fundamentally, I think there’s really not a great deal of difference between a student cutting and pasting and a student using AI.

    A couple of years ago, I had a very savvy student write a paper on a forerunner of ChatGPT; he included a paragraph written by the software and asked me if I could identify it. It took a bit of time (10-15 minutes), but I found it. When he asked what tipped me off, I told him: the lack of hard facts of any kind, no real analysis or connection to the informative paragraphs that preceded and followed it, and the lack of a writing signature similar to the rest of the paper. This corroborates many of the points the authors make here about the characteristics of ChatGPT-produced writing.
