By Nicholas Birns
A few months ago, my wife and I took an online course on how to use ChatGPT-4o. The course, on its own terms, was well delivered. But as a literary critic who has taught writing at the college level for thirty-five years, what struck me most was the course’s attitude toward the process of writing. That writing was portrayed as a time-consuming inconvenience that this new technology could ameliorate was, as I saw it, a highly disturbing portent.
Let me clarify that I do not think it behooves literary critics to be a priori hostile to new digital technologies. If automated text generation can assist with literary criticism in some fashion—much as computation of various kinds has assisted in digital humanities research—I do not think literary scholars should be obstructionist about that. But I agree with those who argue that critics like me need to understand what text-generating products like ChatGPT can and cannot do—a task of building critical AI literacies that educators must, alas, add to an already heavy workload (usually without adequate support from their paymasters). This thinkpiece began with the goal of sharing my experiences with others curious about what the road to critical AI knowledge-building might yield. But it turned into a deeper meditation on what writing is, why reading literature and writing literary criticism still matter, and how these fundamental questions remain central to the human condition.
The Ontology and Epistemology of ChatGPT
Our teacher’s implicit attitude was that ChatGPT was now the default approach to fulfilling writing-related tasks of any sort, by enlisting ChatGPT to output a basic draft. After this, the human user could take over to “sculpt” as needed. The secret to this practice, we were taught, is prompt chaining: a technique that involves directing a question to ChatGPT and then using its answer as the basis for the next in the series (and so on). If this process yielded content that did not fit the purpose, the next step would be adjusting the prompts, followed by “sculpting.”
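For readers who want the workflow spelled out procedurally, prompt chaining can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the `ask` function is a stand-in for a real chatbot request (here it merely echoes canned text), and all names are my own rather than anything taken from the course or from any vendor's API.

```python
# A toy sketch of the "prompt chaining" workflow: each answer is folded
# into the next prompt. The `ask` function is a hypothetical stand-in
# for a call to a hosted chatbot; it simply echoes canned text so that
# the control flow is visible end to end.

def ask(prompt: str) -> str:
    """Stand-in for an LLM call: returns a canned 'answer'."""
    return f"[model's answer to: {prompt!r}]"

def chain_prompts(prompts: list[str]) -> list[str]:
    """Run prompts in sequence, building each on the previous answer."""
    answers: list[str] = []
    context = ""
    for p in prompts:
        full_prompt = f"{context}\n{p}" if context else p
        answer = ask(full_prompt)
        answers.append(answer)
        context = answer  # the next prompt is chained onto this answer
    return answers

results = chain_prompts([
    "Draft brochure copy for a bed and breakfast on the Paulins Kill.",
    "Now suggest weekend events for the property described above.",
])
```

Notice what the loop does not contain: any check that an answer is accurate or apt. Whatever judgment enters the process must be supplied by the human doing the prompting and the "sculpting."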
This guise of autonomy raises a question that tech companies and their marketing teams are loath to discuss. If one outsources the writing of some needed content to ChatGPT—either because one does not like writing, or finds the writing too arduous—then why would writing bespoke prompts and “sculpting” the outputs be any better or easier than just sitting down and grinding out the actual copy? Isn’t anyone who can write detailed prompts, measure their efficacy, iterate if needed, and then sculpt for the final touches someone who, by definition, already has the skills of an experienced writer? This centering of the machine and decentering of the flesh and blood “user” does not even make sense in purely utilitarian terms (see also Crano 2025). Here was the rub.
It should be established from the first that ChatGPT (like all systems based on large language models [LLMs]) does not, as I eventually learned, either ontologically or epistemologically resemble the human conversant that it is designed to simulate. This is the topic of numerous articles in the field of critical AI studies. To illustrate the latter point through a real-life experience, consider the assignment for the online course which tasked us to use ChatGPT to create brochure copy for a nonexistent bed and breakfast. My wife chose to situate this imaginary hospitality spot along a real-world river, the Paulins Kill, in rural New Jersey. When she prompted the bot with this information, it immediately turned out a result that depicted the river as “rural” and “picturesque.”
As we immediately noted, the Paulins Kill area (which is known for trout fishing, but is not particularly scenic) is rural, but not picturesque. Here was our first experience of what Johan Fredrikzon (2025) describes as “the supreme confidence” with which LLMs deliver outputs, regardless of whether they are accurate, semi-accurate, or downright nonsensical (so-called hallucinations).
A human copywriter would, no doubt, begin by seeking information about the region. By contrast, ChatGPT leveraged its model of statistical correlations gleaned from training data to predict that a “rural bed and breakfast” is likely to be “picturesque.” The model’s “supreme confidence” is, thus, a kind of statistical bluster: a rhetorical certainty that lulls uncritical users into passive positions of unwarranted trust and belief. Another example: when asked to ideate about possible events at the bed and breakfast (for which the bot had chosen the comely name “Whispering Waters”), ChatGPT came up with a fake fantasy novel by a fake fantasy novelist to promote at a fall weekend: Into the Chathar by “Jon Liebmann Krause.” [1] When we reproduced this imaginary novelist in our next prompt—dutifully following instructions for prompt chaining—ChatGPT responded by expanding on Krause, turning him into a viable celebrity with a fan base that, though “niche,” would wish to attend the weekend.
Our thought at the time was that ChatGPT is like a person with very bad short-term memory who forgets that he has made up an author and canon out of whole cloth and, therefore, cannot depend on his fans as paying customers. As we later came to realize, the real issue is not poor memory but, rather, ChatGPT’s fundamental inability to distinguish between reality and fiction: what Fredrikzon (2025) calls its “epistemological indifference.” That is, ChatGPT did not actually know that it had fabricated an author out of whole cloth, any more than it “knew” that it had mischaracterized the Paulins Kill as “picturesque.” That is because, unlike a person, ChatGPT does not draw on conceptualized understandings of the world that are cultivated through lived and embodied experience. This is why implementing LLMs as personified chatbots, while using human data workers to improve their outputs, is so deceptive. At bottom, the system’s apparent fluency is predicated on the recycling of common patterns that are statistically proximate to the user’s inputs: a relation between strings of numbers in the model’s mathematical representation of linguistic patterns observed during training. A sample of the best of the model’s outputs is then identified by human workers and programmatically “reinforced” (see Stone, Goodlad, and Sammons 2024).
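That statistical logic can be caricatured in miniature. The sketch below is a deliberately crude bigram counter, nothing like how an LLM actually works (real systems train neural networks over subword tokens at vast scale), and its "training corpus" is invented for the purpose. But it makes the underlying principle concrete: the "prediction" tracks the frequency of patterns in the training text, with no access to whether the pattern is true of the world.

```python
from collections import Counter, defaultdict

# Toy caricature of statistical next-word prediction. In this invented
# corpus, "was" is followed by "picturesque" more often than by anything
# else, so that becomes the model's confident continuation -- regardless
# of what any real place is actually like.
corpus = (
    "the rural bed and breakfast was picturesque . "
    "the rural inn was picturesque . "
    "the rural lodge was quiet ."
).split()

following = defaultdict(Counter)  # word -> counts of the words after it
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent continuation observed in the corpus."""
    return following[word].most_common(1)[0][0]
```

On this corpus, `predict("was")` returns `"picturesque"`: a "supremely confident" output that reflects nothing but pattern frequency.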
We next added to our prompt series the name of a real-life musician who had played at our wedding—a figure respected and prominent but not truly famous. We said that he would be appearing at the bed-and-breakfast weekend along with “Krause.” ChatGPT warned that both figures were “niche” and would generate attendance only from their hardcore fans. Once again, the chatbot illustrated its inability to distinguish fact from fiction when it responded to its own invented novelist and our real-life musician as if they shared the same ontological status. We wondered if the absence of the made-up “Krause” from its model of training data accounted for the attribution of a “niche” fandom. Only later did we learn that this was an incorrect surmise about how statistical inference works. Once again, it was symptomatic of our very human propensity to compare the behavior of an ostensibly fluent language model to that of a human being. Unbeknownst to us at the time, we were struggling with a version of a problem that the computer scientist Joseph Weizenbaum first noticed in the 1960s and which has come to be known as the ELIZA effect: that is, “the tendency of people to project humanlike status onto computational systems that articulate even minimally sensible language” (Stone, Goodlad, and Sammons 2024).
What we really needed was the knowledge that, as S. Casey Laizure (2024) has pithily put it, “ChatGPT doesn’t know what it is saying and doesn’t know what you are asking.” These systems are “epistemologically indifferent,” in Fredrikzon’s (2025) words, because of their “lack of a structured knowledge representation of any recognizable kind.” The problem for a novice like me, undertaking a first adventure in the chaining of prompts and subject to the ELIZA effect, is that it is tempting to think of the bot as a lucid, rational actor that can be gently corrected—as one might correct a promising, intuitively gifted, but inexperienced student. This is partly the result of deliberate design choices. Far from an inexperienced student, gifted or not, ChatGPT’s epistemic illusions are made possible by memorized patterns and statistical weights juiced through “reinforced” human feedback and a few other technical workarounds (see Pascal forthcoming in Critical AI). Thus, unlike a faulty human, ChatGPT partakes in something like a spurious filling in of the blanks: a highly engineered conjuring trick that seems to deliver knowledge when there is none.
In a similar vein, we found ourselves wondering about the “Chathar” in Into the Chathar. Some psychotic cult that worshipped an obscure fruit? A galactic vortex? A fictional forest? Perhaps the model predicted that a proper setting for a fantasy novel would be vaguely non-European-sounding and exoticist. If so, ChatGPT would be mimicking the colonial assumptions that are “baked into” the fantasy genre and, thus, prominent in the relevant training data. In retrospect, I might have asked ChatGPT, but I lacked the presence of mind—and, as I later learned, anything that ChatGPT says about its own operations may be unreliable and will likely be mediated by developer scripts and preferences. As the novelist Ted Chiang (2017) has suggested, metacognition is one of the key differences between human thinking and optimized pattern-finding. Unlike my students, ChatGPT cannot stand back and contemplate its own actions—though, thanks to deliberate design choices, it may use language that suggests that it can.
But to me and many others who are thinking about this topic, those human choices, and the metacognitive reflections that support them, go to the very heart of the writing enterprise. Even a beginning writer in an introductory college composition class can explain their choices in some fashion. Indeed, such courses are designed precisely to build on the student’s capacity to regard the writing process as an opportunity to exercise critical choices while developing the rhetorical skills necessary for articulating them to others. A small business owner lacking the capital to hire an experienced marketing professional would, to my mind, be better off hiring an enterprising undergraduate to write some copy. The student would at least minimally be able to ground their epistemic assumptions in some way—they would not pass off correlations as if they constituted confident knowledge.
Of course, some may disagree. Not all students will find a suitable template for brochure copy and not all business owners will recognize such a problem. Moreover, my advice overlooks the enticements of cheap, convenient, and supposedly “frictionless” content generation. Still, I am tempted to say that you get what you pay for.
Generative AI and the Research Enterprise
Still curious, I pursued a question more immediately relevant to my professional expertise (literary critics reading this will understand). I asked ChatGPT to name the leading Chaucer scholar. Lo and behold, the first name mentioned was W. W. Skeat, the famed late-Victorian era Chaucerian who has been dead since before my grandmother was born.
Eventually, further questions conjured more contemporary names—some equivalent to what I would have answered. In answer to the question of who the leading scholars of a particular canonical author were, ChatGPT came up with the name of someone who had long ago left the profession. My impression was of a bias toward the long-established or the long-dead. When the bot came up with my name as the leading expert in one area, I was momentarily flattered until I saw myself paired with a scholar who died two years ago. The mortality of academics is not an AI forte.
Taken as a whole, ChatGPT’s generalizations about scholarship are both too hyperbolic and too haphazard to command real authority. Clearly, ChatGPT cannot draw nuanced inferences about research expertise from its training data. Instead it favors laudatory generalities. In this context it is worth repeating that ChatGPT is not a database or a continuously updated crowd-sourced resource like Wikipedia. Nor (see Shah and Bender 2022) does it operate like conventional search engines that, at their best, provide links to sources that are algorithmically predicted to represent the most relevant, popular, and/or reliable sources on the internet. As Shah and Bender explain, the statistical modeling of training data privileges the quantity of patterns in data, with no direct means of signaling the quality of those patterns. Far from providing any thoroughgoing percipience to its users, when ChatGPT becomes a “go to” resource for information, what gets sacrificed is the plurality and serendipity that more time-intensive and unpredictable research processes tend to yield.
In so doing, systems like ChatGPT suppress the “friction” that is integral to strong habits of information retrieval (Allison and DeRewal 2024). My own experiments suggest that is certainly true in the field of literary criticism. Research in the humanities should yield a sense of multiple possibilities that may, between themselves, be irreconcilable and yet testify, in their range of conjecture, toward a heuristic interpretive field. By contrast, ChatGPT promotes a consensus that smooths out the truly generative contradictions. Moreover, with its tendencies to privilege the dead and out-of-date, ChatGPT, at least as far as literary criticism goes, is more likely to provide stale, tendentious, and misleading ideas than an undergraduate’s random pass through a Google Scholar search or—dare I say it—a look at the library shelves. A carapace of statistical flattening prevails over the messier and more vital plurality of information that is actually, or at least potentially, at our disposal. The presumed “ontology” of ChatGPT turns out to be no ontology at all.
“Writing” Without Writing
Given this inescapable, if inevitably disappointing reality, how are we to frame and mediate the effect that ChatGPT and its correlates will doubtless exert on humanities disciplines such as literary criticism? One immediate ramification, I suggest, is to concertedly prioritize acts of primary reading and writing. The experience of querying ChatGPT for information reminded me of an undergraduate professor of mine who advised her students to set aside secondary criticism. “If you have time to read anything else but the work you are analyzing, read another primary work of the same author or period.” My professor was not disparaging the value of secondary criticism or being deliberately formalist. She was instead assuming that undergraduates would probably not know how to find the most cogent criticism in the library. And she believed that secondary work, even if well-done, could distort our perception of the primary work. She was telling us to wait to make any foray into the secondary work until we had learned to fully engage with the primary work. Although I did not always obey her injunction, I well understood the logic behind it. A student’s most meaningful thinking about any object of analysis should, to the extent possible, not begin by being derivative of some other thinking. Eventually, one learns the habit of reading secondary criticism (as one does everything else) critically. But those habits take time to cultivate.
By contrast, ChatGPT crystallizes a kind of potted, half-outdated version of the conventional wisdom. In this way it makes palpable what Lionel Trilling (1950: 206) called “the hum and buzz of implication.” Trilling was referring to what was “in” or accepted: playing with ChatGPT on matters of literary criticism—and much else, I suspect—makes one aware that said hum and buzz are better off implied than realized. To be sure, people are encouraged to go to these mechanisms to save time. But this effort to minimize effort may end up costing far more than what any student or researcher should be willing to pay. In her evaluation of NotebookLM as a tool for research, Tiffany DeRewal (2025) shows how such products sidestep the cognitive and experiential trajectories of process through which so much genuine learning and research occurs. As with most generative AI applications, the most adept users are likely to be experienced scholars, fully and arduously trained in the art of research and analysis. How will society sustain such practices and produce the next generation of scholars if higher education loses the war on “friction”?
The advent of LLM-based text generators is not just an opportunity to defend literary thinking but also to understand what literary thinking is and does. Though the field has its own issues with trends and herd mentalities, literary criticism has historically had little patience for the parroting of simplified consensus positions—much less the goal of conducting “research” at the touch of a button. In my current work in progress, my experiences with ChatGPT have led me to thinking about how literary criticism, going back to Aristotle, is in the realm of production—in other words, of action. When critics actively analyze the production of others, they undertake a mode of production themselves. Without being purely functional or utilitarian, words are deeds to literary critics. If, in Kathy Eden’s words (1986: 34), literature is part of “the practical or active life,” literary works can practically operate in the realm of causation. But merely to observe the difference between the literary critic and the “stochastic parrot” (Bender et al. 2021) is not to sufficiently contend with the challenge of LLMs. That these systems use words as their medium makes any assessment of post-generative AI textuality a tangled one.
In contrast to the earlier effects of the internet, generative AI is not just a computerized information technology, but also a technology that renders legible text in the style of human-wrought representation. For example, with only minimal instructions from a human user (perhaps by means of prompt chaining), generative AI can produce something that readers could recognize as a novel. I set aside whether anyone would knowingly wish to read such a novel (though we know that plenty of people are right now duped into buying them on sites like Amazon which allow them to be passed off as the work of a favorite author; see Jones [2025]). For now, I am more interested in recalling Chiang’s point about the machine incapacity for metacognition; ChatGPT’s capacity to “write” a “novel” points to a fundamental incapacity. I don’t mean that one could not prompt ChatGPT to generate a novel in which, say, a generative AI system reflects on generative AI or shares its “thoughts” on any other topic. But while generative AI is perfectly capable of going “meta” in this performative way, it cannot take account cognitively of its own representative action.
To make this point clearer, consider the work of Alexander Manshel (2020), who has argued that, relative to the fictional depictions of earlier media such as radio and television, the fictional depiction of the internet and mobile devices in comparable early twenty-first century fiction is “uncannily rare.” “Though digital technology is now instrumental in the research, writing, publication, marketing, purchasing, and even reading of contemporary fiction, it remains largely absent from literary fiction itself” (53). What the argument overlooks, however, is the possibility that the novelists of this era have consciously chosen how to represent this technology, through a deliberate reduction in degree. That is, a human author might set her novel in, say, 2007 but consciously underrepresent the presence of the internet at that time for strictly novelistic reasons. Yet such a novel would reveal more interpretive knowledge about the internet than if one instructed ChatGPT to produce a novel of so many words, set in 2007, which depicted the modes of online use available at that time, even though the latter text would almost certainly discuss the internet quite explicitly. Put differently, LLMs are unable to really “write” a novel in the way that human authors do, but their ability to give the impression that they can explodes the differential wall between literary representation and the depiction of technology.
Whereas radio, television, and the internet diverted some literate people from the act of reading fiction (and perhaps reading anything), the marketing and implementation of text-generating chatbots pose the danger of diverting many people who communicate in text from the act of writing as I conceive it (see also Kitzinger 2024). In this sense generative AI is not simply a media advance of the kind that interests Manshel. It is also an instance of what Dustin Edwards (2025) calls “digital damage.” Edwards shows that the building of centers for data processing and mining inflicts significant, tangible harm to land and water resources. I believe that the chatbotization of writing could inflict parallel damage on the cognitive and imaginative resources of young people today and future generations. Some may quibble about the extent of the damage and they may quibble about the meaning of “writing.” Nevertheless, a mechanism with the apparent ability to generate textual content that rhetorically resembles human writing but which, in reality, emerges from a wholly or largely automated process will, at the very least, require critics (among others who study writing) to think much more about the advent of legible bodies of textual content borne of cognitive absence and epistemological indifference.
It is still too early to know how far the adoption of generative AI will take us given that there are already signs of a backlash. I believe that one reason for the anxiety about generative AI undermining the literature classroom is that, as a society, we have not, for some time, valued literature or literary criticism nearly enough. In particular, we do not see literary criticism as an act of production. The best way to counter epistemological indifference, I contend, is not just by knowing but by doing: by taking action.
In the literature classroom that action begins with reading. That many people have come to believe that reading a summary of a literary work is somehow sufficient seems implicitly to mark a lack of understanding of what it means to read a literary work—a question that literary critics may need to take up anew. What critics such as Terry Eagleton (1983) marked forty years ago as hostility to theory seems to have degenerated into a full-blown hostility to reading books that predates chatbots by at least a decade. Cryptocurrency magnate Sam Bankman-Fried infamously commented that he never read anything longer than a blog post of a few paragraphs (see Roberts [2022]). His comment was edgy to be sure, but, sadly, hardly a complete outlier. Now the developers of commercialized chatbots seek to “solve” the problem of “writing” about what one has never actually read through the technically mediated recycling of consensus-driven textual fodder.
As Chloë Kitzinger (2024) has argued, Plato’s writing on writing in his Phaedrus is not (as many believe) a lesson in the futility of resisting a destabilizing new technology; rather, as Jacques Derrida’s rereading of that text helps us to recognize, what generative AI outputs has none of the concerted action, multivalence, or possibility of “writing” in either Plato’s or Derrida’s sense. Without that multivalence, a new technology for “writing” is less a form than a simulation of communication. Ricky D’Andrea Crano (2025) notes the “techno-solutionism” of overoptimistic discourse about AI. This is the latest avatar of a long-enduring hostility to theory. As the linguist Roman Jakobson put it (1987: 29–30), theory involves a "laying bare of the device"—that is, an exposure of the vulnerabilities within intellectual paradigms that may otherwise seem plausible (Jakobson qtd. in Bradford 1994: 98). Crano (2025) has discussed this techno-solutionism as taking the form of a “pedagogy of the inevitable,” which uses frenetic optimism over the alleged quantum change resulting from “AI” to promulgate “surveillance, data theft, labor oppression, epistemic violence, and the monetization of every conceivable form or fraction of human life.” But Crano also suggests that we can resist these trends by remembering that “writing involves a struggle with language” and that machines may facilitate this struggle, but cannot truly collaborate. My point (and Crano’s) is not that students and teachers can return to humanistic bromides, or deny the materiality of today’s developments in generative AI. But we have to refuse a rhetoric that enjoins us either grimly or exuberantly to accept developments that will lead to relinquishing agency and individuality and giving consent to an incipient corporatist authoritarianism.
It is no surprise that, in light of this urgent threat, some writing educators have made the case for “refusing” generative AI in the sense recently proclaimed by Maggie Fernandes, Megan McIntyre, and Jennifer Sano-Franchini (2025). Others are trying out the alternate path of what they call “experimentation and adoption,” while still others urge the teaching of critical AI literacies as a necessary component of higher education inside and outside the writing classroom (Critical AI @ Rutgers n.d.). In all these cases, emphasizing direct interaction with texts remains essential to counter superficiality and energize serendipity, friction, and thought. Perhaps generative AI will give literary criticism a new impetus to theorize and teach reading and writing as purposeful cognitive and metacognitive actions.
Not long after ChatGPT emerged in November 2022, a colleague of mine was distressed to find that the bot had already seemed to ask and answer all the questions concerning a particular literary work. How could one go into the classroom and teach after that? Of course, such early reactions to what momentarily appeared to be a substitute human intelligence have since given way to a different kind of despair: the painful recognition of student learning loss every time one reads a paper almost certain to have been “penned” by a chatbot. Among other teachers, literary critics have some serious thinking to do about their pedagogy.
Still, the answer I gave my colleague back then holds up, I think. I volunteered that I could look at Google Earth and see an accurate representation of my walk from my house to the center of town. What might I encounter? What might I think along the way? As I now know, I might prompt ChatGPT to provide some plausible conjectures as to what might happen to a person like me on a walk along that route. But only after I took that walk would it be real.
Nicholas Birns teaches at New York University and recently co-edited The Cambridge Companion to the Australian Novel (Cambridge UP, 2023).
Notes
Works Cited
Allison, Leslie, and Tiffany DeRewal. 2024. “Where Knowledge Begins? Generative Search, Information Literacy, and the Problem of Friction.” Critical AI 2.2. https://doi.org/10.1215/2834703X-11556038.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
Bradford, Richard. 1994. Roman Jakobson: Life, Language, Art. Routledge.
Chiang, Ted. 2017. “Silicon Valley Is Turning Into Its Own Worst Fear.” BuzzFeed News. https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway. Accessed August 13, 2025.
Critical AI @ Rutgers. n.d. “Teaching Critical AI Literacies: ‘Explainer’ and Resources for the New Semester.” https://docs.google.com/document/d/1TAXqYGid8sQz8v1ngTLD1qZBx2rNKHeKn9mcfWbFzRQ/edit?tab=t.0#heading=h.kgds7i8l6uca.
Crano, Ricky D’Andrea. 2023. “UCI’s School of Humanities Is Kicking Its Dedicated Critical AI Researcher to the Curb.” August. https://www.citricacid.ink/issue-13/uci's-school-of-humanities-is-kicking-its-dedicated-critical-ai-researcher-to-the-curb. Accessed August 13, 2025.
Crano, Ricky D’Andrea. 2025. “A Pedagogy of the Inevitable.” Critical AI 3.2.
DeRewal, Tiffany. 2025. “Ask an Expert: Evaluating LLM ‘Research Assistants’ and Their Risks for Novice Researchers.” Critical AI blog, March 26. https://criticalai.org/2025/03/26/ask-an-expert-evaluating-llm-research-assistants-and-their-risks-for-novice-researchers/.
Eagleton, Terry. 1983. Literary Theory: An Introduction. Blackwell.
Eden, Kathy. 1986. Poetic and Legal Fiction in the Aristotelian Tradition. Princeton University Press.
Edwards, Dustin. 2025. Enduring Digital Damage: Rhetorical Reckonings for Practical Survival. University of Alabama Press.
Fernandes, Maggie, Megan McIntyre, and Jennifer Sano-Franchini. 2025. “A Year of Refusal.” https://refusinggenai.wordpress.com/2025/11/12/a-year-of-refusal/. Accessed November 14, 2025.
Fredrikzon, Johan. 2025. “Rethinking Error, ‘Hallucinations,’ and Epistemological Indifference.” Critical AI 3.1. https://doi.org/10.1215/2834703X-11700255.
Jakobson, Roman. 1987. Language in Literature. Edited by Krystyna Pomorska and Stephen Rudy. Belknap Press of Harvard University Press.
Jaźwińska, Klaudia, and Aisvarya Chandrasekar. 2025. “AI Search Has a Citation Problem.” Columbia Journalism Review, March 6. https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php. Accessed August 13, 2025.
Jones, C. T. 2025. “Amazon Is the World’s Biggest Online Book Marketplace. It’s Filled With AI Knockoffs.” Rolling Stone, October 27. https://www.rollingstone.com/culture/culture-features/amazon-ai-book-knockoffs-1235450690/.
Kitzinger, Chloë. 2024. “OpenAI’s Pharmacy? On the Phaedrus Analogy for Large Language Models.” Critical AI 2.1. https://doi.org/10.1215/2834703X-11205203.
Korzynski, Pawel, Grzegorz Mazurek, Pamela Krzypkowska, and Artur Kurasinski. 2023. “Artificial Intelligence Prompt Engineering as a New Digital Competence: Analysis of Generative AI Technologies such as ChatGPT.” Entrepreneurial Business and Economics Review 11.3: 25–37.
Laizure, S. Casey. 2024. “Caution: ChatGPT Doesn’t Know What You Are Asking and Doesn’t Know What It Is Saying.” The Journal of Pediatric Pharmacology and Therapeutics 29.5: 558–560.
Manshel, Alexander. 2020. “The Lag: Technology and Fiction in the Twentieth Century.” PMLA 135.1: 40–58.
Roberts, Molly. 2022. “Sam Bankman-Fried Doesn’t Read. That Tells Us Everything.” Washington Post, November 29. https://www.washingtonpost.com/opinions/2022/11/29/sam-bankman-fried-reading-effective-altruism/. Accessed March 2, 2026.
Shah, Chirag, and Emily M. Bender. 2022. “Situating Search.” In Proceedings of the 2022 Conference on Human Information Interaction and Retrieval.
Stone, Matthew, Lauren M. E. Goodlad, and Mark Sammons. 2024. “The Origins of Generative AI in Transcription and Machine Translation, and Why That Matters.” Critical AI 2.1. https://doi.org/10.1215/2834703X-11256853.
Trilling, Lionel. 1950. The Liberal Imagination. Viking.
