March 7th, 2024
[For more on The AI Hype Wall of Shame and our rating system, see this link.]

Let’s face it: technology companies have been targeting education as a golden goose for more than a century–not least since the rise of tech mogul “philanthropies” like Schmidt Futures and the Bill and Melinda Gates Foundation. AI snake oil is pretty slippery. And with the mounting marketing pressure to teach with commercial chatbots, it’s slipping right into higher ed reporting. “Ethics” gets lip service before reporting shifts into “teaching tips” mode.
Case in point is this February 6, 2024 piece on “How college professors are using generative AI to teach,” by Ashley Mowreader. The article starts well enough:
“Since the launch of ChatGPT in November 2022, educators, administrators and other higher education stakeholders have grappled with the implications of generative artificial intelligence on intellectual property, academic integrity and ethical use, among other topics.”
Mowreader rightly reports that educators are “grappling” with generative AI’s implications for “intellectual property”–that is, the technology’s core reliance on the unconsented use of the creative work of individuals and publications such as the New York Times and The Intercept. But while she dutifully shouts out the question of “ethical use,” her cursory framing of the topic avoids a key underlying question: is “ethical use” even possible given the many known harms of commercial chatbots and other “generative AI”?
Note to readers: This is a recurrent pattern in ed tech articles, which often touch on ethical quandaries only to hurry past them. It’s as if merely uttering the word “ethics” solved the array of wide-ranging problems and harms that the reporting has not even bothered to elaborate.
No surprise, then, that Mowreader’s weak ethical sauce soon gives way to techno-determinism: the implicit assumption that the latest ed tech fad (in this instance for-profit chatbots) presages an inevitable march toward mass adoption that only the hoariest technophobes will contest. Never mind that Mowreader’s own facts hardly make that case: “While the majority of U.S. students have yet to latch on to the trend,” she writes, “they’re still outpacing their instructors, with a Tyton Partners report in October 2023 finding only 22 percent of faculty members utilize AI.”
What should inquiring readers take away from this factoid? What implacable “trend” have the majority of students yet to “latch on” to? And what exactly are the instructors now “outpaced” by students doing?
Clicking through Mowreader’s links for some context, we find an article that cites the “senior director of engagement strategy for Anthology,” an ed tech company, who argues that it is actually great news that “only” 38 percent of students are “using AI at least monthly.” Indeed, according to this source, students’ slow uptake “opens a valuable window of time for university leaders to dig in and assess the landscape and deepen their understanding of how AI can be applied effectively at their institution.” Nevertheless, “It’s moving fast and the clock is ticking.”
Although ed tech consultants do love histrionics–It’s moving fast! The clock is ticking!–Anthology’s theory at least leaves open the possibility that those who “deepen their understanding” will do more than simply urge students to “latch on to the trend.” Perhaps at least some of these “leaders” are more thoughtful than TikTok influencers pushing the latest viral dance craze?
Mowreader herself is remarkably incurious about what chatbot-using faculty are actually up to. Among those who do use “AI,” she reports, “43 percent are running prompts to understand what their students might be using the tools for, 35 percent are using it to teach and 29 percent are using it for in-class activities.”
“Utilizing” is a rather ambiguous term: 43 percent of the 22 percent of faculty now “utilizing” AI, apparently, are just doing their jobs by trying to figure out what their students are getting from chatbots.
So let’s dig into those “using it to teach.” Are we talking about faculty co-opting chatbots to write their lectures? Or are we talking about faculty helping their students to build critical thinking skills by, perhaps, showing them how to probe the biases of generative tools? This is an approach that critical AI literacy folks have been urging for some time. Instead of introducing students to the technology by transforming them into OpenAI’s lifelong customers, jonesing for the next “generative” fix, teach them how to explore these flawed systems the way that data scientists do: as researchers rather than users. That’s a win for students, who come away with an empowering understanding of how these proprietary systems work and how to probe them. Those higher-level insights may also help them to impress future employers searching for know-how that exceeds merely “latching on to a trend.”
Now what are the 29 percent (of that 22 percent) of faculty planning “in-class activities” actually up to? Is it “let’s draft essays using Claude 3,” or is it something more like “let me show you some failure modes of chatbots”? Needless to say, these different “in-class activities” embed wildly different pedagogical values that publications like Inside Higher Ed should be able to articulate for their readers.
In fact, according to Mowreader, “the main barriers to AI adoption” in higher education “are security concerns, a need for AI training programs and ethical implications.” We agree! But perhaps these are “barriers” to adoption that we shouldn’t blow through just to catch up with the trendies?
After all, the problems entailed include the leakage of private data, severe biases and exclusions, non-stop surveillance, the dissemination of falsehoods and conspiracy theories, the exploitation of human workers (who are poorly paid to “mitigate” god-awful content), the growing environmental impact of computation-intensive “AI,” and the ever-present danger of malicious use. (For more discussion of these “ethical implications,” take a look, for example, at Bender et al.’s classic account of the dangers of these “stochastic parrots”, or at this Rutgers advisory document, and keep your eyes peeled for Critical AI’s upcoming special issue on generative AI and the rise of chatbots.)
Mowreader, alas, has no time for such trivia as she hurries on to four not-to-be-missed tips for teaching with chatbots–every one of which, it turns out, was featured in previous articles in Inside Higher Ed. You read that right: talking up chatbot use in classrooms is, apparently, such a high priority for Inside Higher Ed that all four of Mowreader’s “four ways faculty members can employ AI” reprise earlier reporting on the topic.
Leading off these underwhelming hits is a business communications professor’s approach to “improve student writing” by “requir[ing] students to experiment with ChatGPT.” The idea is to enable students to “learn how to create the prompts…to produce more accurate and helpful outputs.”
Think about that, readers: this “tip” equates “improving student writing” with writing prompts to get good outputs. This is the digital equivalent of attempting to teach students to ride a bike without ever taking off the training wheels.
Notice too how the assignment enlists students to get “accurate” content by prompting a fallible language model designed to predict probable answers. There are several problems here. First, because these models sample their outputs from a probability distribution rather than looking up facts, a prompt that delivers “accurate” content on one trial will not necessarily do so on the next. Second, language models have no means of ensuring the veracity, or even adducing the provenance, of their outputs. In essence, that means that to determine which output is most “accurate,” students must already have accurate information from an authoritative source (that is, by doing some independent research).
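To make that first point concrete, here is a toy sketch in Python–our own illustration, with invented probabilities, not the behavior of any actual chatbot. Because completions are sampled rather than retrieved, the “same” prompt can return different answers on different runs:

```python
# Toy illustration (not a real model): a chatbot's output is sampled from a
# probability distribution over possible continuations, so the same prompt
# can yield different completions on different runs.
import random
from collections import Counter

# Hypothetical probabilities a model might assign to continuations of the
# prompt "The capital of Australia is" -- these numbers are invented.
next_token_probs = {"Canberra": 0.80, "Sydney": 0.15, "Melbourne": 0.05}

def sample_completion(probs):
    """Draw one completion at random, weighted by the model's probabilities."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# "Ask" the same question 1,000 times and tally the answers.
tally = Counter(sample_completion(next_token_probs) for _ in range(1000))
print(tally)  # e.g. Counter({'Canberra': 802, 'Sydney': 148, 'Melbourne': 50})
```

Real systems sample over vastly larger vocabularies, token by token, but the upshot is the same: a “good” answer on one run is no guarantee of a good answer on the next.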
Mowreader’s article provides no indication that the recommended prompting exercise involves guiding students through a rigorous process of accuracy-checking. But even if that were the plan, it wouldn’t change the fact that chatbots are, in effect, plagiarism engines. They can’t even identify the sources of their outputs, much less cite or credit them.
Of course, some of you right now may be thinking “but isn’t ‘prompt engineering’ the future profession of every humanities major?” The answer, alas: not for long. Consider that these commercial systems surveil your students’ inputs and will, in due course, help themselves to the most efficacious prompts. And they won’t be crediting your students when they do so. That’s why “requiring” students to sign up for the thievery of their intellectual work is hardly an “ethical use” in the first place (for this and other guidelines see “A Blueprint for an AI Bill of Rights for Education”).
In her next tip, Mowreader retells the tale of a professor who used “AI-powered tools” such as “MagicSlides” to “put together a presentation for his podcasting course.”
Wow. Are you feeling the magic? Or are you feeling instead as if nothing done in a classroom with ChatGPT is too banal to qualify for publication–and republication–in the pages of Inside Higher Ed? What’s next? “AI-powered tools helped me check my spelling”?
Mowreader’s third tip–”providing students with feedback”–is worth a deeper dive. In fact, many writing instructors are debating whether probabilistic language models are capable of providing substantive feedback on student writing. Such conversations remind us that however convincing the appearance of human-like understanding may be, statistical models excel at grasping the form of language (predicting sequences of well-wrought grammar and syntax) while relying on pattern-finding and correlation to guess at meaning and context. Suffice it to say that we co-authors agree with Jane Rosenzweig, director of the Harvard College Writing Center, that a chatbot’s shallow and non-stop “feedback” impedes students’ critical thinking without ever helping them to develop “the thing that makes the writing process meaningful”–the strengthening and articulation of their own ideas.
Mowreader, however, doesn’t take up the writing question. She turns instead to an earlier article about a much-discussed introductory computer science course at Harvard, which used a version of ChatGPT that “made the work of TAs and professors more efficient.”
Instructor efficiency is a telling metric. Taking a look at the (as yet non-peer-reviewed) paper that describes this experiment, we find a few glowing comments from students. But think about this: according to the paper–even though the instructors used the most up-to-date methods for prompting the model–ChatGPT delivered the wrong answer about computer science 12 percent of the time, and the wrong answer about course policy 33 percent of the time. At that rate, a student who asks three questions about course policy can expect, on average, one wrong answer. Would anyone enlist a TA with that track record? And if they did, would the result be a ballyhooed win for efficient teaching?
Let’s be clear: if what’s wanted is a bot to help students manage homework and study 24/7, we have the technology–and it need not involve generative AI. Indeed, any textbook creator or tech-savvy instructor could build an interactive system for navigating course content that cues students to the right chapter, video, website, or problem set and is right essentially 100 percent of the time. A basic course like intro to CS is exactly the kind of material suited to such a system–and it wouldn’t surveil students, leak their data, suck up energy and water, or inure students to wholesale chatbot dependence.
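By way of illustration, here is a minimal sketch of such a deterministic course navigator–entirely our own hypothetical example, not anything described in Mowreader’s reporting or the Harvard paper. It is just a hand-built index of topics mapped to course resources, matched by keyword:

```python
# Minimal sketch of a deterministic "course navigator": a hand-built index of
# topics mapped to course resources, matched by simple keyword lookup.
# The topics and resources below are invented placeholders.
COURSE_INDEX = {
    "recursion": "Lecture 3 video; textbook ch. 7; problem set 4",
    "binary search": "Lecture 5 video; textbook ch. 9, section 2",
    "late policy": "Syllabus, 'Deadlines and Extensions' section",
    "office hours": "Course website > Schedule > Office Hours",
}

def navigate(question):
    """Point the student to relevant resources; never invent an answer."""
    q = question.lower()
    hits = [resource for topic, resource in COURSE_INDEX.items() if topic in q]
    if hits:
        return "See: " + " | ".join(hits)
    return "No match found -- please check the syllabus or ask your TA."

print(navigate("Where can I review binary search before the quiz?"))
# -> See: Lecture 5 video; textbook ch. 9, section 2
```

A real deployment would need a much richer index and a friendlier interface, but the design choice is the point: the system only ever points to materials the instructor put there, so it is never confidently wrong–and no student data leaves the course.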
Of course, with more investment, a CS bot could be built on a language model (which is exactly what many companies are now doing for fluent but purpose-built approaches to customer service). But developers beware! Companies that have played with generative fire have sometimes gotten burned.
For now the point is simply this: why would anyone want students to use “AI” that is recurrently wrong?
Mowreader’s final tip–”to provide workforce tools”–belabors a point frequently made by those pushing for generative technologies in the classroom. “Following their time in college,” she writes, “today’s learners will encounter AI in the workplace, and it is the responsibility of the institution to prepare them for this experience.”
Among the problems with this reasoning is the fact that, contrary to the hype, the majority of businesses are not rushing to deploy “AI.” In fact, some of the reasons they are not “latching on to the trend” resemble the ethical and security concerns of educators. Let’s remember, too, that a lot of what employers are doing today will be out of date by the time a student leaves your course, much less enters the workforce.
A more valuable preparation would go deeper, helping students to develop lasting critical AI literacies: the ability to evaluate tools, understand their affordances and failure modes, and explore their provenance and implications (social, environmental, and ethical). Such literacies can help students–and their potential employers–make informed decisions about whether, when, and how a given generative application makes sense. Anything less is, at best, vocational instruction–not higher education–and, at worst, a recipe for ill-equipping our students for the future.
Mowreader’s final “food for thought” takes a more auspicious turn. She cites experts who advise educators to “create guidelines,” “teach ethical use,” and “understand limitations.” We agree.
In highlighting this type of reporting for the Wall of Shame, our point is not that the predictive data-driven deep learning that is today called “AI” cannot serve the public interest. But the most promising endeavors (in domain-specific fields such as climate science or medicine, for example) involve high-quality data and engaged interdisciplinary expertise–criteria that generative AI has yet to take seriously.
Let’s think twice before urging our students to “latch on” to the slippery commercial trends now hyped by huge tech companies and their investors. And let’s demand more from outlets like Inside Higher Ed.