Critical AI

Emily M. Bender on Stephen Marche’s “Of God and Machines” in The Atlantic (9/15/2022)

September 19, 2022

Emily M. Bender (University of Washington)

[The AI Hype Wall of Shame is a collaboration between Critical AI and the DAIR Institute. For more on this initiative and our rating system, see this link.]

Rating: #threemarvins (full-on paranoid)

This article in The Atlantic by Stephen Marche is so full of #AIhype that it almost reads like self-parody. So, for your entertainment and education in spotting #AIhype, I present a brief annotated reading.

Straight out of the gate, he’s not just comparing “AI” to “miracles” but flat-out calling it one, and quoting Google and Tesla (ex-)execs making comparisons to “God” and “demons”.

This is not the writing of someone who actually knows what #NLProc is. If you use grammar checkers, autocorrect, online translation services, web search, auto-captions, a voice assistant, etc., you use NLP technology in everyday life. But guess what? NLP isn’t a subfield of “AI”.

Here the author claims to have inside knowledge of some “esoteric” technology development that, unbeknownst to the average human, is going to be very disruptive. But note the utter lack of citations or other grounding for this claim.

Okay, agreed on fake-it-til-you-make-it, but “direct thrust at the unfathomable” and “not even the engineers understand” are just unadulterated hype. If they don’t understand how it works, how are they even measuring that it works?

Protip: They aren’t, really. The capabilities that the AI boosters claim to have built are ones that we don’t have effective benchmarks for, and actually can’t have, in principle. See: AI and the Everything in the Whole Wide World Benchmark by @rajiinio et al. (@cephaloponderer, @alexhanna, @amandalynneP and me). For a quick overview, see this article by @bendee983.

Okay, back to the hype. This is weirdly ominous and again provides no supporting evidence. “You can’t see it, but that doesn’t mean it isn’t there” is not an argument that it is!

This is kind of fun, because I was musing a few weeks ago about how we don’t usually go to “superhuman” for other tools. And it does sound ridiculous, doesn’t it?

Why is “AI” the only thing we describe that way? No one says: This airplane has a superhuman flying ability! This jackhammer has a superhuman pounding ability! This printer has a superhuman typing ability! This camera has a superhuman drawing ability! For all other tools, we understand them as things that humans create, refine, and adapt to perform some function. They extend our abilities. In claiming “superhuman” for AI, we are claiming that it does everything humans do and then some, and therein lie all kinds of problems.

If you don’t know how something works but can test that it works (with a certain degree of reliability), then it is usable. It’s true that deep learning is opaque on the how. But that doesn’t let engineers off the hook in terms of testing the functionality of their systems.
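To make that point concrete, here is a minimal sketch (in Python) of testing a black box for whether it works, without knowing anything about how it works. Everything here is invented for illustration: opaque_system is a hypothetical stand-in, and the reliability bar would have to be set per use case.

```python
def opaque_system(text: str) -> str:
    """A pretend black box: we can't inspect how it works internally,
    but we can still observe its input/output behavior."""
    return "positive" if "good" in text else "negative"

# A labeled test set is what defines "works" for a given use case.
test_cases = [
    ("this movie was good", "positive"),
    ("terrible, would not recommend", "negative"),
    ("good acting, good plot", "positive"),
]

correct = sum(opaque_system(text) == label for text, label in test_cases)
accuracy = correct / len(test_cases)
print(f"accuracy: {accuracy:.2f}")

# Usable only if it clears a reliability bar chosen in advance
# for the deployment context.
assert accuracy >= 0.9, "does not meet the reliability bar for this use case"
```

The point isn’t the toy classifier; it’s that opacity about the how is no excuse for skipping the test of the whether.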

“What technologists call ‘parameters’” makes this sound so ominous and mysterious. Our “little animal brains” have ~86 billion neurons. So not a different scale (and with much more complexity).

More to the point: none of this is inevitable. DL systems aren’t naturally occurring phenomena that we can only try to understand or stand in awe of. They are things we are building and choosing to use. We can choose not to, or at least to demand sufficient testing for each use case.

Also, because it feels gross to compare language model parameters to human neurons, I want to plug again this great article by @AlexBaria and @doctabarz on the computational metaphor.

Back to Marche: I don’t think we should necessarily believe the people who got super rich off of surveillance capitalism when they say “oh noes, can’t regulate, it would stop the development of the technology.”

Again, whether or not we try to build this (and with what regulatory guardrails) is a CHOICE. But also: it would be pretty easy with today’s stochastic parrots to at least sometimes get an answer like that (while other times getting hate speech…).

Uh, just because you put these things in a list does not make them all the same kind of thing (“language game”).

Yeah, just because the people who built the thing say it does something “in a way that’s not dissimilar from the way you and I do” doesn’t make it true. Do they have the expertise to evaluate that? How did they evaluate it?

Oh, and again, while “contemporary NLP” does use neural LMs for a lot of things, I wouldn’t say it “derives” from them. There is more to the field than just throwing neural nets at poorly conceived tasks.

What comes next is some additional, GPT-3-authored hype. Starting from the prompt “And if AI harnesses the power promised by quantum computing,” Marche does acknowledge the machine authorship (in the following paragraph). But he is also responsible for deciding to include it (and note that he doesn’t tell us how many tries it took to get the output he chose to include).
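To see why the number of tries matters, here is a toy sketch (not Marche’s actual procedure; the generator and its canned continuations are entirely made up) of cherry-picking from a stochastic text generator:

```python
import random

random.seed(0)  # any seed; the point is variation across calls

def toy_generator(prompt: str) -> str:
    """Stand-in for a sampling-based language model: same prompt,
    different continuation on (almost) every call."""
    continuations = [
        "…it will transform every industry.",
        "…banana banana banana.",
        "…the answer is unknowable.",
        "…[incoherent text]",
    ]
    return prompt + " " + random.choice(continuations)

prompt = "And if AI harnesses the power promised by quantum computing,"
samples = [toy_generator(prompt) for _ in range(10)]

# An author who publishes the single most quotable sample out of n tries
# is reporting a maximum, not a typical output.
quotable = [s for s in samples if "transform" in s]
print(f"{len(quotable)} of {len(samples)} samples sound impressive")
if quotable:
    print("chosen for publication:", quotable[0])
```

Without knowing n, the reader can’t tell a typical output from a lucky draw.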

It’s not doing any of these things, actually. Having synthetic text in the style of someone who has died is not bringing them back from the dead. I’m not sure what an “imitation” of consciousness is, nor how it would benefit us.

And it is certainly not “piercing the heart of how language works between people.” On how LM-generated text is nothing like human linguistic behavior, see Bender & Koller 2020 and also this episode of Factually!

 

And one last screencap before I end. Where is the evidence for any of these claims? None is provided.

So, I hope that was enjoyable and/or informative. I give this one #threemarvins. Could 2022 be the year of peak #AIhype? That sure would be nice.

 

Postscript: Important additional info on the comparison of 100B parameter networks to human brains from @mark_riedl: “Those 86B neurons result in 1,000 trillion synapses, which are more analogous to parameters. Human brain isn’t so puny after all. (Also each individual real brain neuron has been shown to be equivalent to a multi-layer neural network, providing another order of magnitude).”
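For the arithmetically inclined, here is that comparison as a back-of-the-envelope calculation in Python, using the rough figures Riedl cites:

```python
# Rough figures from the postscript above.
model_parameters = 100e9   # ~100 billion parameters
brain_neurons = 86e9       # ~86 billion neurons
brain_synapses = 1_000e12  # ~1,000 trillion synapses

# If synapses, not neurons, are the closer analogue to parameters,
# the brain outscales such a model by about four orders of magnitude.
print(f"synapses per model parameter: {brain_synapses / model_parameters:,.0f}")
# -> synapses per model parameter: 10,000
```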

 

Want to follow the conversation on the original Twitter thread? Access it here.

 

Emily M. Bender is a professor in the Department of Linguistics at the University of Washington. She serves as the faculty director of the CLMS program and the director of the Computational Linguistics Laboratory. A Howard and Frances Nostrand Endowed Professor, her research interests include multilingual grammar engineering, linguistics in NLP/Computation in Linguistics, the societal impacts of language technology, and sociolinguistic variation. For more of Professor Bender’s responses to AI hype, go here.
