NOAH GIANSIRACUSA ON “YOU CAN HAVE THE BLUE PILL OR THE RED PILL, AND WE'RE OUT OF BLUE PILLS” IN THE NEW YORK TIMES (3/24/2023)

April 3, 2023

Noah Giansiracusa (Bentley University)

[The AI Hype Wall of Shame is a collaboration between Critical AI and the DAIR Institute. For more on this initiative and our rating system, see this link.]

Rating: #twomarvins (depressing)


This op-ed by Yuval Harari, Tristan Harris, and Aza Raskin falls into an #AIhype trap. Mathematician Noah Giansiracusa teases apart the good, the bad, and the ugly.

I found the recent @nytimes opinion piece on AI by @harari_yuval @tristanharris @aza very interesting and I agree with some of the overall thrust and points but object to MANY of the important details. So, time for a 🧵 detailing my critiques:

Starting with the opening: They analogize a survey of AI experts estimating AI doomsday to airplane engineers estimating the probability that a plane will crash. This is wildly misleading. Flight safety is based on very well-understood physics, mechanics, and data, whereas these AI apocalypse estimates are completely unscientific: just made-up numbers with nothing meaningful to support them. And AI experts are biased: they benefit from the impression that AI is more powerful than it is, and they could easily deceive themselves into believing it.

Airlines don’t benefit from telling you flying is risky; they benefit from getting you safely from A to B. With AI we don’t have a specific commercial goal like this, so value is generated from the aura of omnipotence radiating from AI, and x-risk AI doomsday plays right into this. When I see that X% of “AI experts” believe there’s a Y% chance AI will kill us all, sorry, but my reaction is: yeah, that’s what they want us to think, so we are in awe of their godlike power and trust them to save us. It’s not science.

Next point I LOVE:

Amen to this!! 

But then a highly problematic line comes next:

I 100% agree in spirit, but hold on: “humanity’s most consequential technology”?!? Are you seriously putting chatbots above antibiotics, pasteurization, the internet, cell phones, smartphones, cars, planes, electricity, the light bulb… Chatbots are fun new apps that’ll make a lot of tasks more efficient, but claiming they’re humanity’s most consequential tech is an ENORMOUS assumption to glibly sneak in there.

Maybe you mean all of AI, not just GPT? Still, AI wouldn’t have much impact on us if we weren’t connected to each other on the internet, so why not say instead that the internet is the most consequential tech? Or computers? Because that wouldn’t make headlines today, whereas AI does.

I sort of agree, but many things that look exponential are actually logistic. For AI, we will face challenging limits: not just a fading Moore’s Law but also data. We’re already training LLMs on most human-written text, so how do we keep increasing this finite supply? Even if we could, how do we know AI capabilities will continue exponentially and not plateau? We don’t. They might, they might not: it’s an assumption, not a fact.
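To make the exponential-versus-logistic distinction concrete, here’s a minimal sketch of the math (an illustration of the general point, not anything from the op-ed). A logistic curve

f(t) = L / (1 + e^{-k(t - t_0)})

looks like a pure exponential e^{kt} while t is well below the inflection point t_0 (there f(t) ≈ L·e^{-k t_0}·e^{kt}), but then it flattens out and saturates at the ceiling L. Early measurements alone can’t distinguish the two regimes; only approaching the limit reveals the difference.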

Next:

Hmm, some important truth here but also more reckless exaggeration: GPT4 can speak fluently, so maybe it’s mastered language in that sense, but to manipulate us it’d need to measure the impact its words have on us, and it cannot. Donald Trump could see what his words did to his crowds of supporters, what actions they took because of his speeches.

Chatbots are not remotely in a position like that. They produce text; they might accidentally manipulate individuals (like trying to convince @kevinroose to leave his wife), but mastering linguistic fluency is NOT the same as mastering the manipulative power of language.

Good start, I’m with you…

?!?!

How did we just jump from generative AI to this level of omnipotence? Maybe this is further down the road, but the article seems to be about GPT4 and its sequels, and these AIs just don’t know enough about us to have these superpowers yet. So I agree that this is an enormous risk, but I worry the framing here is overly flattering to current AI capabilities and chatbots. Google knows what ads I’m likely to click and what products I’m likely to buy; GPT4 doesn’t know what any convo it has with me will do to my real-world actions.

It is SUCH a leap from a rigid mechanical game like chess to these amorphous real-world settings where winning isn’t even defined.

Fascinating point, frightening prospect, and unfortunately I do find this plausible: not necessarily that AI will produce great cultural artistic achievements (it might, it might not), but it’s sure likely to produce the kind of consumer-oriented successes that capitalism selects for (ha, sorry, couldn’t resist the pop culture jab here).

But this nice sentence is followed by an utterly silly one:

Nope. I mean, sure, campaigns will use data and algos, but Obama did that in 2008; the tools will be fancier in 2028, but come on, it’ll still be run by humans.

I LOVE this point!!! But then the interpretation of this with AI is slippery:

As the authors point out, it’s already colored by social media and much other technology.

I don’t see a huge paradigm leap into a nonhuman world–I just see an acceleration and expansion of the role algorithms play in society. I agree with their worries but find them overplaying the role of AI here–I see more of a tech continuum than they do. 

Somehow autonomy and careful planning of actions have snuck into this mastery of language. GPT4-type apps DO NOT know their impact or plan their actions, and they’re not aware of the world we live in, so I find this a big stretch. Maybe a risk down the road, but it’s a long road, so again my beef here is conflating superficial fluency of language with some kind of omnipotent understanding of the impact of words and some autonomous desire to leverage that. Let’s say it’s likely a valid risk eventually, but this ain’t just a matter of GPT-style linguistics.

I won’t go into detail on the parts about social media, but I LOVE that section of the article, fully agree, and think they expressed it powerfully and beautifully. Please read it!

After describing social media as our first contact with AI, a strange line follows:

Social media relies heavily on large language models; look up Facebook’s RoBERTa, for instance. So maybe the claim should be about chatbots rather than LLMs.

A lot to unpack here, but for now I’ll just point out that I see chatbots much like social media algorithms: they’ll be plenty aligned with the profits of the tech giants.

So the problem isn’t aligning them; it’s deciding whose benefit they’ll be aligned with, and the obvious answer is the companies commercializing them. So I totally agree with the problem the authors raise, but I don’t find it new at all here with chatbot AI.

The following paragraph is so beautifully stated and important that I cannot skip it:

The time to reckon with AI is before our politics, our economy and our daily life become dependent on it. Democracy is a conversation, conversation relies on language, and when language itself is hacked, the conversation breaks down, and democracy becomes untenable. If we wait for the chaos to ensue, it will be too late to remedy it.

YES YES YES!! This is so important and I’ve never seen it stated so clearly and so well! Thank you!!

When godlike powers are matched with commensurate responsibility and control, we can realize the benefits that A.I. promises. We have summoned an alien intelligence.

No need for “godlike” and “alien intelligence” here; they needlessly dilute your important point with hype.

I’m sorry to be so critical as I do think the essay is fascinating and the conclusions of what to do are spot on and extremely important. But all the interwoven hype is dangerously misleading and free advertising for already overly powerful+reckless tech companies. Fin.

Want to follow the conversation on the original Twitter thread? Access it here.

Noah Giansiracusa is an assistant professor in the Department of Mathematical Sciences at Bentley University. After publishing the book How Algorithms Create and Prevent Fake News, Noah has gotten more involved in public writing and policy discussions concerning data-driven algorithms and their role in society. He’s written op-eds for Scientific American, TIME, Barron’s, the Boston Globe, Wired, Slate, and Fast Company, and has appeared on BBC Radio 4, Slate, and TechCrunch podcasts. Noah is currently working on a second book, Robin Hood Math: How to Fight Back When the World Treats You Like a Number, with a foreword by Nobel Prize-winning economist Paul Romer.
