This essay has now been published in Critical AI at https://doi.org/10.1215/2834703X-11700255; the abstract is pasted below. If your institution lacks access to Critical AI, please encourage it to subscribe. If you are an independent scholar, please write to criticalai@sas.rutgers.edu.
ABSTRACT:
In our current generative AI paradigm, so-called hallucinations are typically seen as a kind of nuisance that will eventually be swept away as the technology improves. There are several reasons to question this assumption. One of them is that the very phenomenon is the result of deliberate business decisions by corporations invested in delivering diverse sentence structures through deep learning and generative pre-trained transformers (GPTs). This article urges a fresh view of hallucinations by arguing that, rather than being errors in any conventional sense, they are evidence of a probabilistic system incapable of dealing with questions of knowledge. These systems are epistemologically indifferent. Yet, by presenting as errors to users of generative AI, hallucinations can function as practical reminders of, and indexes to, the limits of this kind of machine learning. Viewed this way, hallucinations remind us that every time one gets something reasonable-seeming from a system such as OpenAI’s ChatGPT, one might as well have been given something quite outrageous; from the machine’s perspective, it’s all the same.
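
The abstract’s closing claim, that a reasonable output and an outrageous one are “all the same” to the machine, can be made concrete with a small sketch of the sampling mechanism behind the “diverse sentence structures” mentioned above. The sketch below is not from the article; the token names, scores, and temperature value are invented for illustration. A language model assigns a score to every candidate next token, a temperature parameter reshapes those scores into a probability distribution, and one token is drawn at random. Note that the draw treats a correct continuation and a confabulated one identically.

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Sample one token from a softmax distribution over model scores.

        Higher temperature flattens the distribution, making unlikely
        tokens more probable. Nothing in this procedure distinguishes a
        true continuation from a false one; the mechanism is identical.
        """
        # Softmax with temperature: convert raw scores into probabilities.
        scaled = [score / temperature for score in logits.values()]
        max_s = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - max_s) for s in scaled]
        total = sum(exps)
        probs = {tok: e / total for tok, e in zip(logits, exps)}

        # Draw one token in proportion to its probability mass.
        r = random.random()
        cumulative = 0.0
        for tok, p in probs.items():
            cumulative += p
            if r < cumulative:
                return tok
        return tok  # fallback for floating-point rounding

    # Hypothetical scores for continuing "The capital of Australia is ...":
    logits = {"Canberra": 4.0, "Sydney": 3.2, "Atlantis": 0.5}
    print(sample_next_token(logits, temperature=1.2))

Run repeatedly, this mostly prints "Canberra" but occasionally "Atlantis": the implausible answer is produced by exactly the same operation as the plausible one, which is the point the abstract presses about epistemological indifference.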
