STUDENT INSIGHTS: Purple is not real: The Last Step between GPT-3 and Human AI

WILLIAM WU

The research, writing, and editing of this post was part of an undergraduate project undertaken for a Rutgers Honors College seminar in Fall 2020, “Literatures of Artificial Intelligence.” The author’s bio follows the post.

A week ago, I learned that purple isn’t a real color. To be precise, it’s not a spectral color; purple can only be seen as a mix of red and blue, and never as a single wavelength like green or orange. The red cones in our eyes detect red light, the blue cones detect blue light, and then we see the color purple (Settembre). But somewhere between our eyes and our brain, the information of red and blue light transforms into the experience of purple. That instance of subjective, conscious experience – of subjective, conscious purple – is an example of what philosophers call qualia.

Qualia may remind you of a familiar thought experiment: what if we all have distinct qualia for the colors we see? My red could be your orange, and your red could be my orange. This hypothetical is related to another famous thought experiment, the Knowledge Argument. Imagine that Mary lives in a black-and-white world but has researched everything there is to know about color. She understands how objects reflect light and how our eyes process color, but she has never actually seen color. Would Mary learn anything new when she sees color for the first time (Nida-Rümelin)?

Right now, AI is in a position similar to Mary’s. If you asked a computer for the color purple, it could tell you the exact ratio of red, blue, and perhaps a dash of green needed to make your precise shade of purple. It could even display that color for you. But the computer has no correlate for the qualia that allow most humans to visualize purple. AI today can explain why purple, orange, and yellow go nicely together, but it will never be able to experience the majesty of a sunset.
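To make that concrete, here is a minimal Python sketch (the shade and numbers are purely illustrative choices of my own, not drawn from any real system): it contains everything a computer “knows” about a purple pixel, namely the proportions of red, green, and blue light to mix, and nothing else.

```python
# A minimal sketch (illustrative values only): everything a computer "knows" about
# a purple pixel is a ratio of red, green, and blue light, with no experience attached.

def describe_purple(red=128, green=0, blue=211):
    """Report a shade of purple as the mix of red, green, and blue light it contains."""
    total = red + green + blue
    return {
        "hex": f"#{red:02x}{green:02x}{blue:02x}",   # what a screen is told to display
        "mix": {                                     # the "exact ratio" of each primary
            "red": round(red / total, 3),
            "green": round(green / total, 3),
            "blue": round(blue / total, 3),
        },
    }

print(describe_purple())
# {'hex': '#8000d3', 'mix': {'red': 0.378, 'green': 0.0, 'blue': 0.622}}
```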

That brings us to the Hard Problem of Consciousness: how are qualia produced? Currently, no one has a sure answer, and this post certainly does not either. Before we can even get to the Hard Problem, there are the Easy Problems to answer: how humans categorize and react to stimuli, how they integrate information, how they access and report their mental states, and how AI might simulate these cognitive capacities (Chalmers).

The Hard Problem of Consciousness may seem unnecessary to answer from a practical perspective. Indeed, one might wonder: so what if robots are unable to experience a sunset? A robotic being would seem to have no obvious use for qualia. But a current text-generation AI, OpenAI’s GPT-3, is a prime example of why qualia may be essential to building a human-like AI. GPT-3 may be our most advanced language-prediction AI so far, but it doesn’t “fundamentally change progress in AI”: its idea of a joke (apparently!) is, “What do fish say when they step on broken glass?… No, fish do not have ears” (Bartlett). GPT-3 only mines massive troves of text and finds plausible patterns without understanding the language in question. If it could experience the qualia of a good joke, perhaps it could then create something funnier. And just for the record, fish don’t have legs either, so they can’t step on broken glass.
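For a sense of what that pattern-mining looks like in practice, here is a rough sketch using GPT-2, a smaller, publicly available relative of GPT-3, through the Hugging Face transformers library. (GPT-3 itself is only reachable through OpenAI’s API, so this is an analogy rather than GPT-3’s actual machinery.)

```python
# A rough sketch of "mining text for plausible patterns." GPT-3 itself is reachable
# only through OpenAI's API, so this uses GPT-2, a smaller, publicly available
# relative, via the Hugging Face transformers library.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "What do fish say when they step on broken glass?"
result = generator(prompt, max_length=40, num_return_sequences=1)

# The model simply continues the prompt with statistically likely words; nothing
# here experiences the joke, it only predicts the next token over and over.
print(result[0]["generated_text"])
```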

Ray Kurzweil, Director of Engineering at Google and a leading thinker and futurist, acknowledges the limitations of neural networks like GPT-3. He believes that while “100-layer neural nets have been very successful” at specific tasks like playing Go or advancing the technology for self-driving cars, the problem is that “Life begins at a billion examples” (Ford 180). Kurzweil is leading an effort at Google to build an AI based on the human neocortex. Humans, he explains, “can learn from much less data because we engage in transfer learning… we can generalize information from one domain to another” (Ford 181). While Kurzweil does not explicitly mention qualia, qualia may be a crucial part of what allows us to generalize and abstract the data we gather from our surroundings.
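The transfer learning Kurzweil describes has a standard form in code. The sketch below assumes PyTorch and torchvision, and the ten-class task and learning rate are placeholders of my own: it reuses a network already trained on ImageNet and trains only a small new output layer, which is why the new task can get by on far less data.

```python
# A minimal sketch of transfer learning, assuming PyTorch and torchvision;
# the ten-class task and learning rate are illustrative placeholders.

import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet (roughly a million labeled images).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so their general-purpose visual features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer for the new task. Only this small head gets trained,
# which is why a few hundred new examples can suffice instead of millions.
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```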

One need look no further for guidance on this question than the robots of Asimov’s I, Robot. The robots in both “Catch That Rabbit” and “Liar!” struggle with problems that could be solved if they could experience qualia. In “Catch That Rabbit,” Dave, the leader robot, lacks the ‘personal initiative’ to coordinate his six subsidiary “finger” robots when there is a crisis (Asimov 61). If Dave were able to generalize the information the finger robots give him and send them instructions for escaping danger, he might manage the crisis on his own. In “Liar!,” the robot Herbie is torn between lying and telling the truth, as both would hurt a human, and the First Law of Robotics forbids him from harming one (Asimov 75). If Herbie could experience the qualia of being lied to and of being hurt by the truth, he might be able to judge which hurts more and decide whether to lie in order to minimize the harm. Asimov’s thought experiments point to qualia as a possible solution to the complications created by the Three Laws of Robotics.

The ability to experience qualia may be necessary not only for making robots human-like but also for making humans more accepting of robots. Take the monster from Mary Shelley’s Frankenstein. While not a stereotypical robot, the monster is artificially created and distinctly non-human. And yet he is frequently more relatable than any of the humans in the novel. He can experience hunger, thirst, and pain, and, more importantly, the desire to be loved and the desire for revenge (Shelley 147). Because the monster can express the qualia he shares with humans, he is a more sympathetic character than a typical creature or robot could ever be.

The information from the world around us becomes “experience” at some point between our sensory organs and our brain. Once we answer the Problems of Consciousness, we will understand how that transformation happens. And if we can incorporate a way of simulating qualia into the robots of tomorrow, we will create AIs that understand the world better, learn from smaller data sets, and are perhaps more relatable to and more readily accepted by humans. They might even be able to make a joke they can chuckle at.

William Wu is a sophomore studying Information Technology and Computer Science at Rutgers University. He plans on becoming a consultant or data analyst in the future. He plays for the Ultimate Frisbee team and enjoys finding new music and new video/board games to play.

Works Cited

Ford, Martin. Architects of Intelligence: the Truth about AI from the People Building It. Packt Publishing Ltd., 2018.

Nida-Rümelin, Martine, and Donnchadh O Conaill. “Qualia: The Knowledge Argument.” Stanford Encyclopedia of Philosophy, Stanford University, 23 Sept. 2019, plato.stanford.edu/entries/qualia-knowledge/.

Settembre, Amelia. “Magenta: The Color That Doesn’t Exist And Why.” Medium, The Startup, 4 Mar. 2020, medium.com/swlh/magenta-the-color-that-doesnt-exist-and-why-ec40a6348256.

Chalmers, David J. “Facing Up to the Problem of Consciousness.” 1995, consc.net/papers/facing.html.

Bartlett, Jonathan. “GPT-3 Is ‘Mindblowing’ If You Don’t Question It Too Closely.” Mind Matters, 30 July 2020, mindmatters.ai/2020/07/gpt-3-is-mindblowing-if-you-dont-question-it-too-closely/.

Asimov, Isaac. “Catch That Rabbit” and “Liar!” I, Robot, Del Rey, 2020, pp. 48–76.

Shelley, Mary. Frankenstein. Broadview Press, 2012.
