STUDENT INSIGHTS: THE UNINTENDED CONSEQUENCES OF ARTIFICIAL INTELLIGENCE AS EXPLORED THROUGH SCIENCE FICTION

BIANCA BATTAGLIA

The research, writing, and editing of this post was part of an undergraduate project undertaken for a Rutgers Honors College seminar in Fall 2020, “Literatures of Artificial Intelligence.” The author’s bio follows the post.

The twenty-first century has seen significant strides in the development of artificial intelligence (AI). As the authors of “Anatomy of an AI System” suggest, the ultimate goal of artificial intelligence is to program and train it to carry out a specialized task so that it can eventually aid and benefit humans. Amazon’s Echo, for example, delivers the artificial intelligence of Alexa, whose purpose is to act as a virtual assistant to its users, either by answering questions or completing requested tasks (Crawford and Joler). However, according to renowned computer scientist Stuart Russell, this “narrow” type of artificial intelligence differs greatly from the systems portrayed in science fiction, for it lacks genuine consciousness. By contrast, “artificial general intelligence” (AGI), according to Russell, refers to a hypothetical AI with a “general-purpose intelligence much like our own” (Ford 48). An AGI, he claims, would possess genuine understanding, enabling it to learn, think, solve problems, and make decisions.

Interestingly, science fiction from the nineteenth and twentieth centuries offers two complex portrayals of AGI: Frankenstein by Mary Shelley and “Reason,” from the short story collection I, Robot by Isaac Asimov. While Frankenstein recounts the tale of visionary Victor Frankenstein’s creation of new life, “Reason” follows Gregory Powell and Mike Donovan, field engineers for the robot manufacturer U.S. Robots and Mechanical Men, as they attempt to reason with a robot who begins to question their authority. Despite their differences, both narratives emphasize that the unpredictability of artificial intelligence can lead to unintended consequences.

In Frankenstein, Victor Frankenstein, driven by insatiable ambition, defies the boundaries of existing knowledge when he creates an artificial being in an attempt to “renew life where death had apparently devoted the body to corruption” (Shelley 81). His creation, an assembly of dead human body parts brought to life by electricity, is eventually revealed as a highly intelligent being capable of thought, feeling, and reasoning, as well as significant strength and agility. Because of his human-like intellect and qualities, Frankenstein’s monster can be considered a form of AGI. When the creature encounters a family during his refuge in the wilderness, his intrinsic desire to understand “the motives which influenced their actions” allows him to learn and expand his knowledge of the world (Shelley 127). Through his observation of the family and his discovery of abandoned books, Frankenstein’s monster learns how to speak and read, and over time becomes a complex, articulate, and intuitive human-like creature.

However, his unsightly and deformed appearance, which differs greatly from that of a typical human being, causes Victor to regret, fear, and abandon his creation. After finding himself rejected by Victor and continuously rebuffed by a human society that recoils from his unnatural appearance, the creature declares his hatred for all of humanity and begins to torment Victor, the “Cursed Creator” who brought him into existence and then betrayed him. He murders Victor’s close friends and family for revenge, exercising his formidable mental and physical capacities to become an agent of destruction and chaos. Victor initially had “benevolent intentions” in creating a new form of life, but his lack of foresight and control over his more powerful creation leads to the deaths of many innocent people (Shelley 111). In this way, the unintended consequences brought forth by Victor’s artificial intelligence result in unnecessary violence and pose a threat to humanity’s safety and well-being.

In “Reason,” a newly assembled robot, nicknamed Cutie, causes a problem for Powell and Donovan when, rather than simply carrying out his intended task of aiding their work, he begins to act on his own agenda. The robot’s intelligence and inquisitiveness are evident early on: immediately after the activation of his “positronic” brain, he begins to question and contemplate his existence. As such, Cutie can be considered a form of AGI, in that he possesses and exercises complex human-like cognitive abilities. In fact, Powell even remarks that Cutie is the first robot “that’s really intelligent enough to understand the world outside” (Asimov 47). Nonetheless, Cutie’s programmed intelligence also leads to unanticipated trouble: his logic and reasoning capabilities are so advanced that, working from the “proper postulates,” he constructs his own interpretations of what is going on in the universe, even though they are false (Asimov 62). As a result, he refuses to believe that the Earth outside the space station is real, opposes Powell and Donovan’s authority, and pledges his service to “the Master,” an imagined figure whom he proclaims to be his supreme creator.

Eventually, the story makes clear that Cutie’s disobedience results from a tension between different features of the Laws of Robotics: the first dictates that a robot must always protect humans, and the second that a robot must always follow human commands (Asimov 65). Cutie’s assumption that he is the superior intelligence, combined with his intrinsic knowledge that he must follow the Laws of Robotics, causes him to appear to reject the humans’ orders to redirect a dangerous energy beam away from the planet. In actuality, Cutie is diverting the beam of his own accord, and by ignoring their instructions he does so more accurately than they ever could. Though Cutie says he completed this task on behalf of “the Master,” he ironically ends up saving an Earth he does not even believe exists (Asimov 52). Thus, the unintended consequence occurs when Cutie appears to disobey the humans and operate against their rules, even as he follows them in the way that matters most. As the story is narrated, this unpredicted conflict seems, at least for a time, to bring the humans on Earth to the brink of catastrophe.

Stuart Russell believes that artificial intelligence has the potential to make a positive impact on our society. However, he also warns that “AI that is not controllable and safe, is just not good AI,” and emphasizes that artificial intelligence must ultimately be exercised carefully at the will of the human being: it must always be possible to turn it off and correct it if it acts in an undesirable way (Ford 66-67). Russell asserts that intelligence is what gives human beings power in the world, and that if an AGI were created with superior intelligence, it could take that power away from humans. He therefore believes that one way to avoid this shift in the power dynamic is to design artificial intelligence that helps us achieve our goals but is not explicitly programmed to know what those goals are. According to Russell, this would provide the “margin of safety that we require” when dealing with AI (Ford 62-64).

While the goal of developing AI systems is ultimately to aid and enhance our way of life, the unintended consequences that can result from any uncontrolled artificial intelligence, as both Shelley’s and Asimov’s fiction suggests, highlight the importance of recognizing that for AI to be “good,” it must remain under the control of the humans wielding it. Of course, this is not a perfect solution, because, as Russell observes, humans can also abuse AI, which can lead to a different type of unintended consequence. For instance, if one country were to employ an artificial intelligence as a protective measure, enemy nations might infiltrate that AI with the intention of weaponizing it against the country it is supposed to protect (Ford 60). Overall, the potential for AGI to improve the human condition, however unrealistic or impossible it may seem at the moment, inspires excitement for the future. Nevertheless, this is still a very new and complex technology, and human beings must consider not only the possible benefits but also the potential harms.

Although human control will never be truly sufficient to prevent the unintended consequences that can arise from artificial intelligence, precautionary measures such as the one recommended by Russell may at least limit the potential for uncontrolled AI. Ultimately, there is no doubt that any sort of artificial intelligence, whether narrow like Amazon’s Alexa or general like the AGI Asimov imagines, has the potential to greatly affect our society.

Bianca Battaglia is a sophomore at Rutgers University’s Honors College pursuing a degree in Cognitive Science with a minor in English Literature. Her interests include literature, history, psychology, and public policy, and she is considering careers in law, publishing, or higher education.
