STUDENT INSIGHTS: NieR: AUTOMATA AND THE DEVELOPMENT OF “HUMANITY” IN ARTIFICIAL INTELLIGENCE  

SABRINA BURNS 

The research, writing, and editing of this post were part of an undergraduate project undertaken for a Fall 2021 Rutgers Cultural History of Now course, "AI and the Human." The author's bio follows the post.

Character art depicting the Machines in NieR: Automata 

Humankind is fascinated by the concept of a feeling automaton. The uncanniness of a mechanical object with sentience or emotions has long captured the human imagination. Modern robot developers have created machines capable of limited autonomy, limited sensory perception, and movement. Yet they have not created genuine sentience or "the cognitive capacity to reason about the future," as data journalist Meredith Broussard defines it in her book Artificial Unintelligence: How Computers Misunderstand the World (129). That fact hasn't stopped video game creators, authors, and directors from imagining a future in which robots gain sentience. For instance, the Japanese role-playing video game NieR: Automata explores an alternative future wherein warring androids and machines develop startlingly human behaviors. Though the game is fictional, it poses a hypothetical scenario: machines could one day develop "humanity" and, perhaps, challenge traditional definitions of what constitutes a human.

In NieR: Automata, “Aliens invade Earth with an army of self-sustaining machine lifeforms to wipe out humanity. The surviving humans flee to the moon and create an army of androids to fight back and reclaim their homeland” (n.p.). Centuries later, combat android 2B and her squadmate 9S are deployed to eliminate the machines during the 14th Machine War. However, the machines begin to exhibit signs of human behavior, such as the formation of different societies and family units. This discovery leads 2B and 9S to question the machines’ capacity for emotions and subsequently to question their own. After all, if the enemy resembles the androids’ human creators in mind and spirit, then peace may become viable for both parties. 

Screenshot from NieR: Automata      

Machines in the game are a proxy for human players: their emotional struggles resonate with players' understanding of life's difficulties. Yet they also exist in a harmful feedback loop of self-perpetuating war. Though the machine lifeforms attempt to humanize their behavior, they fail where the androids succeed because the machines "only imitate human behavior" and cannot learn from mistakes: "if a unit fails, it fails in exactly the same way the next time" (n.p.). Programmed with an inefficient, unsupervised learning style, the machines analyze humanity's cultural remains without human guidance and then "perform" the human behaviors they identify without understanding the context, symbolism, or deeper motives behind those actions. Like the "narrow AI" that Broussard describes, these machines lack the "meaningful thoughts, feelings, and mental contents" of human minds (38). Even as the game grants the machines a semblance of higher-level thought through their need for social bonds, it differs from many fictional works in portraying this technology as an outgrowth of today's "narrow" AI, limited in its ability to learn and grow.

In contrast, the androids use reinforcement learning to develop rich emotional intelligence. Like humans, they feel pride and kinship when performing charitable deeds for their own kind and non-hostile machines alike. Their goal of creating a safe domain for humankind requires a range of social and analytical skills to respond effectively to ever-shifting apocalyptic conditions on Earth. For 9S and 2B, attention to environmental and social cues yields novel solutions to many challenges. For instance, 9S "multiplexes" his data over a nearby machine network to preserve his current "self" and hold onto his shared memories with 2B, a never-before-used tactic within the game's universe. The androids' tight-knit organizational dynamics incentivize compassion, in turn spurring the group's self-development and closeness to humanity. Although the androids' human-like "AI" uses reinforcement learning (a technique common in real-life machine learning), it surpasses anything computers today can achieve. The game therefore portrays these learning techniques as if they could evolve into an authentic form of artificial general intelligence (AGI) and, with it, genuinely "humane" robots.
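For readers unfamiliar with the term, the sketch below shows what reinforcement learning looks like in practice: a minimal tabular Q-learning loop in Python, in which an agent's failures and successes both reshape its future behavior. This is an illustrative toy under my own assumptions (the corridor environment, the constants, and every name are hypothetical), not anything drawn from the game or from production AI systems.

```python
import random

# Toy environment: an agent walks a one-dimensional corridor of 5 cells
# and is rewarded only for reaching the goal at the far end. Everything
# here (the corridor, the constants) is a hypothetical illustration.
N_STATES = 5                            # positions 0..4; position 4 is the goal
ACTIONS = (-1, +1)                      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q-table: the learned estimate of future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 only upon reaching the goal cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # The core update: success and failure both reshape future behavior,
        # unlike the game's machines, which "fail in exactly the same way."
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, every non-goal state should prefer +1 (move toward the goal).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

The point of the toy is the update line: what the agent does on its next attempt depends on the reward it received for its last one, which is precisely the feedback loop the game's machines lack.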

Inspired by Philip K. Dick's novella Do Androids Dream of Electric Sheep? (1968), I will define the state of being human partly in terms of the potential for empathy. According to journalist Jun Wu, empathy is the ability to comprehend and even feel what another being "is experiencing through their frame of reference" (n.p.). Empathy is widely regarded as necessary for collective survival. Rick Deckard, the protagonist of Dick's novella, believes that empathy emerges as "an unimpaired group instinct; a solitary organism…would have no use for it." Hence, Rick considers the androids he hunts down to be "solitary predator[s]" because of their (perceived) apathy and inability to protect one another (19). As readers eventually recognize, the novella's Nexus-6 androids do show concern for one another, even though Rick never fully admits it. Nonetheless, in Rick's dystopian world, humans prize empathy as the feature that supposedly distinguishes them from these artificial beings and therefore justifies their domination over the androids. Ironically, not many humans qualify: most have forgotten how to empathize without technological assistance (mood organs, empathy boxes, etc.). Rick's lack of empathy for the androids he is paid to kill makes him a complicated character. Yet, perhaps because Dick wants readers to see empathy as a defining human trait, Rick eventually comes to empathize with the Nexus-6 androids.

Screenshot from Blade Runner (1982), Ridley Scott’s adaptation of Dick’s Do Androids Dream of Electric Sheep? 

NieR: Automata proposes the eventual rise of machine-learned "humanity" via its exploration of artificial empathy and "purpose." In Automata, humanity is defined according to "purpose, and the free will to achieve that purpose." But, as in Dick's novella, empathy plays a key role as well. Almost all androids in the game rely on each other for information, weapons, or repairs to fulfill their mission and reconquer Earth. Their collective survival requires emotionally complex relationships like those that human players experience. For example, 2B and 9S must sometimes barter with renegade androids for mission intel. The pair employ empathy and emotional control to cooperate with the wary renegades and further their mission. By comparison, machine lifeforms can experience emotions only when they are disconnected from the all-encompassing machine-network hive mind. Those still connected to the network cannot evolve their strategies because they feel nothing when the androids vanquish their companions. Setting the machines' lack of emotional autonomy and empathy against the androids' drive to bond over a shared goal and aid one another suggests that NieR: Automata's androids are more human-like than their enemies. The androids in effect learn how to be human-like, since their reinforcement learning incentivizes cooperation and awareness. They tackle a variety of obstacles using tools gained from social and environmental interactions, a process that resembles their creators' own decision-making. The androids' development stands in stark contrast to the enemy machine lifeforms' disorganized and faulty mimicry of human behaviors.

Image Credit: Square Enix

By contrast, today's data-driven machine intelligence lacks the drives exhibited by NieR's automatons. While large language models such as ChatGPT use prediction to generate human-like text, they do so via statistical modeling of vast stores of data. They are neither conscious nor sentient like the fictional AI in many games and stories. They depend on the sheer quantity of human-generated information and complex mathematical functions to generate text that seems human, but they do not understand what they produce in any human-like way. They also rely heavily on human feedback to improve the quality of their outputs. By comparison, the machines and androids of NieR: Automata "play out an existence defined by their code," clinging to "one 'objective' [that gives] their life purpose and meaning" (n.p.). In the game's world, any entity that cannot absorb new information and adapt to changing circumstances is doomed to failure; adapting in this way is exactly what a pre-trained statistical model like ChatGPT cannot do. If the AI entities of Automata fail their objective, they lose their life's purpose and meaning and revert to a simple, meaningless, data-driven existence like today's real-world machine intelligence. NieR: Automata thus works at two levels: the machines, in resembling existing AI, throw doubt on the possibility of artificial humanity, while the androids' elevated emotional intelligence suggests that a different technology may one day create a different kind of artificial being.
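To make the "statistical modeling" point concrete, here is a deliberately tiny Python sketch of next-word prediction from bigram counts. Real large language models use neural networks trained on vastly more data, but the underlying task is the same: pick a likely next token from observed frequencies, with no comprehension required. The miniature corpus and every name here are my own illustrative inventions.

```python
from collections import Counter, defaultdict

# A deliberately tiny "corpus." Real models train on trillions of tokens,
# but the principle (predict the next word from observed frequencies) is the same.
corpus = "the android helps the android . the machine copies the android .".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word; no understanding involved."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

# Chain predictions to "generate" text one most-likely word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))   # fluent-looking output with no comprehension behind it
```

Even this toy produces plausible-looking word sequences, which is the essay's point: fluency emerges from frequency, not from anything resembling the purpose or empathy that Automata's androids display.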

Image Credit: Monsitj

Many people believe that machines may one day possess the purpose necessary to coexist and interact with humans. Computer scientist Judea Pearl, for example, argues in The Book of Why (2018) that "strong" AIs must comprehend "the vocabulary of options and intent" to emulate free will and truly communicate with humans (329). Under my working definition, true empathy (defined as care for others), purpose (defined as a reason for living), and free will (defined as the ability to choose a purpose) could constitute a synthetic humanity indistinguishable from the real thing. For instance, in NieR: Automata, 2B's assistant robot Pod 042 begins as a neutral entity but learns empathy and purpose by witnessing interactions between the androids and machines, which ultimately motivates him to save his friends. At least for now, humanlike AI with empathy, purpose, and free will exists only in fiction. Regardless, the point of these fictions may be less to predict what future technology might achieve than to facilitate compelling discussions of what it means to be human.

Sabrina Burns is a rising junior in the School of Arts and Sciences Honors Program. She is majoring in English and minoring in Digital Communication, Information, & Media and Creative Writing. She loves exploring humanity’s depths in her writing and hopes to become an author one day.

