[This event is part of our AY 2022-23 series on large language models (LLMs). The event was organized and co-sponsored by Critical AI @ Rutgers and the Rutgers British Studies Center. Below is a blog post on the event. Keep an eye out for an updated recording of the event coming summer 2023.]

By Jacob Romanow (English, Bryn Mawr College)
Like many in the humanities, scholars of Victorian literature and culture are used to encountering skepticism about the contemporary relevance of our objects of study. But when it comes to questions of so-called artificial intelligence, Victorianists occupy a less familiar position: pushing back against many of the highest-profile assertions of the nineteenth century’s relevance. We find ourselves batting away analogies that are opportunistic or misleading, from breathless claims about a “fourth industrial revolution” to deceptive allegations of “Luddism,” or resisting the over-eager embrace of science-fictional imagination as historically predictive. These kinds of analogies are sometimes presumed to illustrate the inevitability of certain forms of technological change—and, thus, the counter-historical futility of criticizing them.
The invigorating April 6 panel discussion “Victorian ‘Artificial Intelligence’: A Call to Arms,” co-hosted by Critical AI @ Rutgers and the Rutgers British Studies Center, presented an alternative, more productive set of connections between contemporary “AI” and the nineteenth-century past. Combining presentations from Sophia Hsu (English, Lehman College, CUNY), Pamela Gilbert (English, University of Florida), and Lauren M. E. Goodlad (English/Critical AI, Rutgers University), the panel suggested that twenty-first-century technology isn’t like the Victorian past but, rather, is itself far more Victorian in its interpretive suppositions, genealogies, and assumptions than its present-day creators might recognize or admit.

Goodlad, in a paper entitled “Victorian ‘Artificial Intelligence’: An Ongoing History,” built on a heuristic distinction between “atomized” and “holistic” social ontologies (that is, understandings of social reality as, respectively, an aggregation of isolated entities or as an immersive, mutually embedded web) to illuminate contemporary AI’s participation in a long tradition of atomizing thought. Because today’s “AI” involves statistical models, the technology, like all such models, binds the future to what has been pre-trained on the past. To this extent, “AI” is rooted in the long history of “bell-curve thinking,” a simplifying displacement of social multiplicity by phantasmatic abstractions like the “average man.” The usefulness of statistical norms has lent this nineteenth-century technique an impressive persistence: it remains in use no matter how many times it is conceptually discredited.

Goodlad delineated a sequence of figures who have been putatively disavowed (by some) for their eugenicism and racism but whose (eugenics-motivated) ideas remain structuring influences on the technological landscape that followed them: Adolphe Quetelet and his redefinition of “average” personhood from a question of regularity to one of knowledge production; Francis Galton and his law of regression toward the mean; Henry Goddard and his racially motivated theory of IQ; Charles Murray and his attempted modernization of this legacy of scientific racism. The presumed homology between brain and computer that underwrites neural networks relies on this intellectual genealogy, however strong the desire to disentangle it from some of these individuals: thus, modern technologies “perpetuate and amplify racist biopower without the explicit essentialisms of eugenics.” Conversely, Stephen Jay Gould’s critique of The Bell Curve, Jathan Sadowski’s theorization of datafication, and George Eliot’s Victorian novels, Goodlad suggested, are unified by their insistence on the “multiplistic critical intelligences” that atomized social ontologies reduce to isolated competition. So-called algorithmic bias, then, is only nominally algorithmic (that is, a matter of programmed instructions): the heart of the problem is usually training data that bear the impress of racism and exclusion.
Gilbert, presenting “Common Sense, AI, and the Whiteness of Affect,” traced a parallel and more specific strand of this genealogy, one built into technologies of affect recognition and emotional analysis. Dismantling the triumphalist narrative that nineteenth-century theories of embodiment were marred by racist simplifications later overcome in the ensuing decades, Gilbert showed how contingent (non)scientific concepts designed specifically to confront Victorian cultural discourses have been widely integrated into scientific consensus and, now, into AI-based technological frameworks. Specifically, Gilbert analyzed the persistent fantasy of universal, observable affective categories, connecting early efforts by Charles Darwin and others to systematize a universal lexicon of emotion with two intersecting Victorian concerns: the theological and the racial.

As Gilbert described, the AI scholar Kate Crawford has identified a genealogy of affect recognition running from Darwin to Paul Ekman to, say, Microsoft’s Face API, one that reintegrates racial bias at every turn; Gilbert’s presentation richly historicized this chronology, showing how Darwin’s own work on emotion was deeply imbricated in debates around divine design. Emphasizing the theological roots of the debate between monogenist and polygenist theories of race, Gilbert elaborated the influences of Charles Bell, the Scottish “common sense” school, and Prichardian ethnology to read Darwin’s vision of universal emotion socio-politically, as an alignment with abolitionist activism and a kind of compatibilist theology. So, when Ekman and others cherry-picked Darwin’s theory of emotional categories to assert six universal emotions (despite the failure of Ekman’s own experimental efforts to prove them), they falsely universalized not only western emotional vocabulary but also Darwin’s own insights. Nevertheless, both Ekman’s categories and his elision of voluntary and involuntary expression have been mostly uncritically integrated into affect recognition and related automated systems, everywhere from “recidivism prediction” tools like Northpointe’s COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to Facebook’s emoji reactions. Consequently, these technologies, “black box” or no, suffer from the same basic problem as the predecessors traced by Gilbert: a presumptive reversion to whiteness as universal whenever faced with data that fail to confirm their priors.
These narratives of conceptual continuity underscored the central arguments of Hsu’s paper, “Statistical Reasoning and the Limits of Predictive Modeling in AI.” Hsu investigated AI’s origins in nineteenth-century statistics, examining Victorian critiques and anxieties about the misleading nature of models and averages, “the ontological gap between data collected and real world referents.” Bringing together nineteenth-century commentators like John Robertson with data scientists like Cathy O’Neil and cultural historians like Mary Poovey, Hsu showed that continuities like those traced by Goodlad and Gilbert have been matched by discursive continuities critiquing them: in a sense, Hsu’s paper presented a kind of prehistory of the Critical AI project itself.

Hsu’s primary case study was Elizabeth Gaskell’s 1848 novel Mary Barton: A Tale of Manchester Life. Gaskell’s “condition of England” novel draws heavily on sociological language and thought in its portrayal of the industrial laboring class while simultaneously limning the shortcomings of sociological and statistical thinking. It does so through the technology of realist character, which Hsu presented as itself constructing a kind of “average man”: human-like but imaginary, an amalgamation of real people that models a possible world. Yet the novel insists on the limits of this framework by focusing on characters whose motives and actions cannot really be understood or predicted by the statistical terms through which they are introduced: it enacts, and insists on, the critical and contextualizing interpretation of statistics needed to escape the circularity of data’s reliance on induction. Thus, the dilemma Goodlad associated with Quetelet—whether to understand the “average man” as a descriptive or a predictive category—became for Hsu a paradigmatic tension at the heart of both statistical thinking and Victorian fiction. Hsu connected Gaskell’s critique to contemporary problems of data bias, pointing to fiction’s capacities as exemplary of the critical and contextualizing interpretation that must inform any algorithmic analysis in order to counterbalance the limits of datafication.
Meredith Martin (The Center for Digital Humanities, Princeton), in her comments responding to the panel, posed a broad challenge to humanists working on AI-related issues to draw on the long history of literary scholarship engaged with the dilemmas of data and technology. Taken together, the event’s presentations productively expanded that history backwards, showing how putatively novel challenges raised by new technologies in fact reflect and participate in a centuries-long push-and-pull around the tradeoffs of statistical proxies. As such, “Victorian ‘Artificial Intelligence’” made a cogent implicit argument that humanistic thought can contribute to today’s conversations around AI as more than a corrective, embellishment, or site of critique. This is what I took to be the event’s “call to arms”: the insistence that humanities methods can illuminate not just the human values embedded in technology, but the more central fact that any given algorithm—and algorithmic thought as a whole—can and must be understood as a cultural artifact.