
ON MARKET ONTOLOGIES: DATA ONTOLOGIES WORKSHOP #1 WITH LAUREN GOODLAD AND CAROLINE E. SCHUSTER

[Data Ontologies is the second in a two-part series of AY 2021-22 workshops organized through a Rutgers Global and NEH-supported collaboration between Critical AI@Rutgers and the Australian National University. Below is the first in a series of blogs about each workshop meeting. Click here for the workshop video and the discussion that followed.]

by Lee Vinsel (Associate Professor of Science, Technology, and Society at Virginia Tech)

On February 10, 2022, I was lucky to attend Critical AI’s virtual workshop titled “Market Ontologies.” The event featured Caroline E. Schuster (Anthropology/Center for Latin American Studies, ANU) and Lauren M. E. Goodlad (English/Comparative Lit/Critical AI, Rutgers) discussing two recent articles: one on the possibility of general artificial intelligence through reinforcement learning, the other on the moral effects of market-mediated data analysis. The workshop was the first in a series on DATA ONTOLOGIES, which followed the previous semester’s discussions of THE ETHICS OF DATA CURATION. According to Critical AI’s website, the first semester’s workshops had “focused on the work, values, and conditions involved in creating the data that drives [AI] systems,” while the topic of Market Ontologies was devised to kick off an exploration of “data and its relationship to knowledge, world-building, and the nature of being.”

Wangechi Mutu, The Bride Who Married a Camel’s Head (2009)

I found the event very interesting. Most of all, I thought that Schuster and Goodlad demonstrated the potential of critically reading sources related to digital culture in ways that build on the best traditions of the humanities and social sciences. Among other upshots, Schuster and Goodlad reminded us that ideas have impacts, and so we must attend to them. At the same time, the presentations raised questions for me about what we know about the scale and scope of AI, its adoption by individuals and organizations, and its actual impacts in our material world. If we are working in materialist traditions of thought, the connection between concepts and material conditions seems like something that should be part of our picture. I raised questions about this during the event’s Q&A and will explore them further below.

In their portion, Schuster (they/them) examined Marion Fourcade and Kieran Healy’s influential article, “Seeing Like a Market.” As Schuster pointed out, a large part of the article depends on jokes and puns that are primarily legible to sociologists, so, as Schuster wryly put it, their job was to ruin the jokes by explaining them. Schuster thinks that “Seeing Like a Market” can be profitably read along with (and challenged by) two other publications: anthropologist and historian Fernando Coronil’s review of James Scott’s Seeing Like a State, titled “Smelling Like a Market,” and “Gens: A Feminist Manifesto for the Study of Capitalism,” by Laura Bear, Karen Ho, Anna Lowenhaupt Tsing, and Sylvia Yanagisako. More broadly, Schuster’s strategy was to read Fourcade and Healy through the lenses of feminist studies of finance and ethnography.

Scott’s Seeing Like a State, an obvious source for Fourcade and Healy, argues that, as Schuster put it, the act of “simplifying systems of representation, measurement, and naming that define the modern state’s short-sighted way of seeing” constituted its own form of political power. Scott saw numerous problems with the state’s way of viewing the world, including both “epistemological shortcomings” and the molding of the people that the state “seeks to control.”

Petrus Van Schendel, An Evening Market

Scott’s theory has always had its critics, and according to Schuster, Coronil is one who can help us think through these issues. In his 2001 review, Coronil took Scott to task for drawing a clean line between the state and capitalist markets (no doubt in part because of Scott’s anarchist predilections). Scott’s ironically simplified view of government and capitalism fails to recognize how markets, as Coronil wrote, “redesigned societies through no less costly modalities of social engineering than the ones Scott examines in this book.” Coronil argues that Scott’s approach effectively leaves (false) neoliberal distinctions between state and society intact. Such thoughts led Coronil to ask, “Would a book critical of modernist visions titled ‘Seeing Like a Market’ be likely to be produced or to gain wide acceptance at this time?” (Schuster was careful to point out that they have no evidence that Fourcade and Healy knew of or drew on Coronil; but, whether or not, they argued that “Seeing Like a Market” answers one set of Coronil’s problems with Seeing Like a State while failing to answer another.)

In “Seeing Like a Market,” Fourcade and Healy argue that the top-down, planning-based view of the state has been thoroughly undermined by modern data systems. The market itself has “become a classifier,” they argue: “Court filings, voter information, driver data, property records, city fines—all have been repurposed to feed the ever-expanding appetite of private agencies and data brokers who re-sell them to third parties, including, sometimes, the state itself.” Moreover, since many of these processes are supposedly automated (here I believe we need much stronger evidence for this claim), the people whose data is recorded come into view, while the people (and objects) that do the recording, the mythical all-seeing eye, disappear from it.

Joachim Beuckelaer, Fish Market (1568)

“Seeing Like a Market” riffs on a further sociology pun in a section titled “the new spirit of classification,” a reference to Max Weber’s The Protestant Ethic and the Spirit of Capitalism (1905). Importantly, data-driven classification systems are deeply moralizing and, one could argue, lead to a form of internalization in which individuals try to “live up” to the ideals of the categories they are placed in (say, around actions that affect credit ratings). Such classifications place people in forms of social hierarchy, or as Fourcade and Healy put it, “market institutions create market situations, and hence class situations, from the inside.” Put another way, the classifications stem from “the market’s own efforts to classify the people inside it.”
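To make that mechanism concrete, here is a deliberately toy sketch in Python. Nothing in it comes from Fourcade and Healy; the field names, weights, and thresholds are all my own inventions. The point is only to show how a scoring pipeline collapses heterogeneous records of a life into one number and then buckets people into tiers, producing a “class situation” entirely from the inside of the system.

```python
# A toy sketch (my invention, not Fourcade and Healy's) of market classification:
# heterogeneous life records are collapsed into a score and bucketed into tiers.

# Hypothetical records; real data brokers aggregate far messier material.
records = [
    {"name": "A", "missed_payments": 0, "court_filings": 0, "address_changes": 1},
    {"name": "B", "missed_payments": 3, "court_filings": 1, "address_changes": 4},
]

def score(record):
    """Collapse disparate life events into a single number (weights invented)."""
    return (700
            - 40 * record["missed_payments"]
            - 60 * record["court_filings"]
            - 10 * record["address_changes"])

def classify(points):
    """Bucket the score into tiers that become 'market situations'."""
    if points >= 680:
        return "prime"
    if points >= 580:
        return "subprime"
    return "excluded"

for person in records:
    points = score(person)
    print(person["name"], points, classify(points))  # A 690 prime; B 480 excluded
```

Note that no human ever “sees” person B in this pipeline: the tier is an artifact of the weights and cutoffs, which is part of what makes the moralizing force of such systems so hard to contest.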

It is Fourcade and Healy’s occasional habit of referring to “the market” in the singular that Schuster sees as problematic, and it is here that they draw on the Gens manifesto to challenge a dimension of the analysis. Among other things, Schuster points out that “Seeing Like a Market” focuses almost exclusively on the United States, leaving out much of the world and thus ignoring how data systems impact other populations and places. The authors of the Gens manifesto insist on the constructedness and plurality of markets. This set of thoughts leads Schuster to a final suggestion: “[A]s a research program in the social sciences, we might add the account of labor – the labor of producing these scores and measures and data infrastructures – not just the extinct recording individual, but the management of classification, that generates the market so that it can see.”

In her presentation, Goodlad (she/her) examined the article “Reward is Enough” by David Silver, Satinder Singh, Doina Precup, and Richard S. Sutton, a team of computer scientists connected to DeepMind, a subsidiary of Google/Alphabet. Goodlad claims that human-like Artificial General Intelligence (AGI) has been the field’s “holy grail” since at least the 1940s. But, Goodlad points out, that is not what the field has succeeded in producing; rather, it has made narrow tools that succeed at specific tasks, like playing games or modeling language.

Actual human intelligence, Goodlad argues, requires being “deeply enmeshed in negotiating the embodied spatial, temporal, emotional, practical, and intellectual challenges of being immersed in a world of objects.” The word “ontology” itself raises “the condition and materiality of being in, and of, this world.”

In examining “Reward is Enough,” Goodlad points out that, just as Fourcade and Healy draw on Weber’s turn-of-the-century ideas about status and moral action, so the Google-affiliated computer scientists draw on late-eighteenth-century ways of thinking about human action and response to rewards and incentives. More specifically, the computer scientists’ analysis of how AI might be built through reward systems hearkens back to Jeremy Bentham’s utilitarianism and related concepts that have become the bedrock of modern economics and of areas of social science influenced by that discipline. The vision here—as Goodlad describes it—is one of atomized individuals attempting to maximize rewards in a world of incentive structures.

Encyclopaedia Britannica

Goodlad provides a kind of genealogy of assumptions about human action that run through classic utilitarians like Bentham, revisionists (principally John Stuart Mill), modern economists who respond to Mill’s qualitative utilitarianism (such as Amartya Sen), and finally computer scientists who worked on “AI,” including John von Neumann (who adopted Bentham-like ideas for game theory) and John McCarthy (who is widely credited with coining the term “artificial intelligence”). The economistic vision that begins with Bentham and reinvents itself through game theory is built into McCarthy’s concept of intelligence, Goodlad argues, which he defines as “the computational ability to achieve goals in the world.”

Here Goodlad stopped to take stock: “The point I want to emphasize is that insofar as any given social-scientific or scientific field incorporates game theory or rational choice theory into its understanding of human and social behavior it diverges from what has been the dominant understanding in humanistic disciplines and social perspectives for quite some time,” beginning with Mill’s qualitative divergence from Bentham’s quantitative utilitarian calculus.

Yet, it is precisely the economistic understanding of human action that undergirds “Reward is Enough.” Indeed, the authors believe it might be the key to future developments in AI, writing, “the generic objective of maximizing reward is enough to drive behavior [in AI systems] that exhibits most if not all the abilities that are studied in natural and artificial intelligence.”
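For readers outside computer science, it may help to see what “maximizing reward” looks like in practice. The following is a minimal sketch, not drawn from the paper: a classic toy problem (a two-armed bandit, with payoff probabilities I invented) in which an agent with no model of the world learns, by trial and error, to prefer the action that pays off more often.

```python
# A minimal reinforcement-learning sketch (mine, not Silver et al.'s):
# an epsilon-greedy agent learning a toy two-armed bandit.
import random

ARM_PAYOFF_PROBS = [0.3, 0.7]  # invented environment: arm 1 pays off more often
q_values = [0.0, 0.0]          # the agent's running estimate of each arm's value
counts = [0, 0]
EPSILON = 0.1                  # explore 10% of the time, exploit otherwise

random.seed(0)
for step in range(10_000):
    if random.random() < EPSILON:
        arm = random.randrange(2)            # explore: try an arm at random
    else:
        arm = q_values.index(max(q_values))  # exploit: pick the current best guess
    reward = 1.0 if random.random() < ARM_PAYOFF_PROBS[arm] else 0.0
    counts[arm] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    q_values[arm] += (reward - q_values[arm]) / counts[arm]

print(q_values)  # approaches [0.3, 0.7]: reward alone has "driven the behavior"
```

Note how little world there is here: the agent’s entire ontology is two buttons and a payoff signal. The gap between this kind of narrow, task-bound learning and the embodied intelligence Goodlad describes is precisely what is at issue.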

T. Johnson, Charles Robert Darwin (1883)

Goodlad argues that what exactly the authors are advocating in this approach is unclear, in large part because the article is so heavily hedged. As she points out, rewards-based reinforcement learning has proven inefficient in some areas of AI (such as language modeling). As Goodlad puts it, if the paper only means to say that there is still potential in reinforcement learning for narrow AI, it’s no big deal; but if it really is making claims about AGI (in the sense of human-like intelligence), then it appears the authors have fallen into what on Twitter is called #AIhype. Goodlad argues that a core problem with the economistic, game-theoretical picture in “Reward is Enough” is that it is ontologically insufficient. Put another way, reward is not enough because it does not take sufficient account of the real world. She also pointed out that the authors’ fallacious notion that the interaction of a rational agent with its environment will lead to complex intelligence is, “in effect, a reinvention of Charles Darwin’s theory of natural selection…minus 500 million years of evolutionary time and a world full of examples.”

My overarching question for both of these presentations, and for the framing of the Market Ontologies panel itself, stems from a new book project I have begun, tentatively titled A Good History of $%#@ Jobs, which examines material change and employment in the US economy since the 1970s. One thread I am following in the story is that, although there has been a great deal of hype around digital technologies since the 1970s, they have often failed to show significant impact in traditional economic measures. Most famously, with the sole exception of the so-called “New Economy” of roughly 1994-2004, measures of productivity growth—or gains in the efficiency of business processes—have remained stubbornly low since 1970.

Now, productivity is a controversial measure for some folks, but there are plenty of other reasons to be skeptical that “AI” and other recently hyped technologies are having a huge impact. One example is market size: as Jeffrey Funk and I noted in a co-authored piece, by 2000, less than a decade after the Internet was commercialized, e-commerce, Internet hardware, Internet software, and mobile service revenues had reached $446 billion, $315 billion, $282 billion, and $230 billion, respectively (all in 2020 dollars to simplify comparisons). By contrast, Gartner estimates that the global “AI” software market has reached only $50 billion, even after a decade of intense hype. In a more recent piece, Funk examines 20 publicly traded AI companies and finds that only two are profitable, and those two have small market capitalizations. Moreover, he finds that not a single AI startup has broken into the “top 400 global firms in terms of market capitalization.” Contrast this with earlier digital technologies, including computer and Internet companies, which had “achieved top 100 market capitalization status within 10 or 15 years of their founding.”

To summarize: while many people in the worlds of business and journalism are “buying” into AI, in the sense of repeating hype and fantastic projections, not many people are literally buying AI, in the sense of plunking actual money down. They certainly aren’t buying it as quickly or as broadly as they bought earlier digital technologies.

And that is not remotely surprising. In Artificial Unintelligence, computer scientist and journalist Meredith Broussard shows how fantasies about the powers of AI far outpace actual capabilities. And in her interview with me on the Peoples & Things podcast, computer scientist Samantha Kleinberg gives many reasons – from data quality problems that are unlikely ever to go away to minuscule gains in algorithmic efficiency – why AI is unlikely to deliver on the hype anytime soon.

For all of these reasons, I think it is absolutely crucial that humanities and social science scholars describe the scale and scope of AI adoption and its impacts in everyday life when they try to examine it critically. Among other things, focusing on scale and scope is a form of the scholarly reflexivity that Pierre Bourdieu and others have encouraged us to practice: pulling back and examining the larger landscape puts us, as analysts, into the field of play, a field that includes faddishness, enthusiasm, and dramatic claims, and which, as I’ve argued elsewhere, is why many social scientists and humanists have turned to studying AI in the first place.

Alfredo Ramos Martinez, Calla Lily Vendor (Vendedora de Alcatraces) (1929)

I had these thoughts in mind when I was listening to Schuster’s discussion of “Seeing Like a Market.” Most of the article’s focus is on how classifications are being created and with reference to whom; Fourcade and Healy offer much less reflection on who is adopting and using these systems and what impacts they are having. (Here my own approach is informed by a long line of thinking in technology studies, going back to at least the 1950s, that argues that it is through adoption and use that technologies come to affect the material world.) This distinction between invention (the creation of a thing) and innovation (its widespread adoption) is critically important because, as Fourcade, Healy, and Schuster all know, we’ve had oppressive, sexist, racist, and otherwise prejudicial systems of classification (e.g., credit, mortgage, insurance, and information systems) for many years. Can we demonstrate that contemporary data systems are having worse and larger-scale impacts on oppressed communities?

If Jeffrey Funk’s writing about various AI markets is correct, then it might not matter who is making these data systems or how they are doing it, because the reality might be that they are having very little influence at all! So, to ask the question another way: how do we persuasively connect reflections on contemporary data systems to significant material developments?

This line of thinking might be even more dramatic when we turn to “Reward is Enough.” When I sent the Silver et al. piece to a computer scientist friend who works in applied AI and asked her what she thought of it, she said that it amounted to “academic mental masturbation,” that it was so far from anything that could be applied in reality that it wasn’t even worth talking about. But, she claimed, there was one glowing upshot to the piece: “I’m glad you sent it because it made my cognitive scientist collaborator laugh.” Perhaps we garner some insights into forms of ideology circulating in some AI circles by examining ideas like the ones in “Reward is Enough,” but if those ideas are so unlikely to come into contact with material reality, what more do we gain?

One way of seeing the real strengths of Schuster’s and Goodlad’s presentations is that they apply the critical approaches of a long line of thinkers (including Marx, Scott, Foucault, Coronil, Fourcade and Healy, the Gens manifesto theorists, and many others) to the question of how ideas—in this case, both social categories and economistic ontologies—come into the world and the potential hazards they pose. But the puzzle I am left with is how – both in terms of method and methodology – scholars in technology studies can connect this kind of work to pictures of the scale and scope of material change.

I am very glad to have listened to Schuster’s and Goodlad’s presentations, and I’m grateful to them both. The presentations were interesting in their own right, and they helped me further clarify some puzzles I’d been chewing on for some time.
