DATA ONTOLOGIES WORKSHOP #2 ON ONTOLOGIES OF JUDGMENT: CHRISTOPHER NEWFIELD REVIEWS BRIAN CANTWELL SMITH’S THE PROMISE OF ARTIFICIAL INTELLIGENCE

[Data Ontologies is the second in a two-part series of AY 2021-22 workshops organized through a Rutgers Global and NEH-supported collaboration between Critical AI@Rutgers and the Australian National University. Below is the second in a series of blogs about each workshop meeting: ONTOLOGIES OF JUDGMENT/SUBMARINE ONTOLOGIES. Click here for the workshop video and the discussion that followed. The workshop meeting was co-facilitated by Katherine Bode, Mark Sammons, and Lauren M.E. Goodlad, who filled in for Christopher Newfield when he was unable to attend. In place of a blog on the workshop, what follows is Newfield’s review of the book under discussion, Brian Cantwell Smith’s The Promise of Artificial Intelligence: Reckoning and Judgment (MIT Press, 2019).]

CHRISTOPHER NEWFIELD REVIEWING BRIAN CANTWELL SMITH, THE PROMISE OF ARTIFICIAL INTELLIGENCE: RECKONING AND JUDGMENT (MIT Press, 2019).

by Christopher Newfield (Director, Independent Social Research Foundation & President, Modern Language Association). Edited by Lauren M. E. Goodlad.

The current wave of “artificial intelligence” (AI) has tremendous capabilities as well as hype and flaws that its developers cannot shake. The hype is fuelled partly by the headlong commercialization of AI research, which puts a premium on speed and on the mass quantities of capital investment that promise acceleration. The search for investment capital has generated non-stop AI marketing, which in turn strives to vacuum up social tasks and socio-cultural domains of knowledge: urban redevelopment becomes “smart cities,” public transportation turns into driverless cars, and psychology becomes “private traits and attributes . . . predictable from digital records of human behavior.”

Critics have identified systemic flaws in current uses of AI. Four of the most discussed are social biases, particularly racism, that become part of both the code and its use (e.g., Noble); opacity, such that users cannot assess how results were generated; coercion, in that parameters, function, data, etc. are controlled by designers and platforms rather than users; and privacy violations, which result from combinations of bias, opacity, coercion, and commercial priorities in use. A fifth might be added: much AI research takes place at companies like Google, where managers have authority over the publication of research results. Famous cases like the firing of Google AI ethics researcher Timnit Gebru suggest that much or most AI research is happening in the absence of academic freedom, which puts researchers at risk while also distorting research by allowing the suppression of findings that don’t fit a rollout narrative or corporate image.

Photo illustration by Lisa Larson-Walker. Photos by Thinkstock, Flickr CC.

These five flaws could seem like little more than growing pains when we consider the power and potential of AI and also the reputation for intellectual rigor in the fields involved, including computer science, cognitive psychology, and various kinds of engineering. Brian Cantwell Smith’s The Promise of Artificial Intelligence: Reckoning and Judgment makes a key intervention on this point.  Although Smith confirms the potential “promise” of AI, the book shows that defining intelligence (in its various modes) is a necessary step prior to arguing about which entities do and do not have it.  Crucially, Smith builds his analysis around a historical account of what “AI” has been, what it is in the present day, and what it must become before the technology delivers on its promise. (I use scare quotes around “AI” because, as many commentators before and since Smith have emphasized, the meanings of “artificial” and “intelligence” are deeply undertheorized.) Smith argues that second-wave AI, as he terms the contemporary version, is a kind of “reckoning” that, for all its potential, stands apart from the “judgment” with which philosophy and other humanities fields tend to associate high-level human intelligence. To be successful, AI’s designers must ensure that any technology tasked with decision-making power meets the rigorous standards of judgment laid out in the book.

Yet while Smith carefully distinguishes between the rigorous judgment that marks human intelligence at its height and the “formidable reckoning prowess” of today’s AI systems (xiii), he leaves open the possibility that machines may one day advance beyond present-day capacities. The question is how. Though non-specialists probably do not realize it, first-wave AI, or “Good Old-Fashioned AI” (GOFAI), is remarkably distinct from the current state of the art. In its mid-twentieth-century heyday, GOFAI (also called “symbolic AI”) hypothesized that since computers manipulate symbols (by rendering them as sequences of ones and zeroes), it is possible to encode the actually existing world and its objects through elaborate symbolic representations. By contrast, second-wave AI (often described through a rhetoric of “deep learning” and “neural networks”) involves the statistical mining of huge troves of digitized data. This is the shift that Smith so carefully and usefully explores.

In fact, the idea that computers might “learn” by translating data-derived patterns into strategies for optimized processing arose contemporaneously with GOFAI. But it was only the advent of parallel computing (for speed and power) and “big data” (through which terabytes of digital information became available through the internet and myriad devices) that enabled such “machine learning” (ML) systems to achieve eye-popping results. In the 2010s, so-called deep learning (a complex variation of ML) leapt to the fore in competitions over image recognition and machine translation, culminating in AlphaGo’s much-publicized defeat of one of the world’s best Go players in 2016. Powered by these noteworthy achievements, the comparatively modest language of “machine learning” gave way to triumphalist talk of (and hype over) “AI.”
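To make the contrast concrete, here is a deliberately toy sketch in Python; the spam-filter task, the data, and the function names are my own invented illustration, nothing from Smith’s book. The first function hand-encodes a symbolic rule in the spirit of first-wave AI; the second derives word weights statistically from labelled examples, in the spirit (if not the sophistication) of second-wave machine learning.

from collections import Counter

# First wave (GOFAI-style): the designer hand-encodes a symbolic rule meant to mirror
# how the world "really" divides up.
def gofai_is_spam(message):
    SPAM_WORDS = {"prize", "winner", "free"}     # a brittle, designer-chosen ontology
    return any(word in message.lower() for word in SPAM_WORDS)

# Second wave (ML-style): no rule is written down; word weights are inferred from
# labelled examples, and classification rests on statistical correlation, not logic.
def train_word_scores(examples):                 # examples: list of (text, is_spam) pairs
    spam, ham = Counter(), Counter()
    for text, is_spam in examples:
        (spam if is_spam else ham).update(text.lower().split())
    return {w: spam[w] - ham[w] for w in set(spam) | set(ham)}

def ml_is_spam(message, scores):
    return sum(scores.get(w, 0) for w in message.lower().split()) > 0

examples = [("win a free prize now", True), ("you are our lucky winner", True),
            ("lunch at noon tomorrow", False), ("free shipping on your order", False)]
scores = train_word_scores(examples)
print(gofai_is_spam("free shipping on your order"))       # True: the rigid rule misfires
print(ml_is_spam("free shipping on your order", scores))  # False: the learned weights track the data

The toy learner succeeds here only because its training data happen to cover the case; as the review goes on to note, that dependence on data is exactly where ML’s own problems begin.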

Shutterstock

Smith, however, looks past the hype to provide a sustained historical and philosophical comparison of the two AI approaches in their successive waves. According to this account, GOFAI failed because it “misconceived the world” (37): that is, GOFAI’s formal “representations” were too rigid to answer to real-world complexity. (As humanities scholars read about GOFAI’s fundamental blindness to standpoint, context, and society, they may experience a kind of déjà vu, since the critique recalls arguments advanced by deconstructionists, feminists, postcolonial theorists, and others in the 1970s and after.)

But since Smith’s point is to influence the AI research community, he does not stop with critique: he goes further by laying down a full-blown account of the ontological requirements of judgment (whether human or, as may one day be possible, mechanical). There are, he notes, two “‘views’ or registrations of the world” relevant to the design of any system. (Note that Smith’s use of registration stands apart from his use of representation, the latter of which he uses, like most computer scientists, in a technical sense: to stand for an AI system’s encoded information about some aspect of the world, as in GOFAI’s symbolic maps or ML’s data inputs. By contrast, when Smith speaks of “registrations of the world,” he means the situated and contingent perceptions of the world that an entity generates.) The first of the two relevant registrations “is the designer’s or theorist’s registration of the world in which they are building the system, and into which they will deploy it” (37). “The second is that system’s own registration of the world” that it will use to behave in the world as it does (38). Smith states, “There is no reason to suppose the two registrations will necessarily align” (38, emphasis in original). Thus, the theorist-designer is in a triangular relation to the AI system and the world. This means that the system’s relation to the world—which may be delivered through a camera or audio sensor enabling machine perception and/or through data provided by humans—is relatively independent of the designer’s intentions and algorithmic instructions. Not only may the system dream in total darkness (to invert the song), but the designer cannot assume that an AI’s relation to the world will correspond to the original design.

GOFAI’s failures in the 1960s and 1970s rested on a Cartesian understanding of thought, which Smith breaks down into four faulty propositions: intelligence is thought, thought is logical inference, perception is at a lower level than thought, and the world has a “formal” ontology. From this reductive standpoint, objects appear to be discrete and well-defined, and to stand in unambiguous relations to each other. GOFAI’s designers believed that their encoded representations of the world were adequate because, as unreflective Cartesians, they took the unique affordances of human perception for granted. “It turns out,” Smith writes, that the world of discrete objects humans are wont to perceive “is the delivery, to our consciousness, of the results of an exquisitely sensitive” perceptual apparatus “with 100 trillion interconnections honed over 500 million years of evolution” (25-26).

This disconnect between human perceptions of the world and the world’s actually existing complexity (which humans can never directly know) is a crisis point for GOFAI. As humanist readers will know, it is also the site of a couple of hundred years of language theory on its course from “natural” to “conventional” models of sign systems. Smith makes the point this way: “In general, semantic relations to the world (including reference) are not effective” (12, bold in original). By this he means that “the presence of a semantic relation cannot be causally detected at either end.” You can’t decide “what a representation represents . . . by local measurement or detection.” “[N]o wave of discernible energy travels along the arrow of intentional directedness” (13). No person or machine can determine meaning by “looking out” at the world. I won’t belabor this crucial point because it is in fact familiar to humanities scholars: the relation between sign and object is never one of correspondence or causal linkage. It is always mediated by other signs. It is a matter of interpretation, in which the relation is constructed by the interpreter(s), who must juggle multiple factors in a dynamic framework composed in part of the competing interpretations of others. As Smith puts it, “the world’s being beyond effective reach is the only reason we need reasoning at all” (15, emphasis in original).

Once we agree that knowledge of the world requires interpretation, we have to say what interpretation is and does. This is a core area of humanities expertise and a core area of historic AI failure.  GOFAI was wrong on several big things. The human brain doesn’t work the way GOFAI assumed (so far as we know, brain operations derive their power from “massive parallelism” [23]). Moreover, human perception, as we have seen, evolved to tame the world’s complexity. It follows that the Cartesian model is wrong: humans make sense of things by compiling information about the world whose elements are not logically related to one another and that mostly consist of “unconscious background.” Knowing also derives in some complicated way from the not-present unknown. GOFAI’s epistemological errors thus rest, for Smith, on its error of ontology, the false idea “that the world comes chopped up into neat, ontologically discrete objects” (28).

From Rene Descartes’ A Treatise on the Formation of the Foetus (1664)

“So GOFAI failed” (43). But did second-wave AI succeed? Smith’s answer is both yes and no. To be clear: what came to be called machine learning (because of the ability of the software to update itself) and especially deep learning (a kind of ML that involves multiple layers in a complex architecture of statistical message-passing) largely developed without any explicit contrasts to GOFAI and its limits. The related term neural network (signifying the message-passing architecture) originated in 1943. Such data-driven technologies had long had their acolytes, but in the age of “big data” and fast computers, machine learning systems that mobilized “deep” architectures began outstripping other methods in tasks such as object recognition, language processing, and games.

Some readers will recognize how AI’s embrace of “big data,” while part of a much-hyped jargon, also constitutes an important object of humanities analysis. Large-scale machine learning, as Smith explains, uses “massive amounts of information . . . involving a very large number of weakly correlated variables” (49). I once spent the better part of a multi-year NSF grant arguing that a humanities approach to innovation theory would feature “weak signals”: hidden or marginalized elements that may suddenly become powerful and therefore point beyond the visible horizon. In registering these, ML took an epochal step beyond the brittle formalizations of GOFAI: instead of elaborate sets of rules and logical chains, ML draws its power from pattern-finding at scale.

Jackson Pollock, Convergence (1952)

That said, while ML’s turn to data-mining addressed several problems with first-wave AI, it did not solve them all and, of course, it created problems of its own. ML systems are at their most useful in revealing the enormous predictive power of statistical correlation (56). ML’s granular affordances likewise take perception seriously: “it turns out that what makes faces distinctive are high numbers of complex weakly correlated variances across their expanse.” ML also offers meaningful alternatives to the Cartesian illusion of discretely knowable objects. Rather than discarding details in favor of abstract generalization, ML

avoid[s] categories altogether.  While we humans may classify other drivers as cautious, reckless, good, and impatient, for example, driverless cars may eschew discrete categories and chunking entirely, in favor of tracking the observed behavior of every single car ever encountered, with that data then uploaded and shared online—participating in the collective development of a profile of every car and driver far in excess of anything humanly or conceptually graspable (60-61). 
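Here is a minimal Python sketch of what such tracking might look like; the scale, feature names, and random data are my own placeholders, not anything from the book or from any real driverless-car system. The point is simply that a system can compare cars by fine-grained behavioral profiles without ever assigning them to a category.

import numpy as np

# Humans might label drivers "cautious" or "reckless"; a system can instead keep a
# high-dimensional trace of fine-grained, weakly correlated measurements for every car.
rng = np.random.default_rng(0)
n_cars, n_features = 1000, 50                         # e.g., braking latency, lane drift, following distance ...
observations = rng.normal(size=(n_cars, n_features))  # placeholder numbers standing in for real telemetry

def most_similar_cars(car_index, k=5):
    """Return the k cars whose observed behavior most resembles this one (no categories involved)."""
    distances = np.linalg.norm(observations - observations[car_index], axis=1)
    return np.argsort(distances)[1:k + 1]             # skip index 0, which is the car itself

print(most_similar_cars(0))                           # neighbors found "underneath" any classificatory level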

It’s worth fully appreciating the profound implications of avoiding categories—what this does to our understanding of how the world is organized and to ML’s potential to grasp patterns at scales to which human cognition is largely blind. “Much of ML’s power stems from its ability to track correlations and make predictions ‘underneath’ (i.e., in terms of vastly more detail than is captured in) the classificatory level in terms of which [the human’s] high-level ontology and conceptual registration is framed” (67). But it’s also worth noting how easy it is for ML’s boosters to mistake the size of their dataset (which can entail the whole of the scrapable internet) for an immanent totality. The paper that led to Google’s firing of Gebru made this very point. Large though it is, the internet discourse on which Google’s largest language models are trained overrepresents white middle-class male English speakers at the expense of everyone else.

To put this another way, avoiding categories does not mean avoiding bias. Contrary to claims based on the averaging effects of very large numbers, among other statistical features, finding patterns in big data sets leads as readily to “algorithms of oppression” (to borrow Noble’s term) as do other methods. But it does mean that ML leaves in what Cartesian systems push out. Though any given model that ML produces will thus invariably reproduce the exclusions or distortions of its training data, the technology enables us to envision the world through a post-Cartesian lens: “A vastly rich and likely ineffable web of statistical relatedness weaves the world together into an integrated ‘subconceptual’ whole. That is the world with which intelligence must come to grips” (64; also 67).

Here we begin to discern the core problematic of Smith’s book: ML has introduced us to an outsize non-human ontological scale that only human intelligence—especially judgment—can evaluate. At least for now, ML utterly lacks that capacity. Instead of judgment, ML delivers reckoning: through troves of granular data, ML encounters a “world of stupefying detail and complexity” (67) which, through energy-intensive training, it translates into special-purpose models: encoded weights distributed through complex message-passing architectures (75). As such, ML can deliver extraordinarily useful predictions (about, say, the weather or the likelihood that a digital image includes signs of cancer) or fatally flawed predictions (e.g., the correlation of female names with unsuitability for work at a tech company, the correlation of majority-Black zip codes with credit risk, the prediction that a woman walking her bike across the road will travel at the same pace as if she were riding the bike).
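To picture what such “encoded weights” amount to, here is a minimal sketch with randomly generated placeholder numbers (a real system would learn its weights through training on data): once trained, a model is essentially arrays of numbers through which an input is passed to produce a score, with no awareness of what the score is about.

import numpy as np

# Placeholder weights; in a real system these arrays would be learned during training.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def predict(x):
    hidden = np.maximum(0, x @ W1 + b1)             # one layer of "message-passing" (ReLU)
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))    # a probability-like score: reckoning, not judgment

print(predict(np.array([0.2, -1.0, 0.5, 0.3])))     # a number is produced; nothing is "known"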

According to Smith, the key feature of intelligence, epitomized in the act of judgment, is that an entity knows what it is talking about (76). It needs to know its own state, that there is an “external (distal) state of the world,” that it and that distal state are not the same thing, and that its relation to the world is via representations. In contrast, Smith writes, “all existing AI systems, including contemporary second-wave systems, do not know what they are talking about” in these senses (76). They do not know, that is, that “they” are “talking” about a world from which they are separate and towards which they have taken a particular stance whose elements Smith itemizes in a series of lists (see in particular the one forming Chapter 8, “Objects”). 

William Frederick Yeames, Defendant and Counsel (1895)

The minimum prerequisites of intelligence are “authenticity, deference, and engagement in the world” (76). Authenticity refers to entities “that register the world (or that we register as registering the world) in ways that are appropriate to, and do not exceed, the forms of their existential engagement in the world.” Smith associates authenticity with creatures—“systems, including animals”—that register the world through a “fit” between “how they take the world, or whatever patches of it they take” and “how they conduct their lives, what they care about, and what they are vulnerable to” (107). Authenticity means acknowledgement of this fit, built up over time and, in the case of animals, through evolutionary processes.

Deference means that the system itself is deferential, “that itself submits to the world it inhabits” rather than simply fitting with our deference to the world (79).  Deference must include the features I’ve noted—knowing there is a world, knowing that the “representations” that guide the system are about that world, and then giving the world priority over these representations. Smith also applies another term, responsibility. Genuine intelligence in AI systems requires the ability to “take responsibility for the adequacy of the abstractive ontologies in terms of which they register the world” (80). I’d attach two further features to deference.  One is vulnerability: since deference involves giving the world priority over the entity’s representation, it induces vulnerability to the world and an ongoing need to take this vulnerability into account during registration. The other is the need to recognize that registration, in effect, is a project of multiple registrations: we cannot trust a system’s account of the world unless it can “shoulder responsibility for acting in constant light of” the fact that “all ways of registering the world are partial, skewed, appropriate in some circumstances and inappropriate in others” (90). These are qualities that may possibly be programmable, but, whether human or machine, they require rejection of dreams of a total knowledge that applies to all entities and contexts.

Engagement means what it sounds like—the knowing entity in continuous interaction with, and immersion in, the world. Two aspects strike me as especially important. One is that the knowing system has to take itself as a knower—or take itself as a self (93-94). As Smith puts it, “a certain form of ‘self-awareness’ is necessary in order to achieve the requisite detachment to be able to see an object as other, in order for the system to hold itself accountable for being detached and holding the object to account” (94). This is a profound complication of intelligence.

So is the other aspect of engagement, which Smith calls commitment.  “The system (knower) must be committed to the known, for starters.  That is part of the deference” in which one defers to the object in order to know it. But there’s a further matter, in which the knower must be “committed to tracking things down, going to bat for what is right,” and feeling in some deep way existentially “bound by the objects” (93). Intelligence, Smith is saying, depends on an underlying awareness of existence: both one’s own existence and the existence of the world.  For Smith, epistemology is prior to ontology, but we can add that a feeling of existence is prior to them both—and a fundamental precondition of intelligence.

What happens when these features of “knowing what one is talking about,” plus “authenticity, deference, and engagement in the world,” combine to generate the capacity for judgment? I’ll break out a feature of engagement I just mentioned: late in the book, Smith puts judgment in terms of “commitment” to “a third level” of accountability beyond representation (such as a machine’s data structure or a human’s image of a person) and beyond registration (as in a situated perception of the world). As Smith notes, both GOFAI and ML are stuck at the first level (representation), which makes the third level even more daunting: “commitment to that which is registered a certain way” (142-43). It is this commitment to the thing in the world being registered that Smith posits as the necessary precondition of genuine intelligence—or judgment.

Smith intends the existential overtones. This is not because the genuinely intelligent system must be a knower in the sense of an organic entity. It is because commitment, a deep epistemic ability to choose, depends on knowing the extent to which one’s model, concepts, data, frameworks are “adequate to whatever circumstances are at hand,” and that knowing in turn depends on commitment to that adequacy—in other words, commitment to holding one’s registrations accountable to what they register.  This is for starters an ethical commitment. The entity has to care whether facial recognition software rates darker-skinned people as threats more frequently if the entity is going to address or fix this bias, and that care depends on commitment to non-racist facial recognition software and to non-racism more generally. In order for AI to be intelligent in this way, it has to be the software’s commitment.  The software cannot borrow the engineer’s personal anti-racist commitment. For the software to be intelligent, it needs to have that commitment itself.

 My list of Smith’s “genuine intelligence” (judgment) now has seven features.  It keeps GOFAI’s “articulated intelligence,” but stripped of its ontology.  It keeps second-wave AI’s massive “scope of registration” of non-discrete, not really categorized data. It “knows what [it] is talking about.”  It also shows “authenticity, deference, and engagement in the world.” Finally, it shows commitment to the world that it is registering.

Late in the book, Smith asks, “what could lead a system to judgment?” (130). He insists we need to set aside second-wave AI’s belief in “blanket mechanism,” in effective semantics, and in progress through “more of the same”: “increased processing power, accelerated practical development . . . [more] effective mechanisms, algorithms, architectural configuration, etc.—none of which are of the right conceptual sort to lead to insight about deliberation and judgment” (130).

Photo By GraphicaArtis/Getty Images.

Instead, judgment will emerge from something more like massively parallel historical learning. Smith invokes childrearing. I would invoke formal and informal education. “Developing judgment in people requires steady reflection, guidance, and deliberation over many years—interventions, explanations, and instruction in situations where judgment is required, even if that need is not superficially evident” (130). Smith means situations where one can perhaps say what the rules are, even as the rules don’t work. Or, the rules might work but one doesn’t know how to apply them. Or there may be ambiguous rules, or rules that don’t yet exist and have to be invented or induced, or wrong rules that must be actively rejected. This process of fitting practices to vast numbers of cases is endlessly iterative and requires redundant and conflicting information from multiple directions. It amounts to education that is both systematic and experimental, organized and unplanned, inside and outside institutions.

As Smith continues, this education takes on a familiar hue:

The issues, symptoms, and recommended courses of action are deeply embedded in human culture; the patterns of behavior and thought that we hew to have been crafted over centuries of civilization (including in the religious traditions that have served as stewards for ultimate questions in all major historical civilizations).  The stewards of a child’s education—parents, schools, religious institutions, mentors, literatures, communities, and so on—bring to bear attention, wisdom, and reflection that build on this cultural legacy, skills distilled over many generations, and handed down from one generation to the next. (131)

Smith is describing not only parenting and schooling but liberal arts and sciences education. The idea is not to dilute liberal arts learning with technical skills (though the latter do need to be generalized across majors now, more than sixty years after C. P. Snow complained about a split between two cultures!). The idea is to intensify this education, making more systematic, or at least relentless, its practices of multiplied frameworks, interpretative conflicts, hermeneutic dilemmas, and experience with stepping into and out of various situations and perspectives.

Joseph Wright “of Derby,” An Experiment on a Bird in the Air Pump (1768)

There is something quite moving about Smith’s insistence on our limits, and on the world’s. He is saying that these limits are the source of our intelligence. A key source of the power of these limits is that they make us do all sorts of interpretative things, most of which remain central to our least powerful academic disciplines and the creative arts. Smith is saying that intelligence is held back by the current overfocus on computational power and programming (reckoning). He is also saying that AI’s first and second waves have taught us the limits of both idealist abstractions and delusions of scale. And he is also saying that genuine intelligence emerges from computational power only when (and if) embedded in the infinite learning practiced by something resembling a self, one situated in natural, social, and historical worlds. The arrival of third-wave AI depends on bringing computational and experiential disciplines—now divided between STEM and non-STEM—into detailed, unbroken, and also epistemically egalitarian contact with each other as we have never done before.
