Critical AI

GUEST FORUM: ELI ZARETSKY’S “WHAT COMPUTERS CAN’T DO”

[This blog post reprises Eli Zaretsky’s “What Computers Can’t Do,” originally published in the London Review of Books and republished with permission. The original article can be found here.]

By Eli Zaretsky


The question of what computers can’t do was posed in 1972 by the philosopher Hubert Dreyfus. Dreyfus’s answer – think creatively – was soon considered an error, but the problem remained. In the New York Times last week, Yuval Harari suggested there is nothing that computers can’t do, as they are learning to mimic humans perfectly. David Brooks was more sanguine in the same paper a few months ago:

AI will probably give us fantastic tools that will help us outsource a lot of our current mental work. At the same time, AI will force us humans to double down on those talents and skills that only humans possess.

But Brooks’s claim is mere assertion, since he never tells us what these ‘talents and skills’ are, nor why they cannot be replicated by machines, as Harari claims.

It is difficult to distinguish human from machine intelligence because we use the same underlying philosophical and psychological understandings of the mind to discuss both. We think of human beings as essentially rational, problem-solving, goal-oriented animals – an idea that long antedates neoliberalism. At the same time, we think of the computer as a problem-solving calculator, though one with access to far more data than an individual person. The main alternative to this paradigm – psychoanalysis – has long been discredited. Nonetheless, I want to propose a psychoanalytic answer to the problem Dreyfus posed. What computers can’t do is free associate.

Free association occurs when the normal functioning of the mind, which is outward-directed, rational, calculative and problem-solving, is suspended, and a different kind of thinking spontaneously arises. This thinking is free association. We all free associate normally, for example in daydreaming, while doing crossword puzzles or when trying to remember where we left something. In such cases we relax into a state of free-floating attention, rather than concentrate on solving a problem. This practice was the original foundation of psychoanalysis.

Free association reflected Freud’s original (and never abandoned) distinction between primary (i.e. unconscious) and secondary (preconscious but not yet conscious) processes. The primary process is the realm of free (undirected) associations; thoughts do not know where they are going; they proceed by association, which results either in condensations or displacements of affect; they contradict each other, more or less along Derridean lines. The secondary process, by contrast, is governed by a grammar or logic; thoughts have direction, meanings can be specified; thinking is essentially calculation. Freud’s discovery was that by suspending the secondary process, and thereby facilitating free association, it was possible to infer the memories, traumatic residues, wish-fulfilling fantasies, parental imagoes and so forth that give shape to our ‘free’ associations.

The early discoveries of psychoanalysis all rested on the primary/secondary distinction. In a dream, a preconscious thought that would disturb sleep (a fear or wish) drifts back into the non-logical, imagistic, associational world of the primary process, where it connects with early fantasies, wish-fulfilments and unconscious imagoes until it makes its way forward again in the form of a wish-fulfilling dream. Poetry too requires the primary process. In ‘On First Looking into Chapman’s Homer’, Keats begins with the secondary process experience of reading Homer, but then descends into archaic memories of a time in childhood when reading seemed like travelling, and the child could imagine himself ‘silent, upon a peak in Darien’. The archaism of the Homeric world converges with the archaism of infancy.

Computers can of course write poems; they can associate, and construct images gleaned from all the world’s literature, but they will never act like a little boy who began reading books when he was very young and imagined himself growing up to become an explorer. Computers have had no infancy, therefore no primary process, and no free associations because they have nothing to free associate to. Computers can solve problems, calculate, increase scientific knowledge or endow us with the powers to act on the world, but they cannot turn inward, become passive and receptive, and discover an inner world, since computers have no inner world to discover.

Since computers perform the same instrumental, problem-solving, goal-oriented functions that we do, the fear that they might someday ‘compete’ with us is not unreasonable. The only edge humans have over computers is access to the unconscious through suspension of rational thought: i.e., free association. But what a paradox! Free association is precisely the capacity that we use less and less. Society today has reduced free association to playing word games or finding lost keys. To understand how this occurred, we need access to the unconscious.

The idea that the mind is a sort of computer goes back to the seventeenth century, to figures such as Hobbes, Descartes and Pascal, but it really gained ground in the mid-twentieth century, with the growth of cybernetics. The basic idea behind cybernetics was to bracket off the questions of subjectivity and interiority that pervaded the Freudian age and to focus instead on prediction and control gained through the gathering of objective, behaviourist information or data. While the cybernetics movement did not survive, a data or network-based view of the world gained ground. The turning point occurred in the 1980s, following the advances in microprocessor technology that made home computers possible, leading to today’s ubiquitous screens and interfaces, feedback loops and circuits, information cycles and supply chains.

The triumph of the computer was accompanied by the destruction of psychoanalysis. According to the cognitive psychologists, information theorists and computer pioneers of the 1940s and 1950s, psychoanalysis was a pseudoscience. Managed care, which arose in the 1970s, insisted that psychotherapy proceed according to the ‘medical model’, meaning that rationality was taken as the norm and mental ‘problems’ were defined as diseases that could be classified through their symptoms, while ‘cures’ were defined as the reduction of symptoms. While a psychoanalytic profession continued to exist, it adapted to the new regime by turning itself into a problem-solving, service profession, not one oriented to the exploration of the unconscious.

You might think that the turning away from free association toward the medical model occurred through secondary process thinking: i.e., the progress of science or the integration of psychotherapy into the world of computers. This would be wrong, however. The triumph of our contemporary ‘post’-Freudian way of thinking about the mind required the collective mobilisation of vast fantasies, powered by emotions and advanced by the social movements of the 1970s. These fantasies characterised psychoanalysis as a movement that hid child sexual abuse (leading to a wave of false accusations against childcare workers) and as a countermovement to feminism, propagating the idea that women were inferior. While there were grains of truth in these accusations, the much larger truths of the unconscious and the universality of homosexuality and bisexuality were suppressed. The behaviourism of the social movements of the 1970s was reflected in the insistence that the mind was the product of society. As Juliet Mitchell wrote of the movements of her time, for them ‘it all actually happens … there is no other sort of reality than social reality.’

While the discovery of free association dates to the 1890s, Freud later formulated a second way of thinking about the mind, which built on and incorporated that discovery – the ideas of the ego and the id. The ego, as Freud conceived it, was the locus of the secondary process, but its borders with primary process thinking were permeable. The id was the source of the impulses, compulsions and narcissistic fantasies that pervaded the primary process. Freud envisioned an ego that could free associate and thereby maintain a sense of its unconscious environment. In the 1970s, the Freudian ego gave way to egoism in the form of ambition, competitiveness and other corporate values. As was frequently said at the time, in earlier social movements radicals were working for others but now radicals were working for themselves. The neoliberal redefinition of the subject in egoistic terms was a surface phenomenon, however. The embrace of egoism rested on the narcissism that emanated from the id. What Foucault called ‘productive power’ – self-generated and self-managed – required a libidinal basis. Market values, infused by egoism, rested on mass psychological processes.

We seem now to be coming to the end of a centuries-old process. The term ‘artificial intelligence’ was coined in the mid-twentieth century, but the reality of organising society in the form of a series of algorithms goes back to the seventeenth century, with its focus on ‘matter in motion’. What else is the market but a conglomeration of calculating liberal agents? E.P. Thompson demonstrated the importance of the introduction of clock-time to an increasingly regimented – especially self-regimented – society. Alan Trachtenberg did the same for railways. Moishe Postone, building on the work of Georg Lukács, showed how all secondary process thinking in modern society is formulated on the template of the commodity. What Freud adds to this is awareness of the market’s phantastic sub-structure.

Given this history, reflecting on the destruction of psychoanalysis and the triumph of computers in the 1970s, the problem of artificial intelligence is ill-posed. The danger is not that we will someday have to go to war with computers powered by artificial intelligence. The danger is that we will become a species of artificial intelligence ourselves.

To share your ideas or offer advice please feel free to comment below (the comments are moderated) or write to the author.

Eli Zaretsky: zarete@newschool.edu
