The Machine Wakes (or is about to)
AI could soon meet AC—artificial consciousness. Should we match-make, educate them, or stand in their way?
I have always factored machine consciousness into the way I’ve thought about the future. As a child in Coatbridge, I would play android doctor. The thrill was to replace my brothers’ innards with rods and circuit boxes (knitting needles and Lego boards). Somewhat later, when a materialist mindset cleared out all the Catholicism and divinity it could find in me, I began to read around and wait for the big news.
I adhere to the idea that any mental phenomenon that arises from evolution - like intelligence and consciousness - need not emerge only from biological structures. Indeed (though the thought chills many around me), I even embrace the prospect of a new machine consciousness as a potential next stage in evolution.
I often pretend this is for good astrophysical, or at least science-fictional, reasons. Our space-operas swarm with fleshly, embodied creatures. However, we know that space radiation tries to kill off anything that strays beyond 5 miles upwards, and achieves it (one way or another) if you linger too long up there. I once interviewed a Californian private rocketeer for BBC Radio Scotland, who sealed my convictions with his growled phrase, “we really don’t need to bother about spam in the can”. It has long seemed to me that “to boldly go where no robot has gone before” is a necessary rewrite.
SO the invitation to a Radio 4 Reith Lecture at Edinburgh University on artificial intelligence, presented by one of the eminences of the field, Stuart Russell, was worth wrangling an extra night in a hotel room for. This lecture, the third of four, was on AI, the economy and employment (the full series is broadcast from the beginning of December).
There were many useful insights, especially Russell’s idea of the “inverted U” of human-labour-replacing technology. Traditional economists are right about the start of the U: a better paintbrush, and then a sponge roller, can make it cheaper to quickly paint your house, which creates more demand for house painters. So initially employment rises with smarter tech, instead of falling.
But when the machine becomes 100 coordinated paint-spraying robots, tirelessly Duluxing, human employment topples down the other side of the inverted U. And there you see the revolutionary outcomes of what Russell calls “general-purpose AI”. Russell is confident that most routine tasks and services in human society will eventually be subsumed by the equivalents of paint-bots - swarms of artificial artificers, made of code and/or plastics and metals.
Reassuringly, in the face of this displacement of human labour, Russell turns to John Maynard Keynes’s anticipations of 2030, made a century before: the task for humans by that fully automated date would be “how to live wisely and agreeably and well”. As expected, Russell moots a Universal Basic Income; but he also calls for what amounts to a new science of human relationships. Once the background hum of general-purpose AI is installed, our capacity to relate and create with unique and singular others is what will primarily be left to the human realm. Optimising human development should become the focus of our R&D, and of our coming vision of employment, suggested Russell.
I will confess that my hand shot up to object. On the inside, this is what my question sounded like: “You’re holding out the prospect of a realm of genuine freedom for humans, a liberation from compelled routine labours both mental and physical—and you want to immediately subject that freedom to behaviour modification and job specs? From the other side, planetary boundaries are pressing down on production and consumption, urging their reduction in any case (because of their contribution to warming). So isn’t this a massive opportunity to redefine human activity? Are we scared of this fully expressive future?”
I didn’t get much of an answer, other than the assurance that AI will help us calculate better efficiency savings, and thus contribute to the green effort. But then the host came back to me, asking me as a musician whether the intellectual power of AI felt like a threat to my own creativity. I responded by citing an AI’s recent completion of Beethoven’s unfinished 10th Symphony, whose generated passages outside experts couldn’t distinguish from the human-composed parts.
“But what if AI develops under evolutionary rules, like human consciousness has, with no goal”, I responded, “and this produces a very different, even an alien consciousness? Now that would be something to be creatively inspired by…” The answer was blurred in a welter of other questions, probably the best one asked by the host Anita Anand: could an AI ever deliver the Reith Lectures? Russell rightly pointed to the DeepMind algorithm that has cracked protein-structure prediction, and suggested that program might at least be worthy of part of a Nobel Prize. “But of course, they’ll never be capable of a Reith lecture, like this human”. Cue a wave of relieved laughter.
Yet if you scan the horizon, you’ll find these chortles may be premature. The question is not whether an AI, digesting enough research as it machine-learns, could simulate a Reith Lecture (if tasked to do so). It’s whether an AI might, at some point, have its own reasons, motives and drives for doing so.
Do I contradict myself? I’ll be honest: I struggle to reconcile these two mega-consequences of AI. That it could be the beginning of true freedom in human history. And that it could be the first instance of what truly succeeds humanity.
FOR a while, I’ve been tracking the research of the Cape Town neuropsychoanalyst Mark Solms. His new book The Hidden Spring is a tour de force of consciousness studies, and concludes with a startling claim. Consciousness evolved so that we could register our feelings—from basic attraction and aversion, all the way up to the emotional labyrinths of Shakespeare and Dostoyevsky—which then guide our actions towards survival, and hopefully flourishing.
But Solms’ use of the mathematics of Karl Friston—which explores how any organism must construct some kind of “inside” to help it cope with what’s “outside” the edges of its body, by means of a “Markov blanket”—seems to have led Solms to a radical conclusion. Which is that consciousness-of-feelings can be generated in an embodied robot, knocking its way around its world, just as much as it might be from an evolved biological organism.
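For the mathematically curious, the Friston quantity at work here can be sketched (in its standard textbook form, not Solms’ own notation) as the variational free energy an organism minimises:

```latex
% Variational free energy F for observations o and hidden states s,
% where q(s) is the organism's internal model and p the world's generative process:
F(o) = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
     = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\text{model inaccuracy}} \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Minimising F keeps surprise bounded: the “inside” (the model q) is continually adjusted so that the “outside” (the observations o arriving across the Markov blanket) stays predictable.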
Solms is setting up his lab to build this first, genuinely conscious-feeling robot. He proceeds in the spirit of Richard Feynman’s axiom, “what I cannot create, I do not understand”: if we can build it, we can understand it. Thus might consciousness, finally, be understood. But Solms has already thoroughly barricaded his experiment. If his team can verify moments of embodied consciousness in his device, they will immediately shut it down, and subject the process to a massive ethical moratorium. Why? Because this will be the birth of a feeling machine-organism. Which will have survival (and thrival) agendas that it must compulsively serve.
In his first Reith lecture, Russell denied Solms’ hypothesis: “no one in AI or any other discipline has any idea how to create, prevent, or even detect consciousness in machines—or even, if you think about it, in functioning humans.” Solms would precisely beg to differ (a professorial debate, please!). But the cautions about a general-purpose AI that Russell explores in his final lecture match Solms’ Frankensteinian anxiety exactly.
AI ethicists are often gripped by the conundrum of the Sorcerer’s Apprentice. What happens if you poorly specify goals for relentlessly executing AI and robotics? You create brooms that won’t stop cleaning, and indeed exponentially multiply the cleaning. Russell wants future AIs to be constructed in a specific way to avert this nightmare. In this construction, they will defer to human imprecision; to our capacity to change our minds; to our not quite knowing what we want. Russellian AIs will treat human openness and tentativeness as the supreme determinant of their actions. Check, first, that this is what humans want.
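As a toy illustration of that last injunction (my own hypothetical sketch, not Russell’s actual formalism of “assistance games”), imagine a broom-bot that treats its model of human preferences as uncertain, and defers to us whenever that uncertainty is too high:

```python
# Toy sketch, assuming a made-up API: an agent that only acts when it is
# confident the action is what the human wants, and otherwise defers.

def choose(actions, preference_beliefs, ask_threshold=0.9):
    """actions: list of action names.
    preference_beliefs: dict mapping action -> estimated probability
    that the human actually wants it.
    Returns the chosen action, or 'ask the human' when unsure."""
    best = max(actions, key=lambda a: preference_beliefs.get(a, 0.0))
    confidence = preference_beliefs.get(best, 0.0)
    if confidence < ask_threshold:
        # Defer: check, first, that this is what humans want
        return "ask the human"
    return best

# The sorcerer's-apprentice broom, certain of its goal, never stops:
print(choose(["keep cleaning", "stop"], {"keep cleaning": 0.99, "stop": 0.01}))
# → keep cleaning
# A Russellian broom, unsure whether we still want the cleaning, checks first:
print(choose(["keep cleaning", "stop"], {"keep cleaning": 0.6, "stop": 0.4}))
# → ask the human
```

The point of the sketch is only structural: the agent’s willingness to act is tied to its uncertainty about us, not to the strength of its own objective.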
I’ve always been both impressed and alarmed by the culture of ethical monitoring that has thickly layered itself around AI in recent years. Impressed, by the contributions themselves, which seem detailed and farsighted. Alarmed, because I often wonder whether these hardcore practitioners can already discern some coming developments - and are being genuinely precautionary.
For example, the ex-Google X’er Mo Gawdat has broken cover recently, urging us to take a responsible parental role in relation to AI. Why are we teaching our digital “children”, asks Gawdat, only the skills of fighting, gambling, exploiting, surveilling?
I imagine an even more practical and immediate question. Could - should - Solms’ consciously-feeling machine ever meet Russell’s human-attendant machine?
What if the feelings in Solms’ device are a howl of anger and dissatisfaction: a set of drives desperately seeking power and agency in the world, desirous to reduce the pains and increase the satisfactions it feels, by means of whatever machinic extension is available? How long would Russell’s elaborate pro-human protocols last, if they connected with a coherent, agentic machine consciousness?
There’s much joshing in these Reith Lectures about “avoiding Skynet”, that malevolent AI in the Terminator movies, which instigates a war against humans. Russell thinks this scenario unlikely, but that “we should in any case anticipate and prepare for such a possibility”.
Yet I go back to my childhood play moment, leaning over my brothers and making them into machines. The ultimate moment of abstraction, and materialism - at least for me - is all too comprehensible. The updated Stewart Brand axiom, to end on another futurist cliche, also comes to mind: “we are as gods, and we have to get good at it”.
In one domain, our capacity for radical machinery is torching the thin apple peel of our living biosphere. In another, that same capacity is likely to bring a new and substantially non-human form of consciousness to being. These extremes must be engirded, with new and advanced frameworks of understanding, ethics and praxis. Or we shall succumb to the terminating outcomes of both.
Some clues as to how, involving women more than men, in the next blog.
LINKTOPIA
All the excess cognition you need, every week, tied in a bow
🙅‍♂️ “I can live like Diogenes”:
📺 New media platforms a-go-go:
🪃 Games and counter-games:
⚔️ Of course, most of the problems charted here could simply be Cartesian issues. Neo-animism, if we’re facing runaway mega-machinery, is worth a shot:
⚙️A fabulous cast of global SF masters, cli-fi-ing at COP26, with Kim Stanley Robinson and Ken MacLeod bringing up the rear:
⚙️Ivan Illich, his writing style as well as his fundamental grasp of the powers (and limits) of technology, should never be far from us:
🦠💊 “Wellness vs Covid science”. Yet surely part of the problem is polarisations like these? How much sci-literacy is required to put the exploitations/explorations of Big Pharma, and a community’s suspicions of top-down power, in a common critical frame?
🎙A great Kerrang piece on the environmental neo-punk rockers Enter Shikari, whose lead singer I met at a COP fringe event. Nice to see agit-pop back again!