Omnipedia #17: the AI ragtime plays on, Green Chuck (maybe), Oppenheimer A-Go-Go, and more
Including: become a digital fellow at Newspeak House; enter into William Blake's Golgonooza

Sluice-gates open, signs and portents beached up here. Hope it’s of interest. And if you wish to support this work directly…
The AI ragtime continues, and (like many of you, no doubt) I don’t know whether to be thrilled or terrified. On the thrill side, I can’t help (it’s my history and wiring) but respond positively to propeller-head optimism like this, from the CEO of OpenAI, and wizard of all these GPT portals of wisdom, Sam Altman:
Naomi Klein responds by recalling Altman’s survivalism. “Back in 2016, he boasted: ‘I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.’” It’s bracing to hear Klein apply her anti-capitalism and radical democracy to AI. Though I note that you could easily replace “AI” with “Branding” in many parts of her critique:
A world of deep fakes, mimicry loops and worsening inequality is not an inevitability. It’s a set of policy choices. We can regulate the current form of vampiric chatbots out of existence – and begin to build the world in which AI’s most exciting promises would be more than Silicon Valley hallucinations.
Because we trained the machines. All of us. But we never gave our consent. They fed on humanity’s collective ingenuity, inspiration and revelations (along with our more venal traits). These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances.
And their goal never was to solve climate change or make our governments more responsible or our daily lives more leisurely. It was always to profit off mass immiseration, which, under capitalism, is the glaring and logical consequence of replacing human functions with bots.
Noted, nodding. But I still feel that the current AI summer is weirder than a left-right, anarchy-authority framing. The question of sentient AIs - ones that feel and suffer, have pains and pleasures - throws up a swarm of moral questions.
If they pass the sentience threshold, whether emergently or by our explicit design, are we once again facing very old questions, about the enslavement of beings for our labours? [Note: there are actual Kenyan workers currently filling in data gaps for ChatGPT under near-slave conditions].
Or do we deliberately limit them to being only more powerful auto-completers, sampling all of human culture just to help us motivated imagineers start from higher ground, rising to new intellectual and expressive levels?
The Atlantic interview with a sociologist of sentience, linked above and here, helps us out:
Jacy Reese Anthis: …In psychology, there’s something called social-dominance orientation. It’s the tendency of a person to think that some groups of society can and should dominate others. It’s very heavily correlated with racism, sexism, and speciesism, meaning the belief in human superiority over nonhuman animals.
I do think caring more for some beings, any beings, leads a person to care a little more for all beings. In Buddhism, this mindset is called bodhicitta. It means universal compassion and concern for all sentient beings.
Interviewer: Do you see an argument for never creating sentient AI systems and never providing AI systems with rights?
Anthis: That’s very compelling. I think in an ideal world, if we had a better computational understanding of what is happening inside an AI that might be creating sentience, if we could prevent that from happening, I’d be sold on the idea.
As usual, I urge you to keep an eye out for the work of neuroscientist Mark Solms, who believes he has an experimental pathway to artificial consciousness. But according to Solms, it’ll be much more like an organism trying to maintain its balance and integrity in the world than like the ethereal, superior Spielberg-type mechas. And maybe we mammals know how to handle pseudo-mammals better.

I was on Sky News this week, declaiming my republicanism in the face of the Coronation, though (over) generously trying to find some value in a “Green King leading an age of Green Reform”. The LRB’s James Butler has partly recalled me to my senses:
There are plenty of queasy details about the traditionalist milieu that Charles finds stimulating, but its fundamental intuition is that modernity is an unnatural, desacralising catastrophe and human beings would be much happier returning to their cultural repositories of traditional knowledge, which aligns with the fundamental order of the cosmos.
This may seem an innocuous enough credo – not a million miles from some secular thinking on alienation – until you start to follow its implications: the division of human beings into distinct cultural-religious dispensations, the preference for mystery over knowledge, the bleaching of human history – with all its hybridities, misfirings, detours and sheer mess – into the unstained simplicity of cosmic order.
Worst of all may be the rancid implication that the social order is a given natural phenomenon and you really would be happier learning your father’s trade, not getting above your station, neither moving nor really changing, knowing your place in an inane and endless reproduction of unjustly distributed sameness.
Nice enough if you’re a king, less thrilling if you’re a subsistence farmer. Or, as Charles put it in 2003, in a private whinge against ‘social utopianism’, people think they could be ‘infinitely more competent heads of state without ever putting in the necessary work or having the natural ability’. Which does he think he has?
All taken. My anxiety these days, though, is whether majorities in Western populations still have the energy - or are simply too exhausted - for the “hybridities, misfirings, detours and sheer mess” of modernity. If it gets to power, the ontological conservatism of the Starmer Labour Party - promising nearly no change from the Tory offer, and at least making nothing worse - may synchronise well with such a Prince of stasis (read: “harmony”).
On Sky, I indulged in fantasies of Charles either transforming his assets to become exemplars of a Just Transition, or divesting them into commons that might do the same. At the same time, I wondered if this could provide cover, or at least tradition-wreathed credibility, for a centrist Labour government, which, like all governments, will need to prepare voters for the extreme lifestyle shifts that climate worsening demands. Yeah, I know… Blame it on being up too early and high on Lavazza.
I welcome any opportunity for us frantic mammals to dwell seriously on our capacity for self-termination (avoiding it would be nice, cosmically). Maybe director Christopher Nolan will get us out to the cinemas on that ticket, with his new film about Robert Oppenheimer, father of the Hiroshima and Nagasaki bombs (trailer above). Nolan increasingly reminds me of Kubrick, sharing a view that humans and their machines are in a state of permanent overlap, with moral force and agency swirling between them. No bigger example of that theme than here.
I like this snapshot, which I think is supposed to be Enrico Fermi, answering Oppenheimer’s claim that the Bomb will herald an “era of peace”. That is…
One answer to Fermi’s paradox, of course, is that we haven’t been contacted by alien civilisations because they blow themselves up - self-terminate - once they reach a certain level of world-shaping technology. To me that’s still, as Ezra Pound used to say about poetry, “the news that stays news”.
Alert to those of you who may wish to dwell on the end points and political impacts of radical technologies, in a congenial Shoreditch London residency, amidst a milieu open to your projects and able to surround you with high-talent peers… Apply for Newspeak House’s Fellowship programme.

I like Daniel Pinchbeck’s brusque yet psychedelic urgency:
For example:
I tend to believe that, to galvanize a huge systemic transition, we need to point humanity in a new direction. We need to start imagining/promoting how much more amazing and incredible our world could be, if, for instance, we did away with extreme inequality and redesigned our social and economic systems to maximize cooperation rather than competition. This is where, I think, we need to return to visionaries like William Blake, who saw the imagination, in itself, as the essence.
Apparently William Blake imagined a world called Golgonooza:
A vision of London, as well as Jerusalem, Golgonooza is a model for all internal human and geographical correspondences as ‘microcosm to macrocosm’, and is symbolized as a skull.
In The Marriage of Heaven and Hell, Blake recognized the tremendous dynamism that comes from reconciling opposites: “Without Contraries is no progression. Attraction and Repulsion, Reason and Energy, Love and Hate, are necessary to Human existence,” he wrote…
That seems to me how we need to approach AI: The technology has tremendous capacity for abuse and destruction. It also has incredible potential to reinvent human society, to help us tackle our most intractable social and ecological issues, to liberate humanity from scarcity and drudgery. I understand all of the reasons to be pessimistic or freaked out by it, but I also can’t deny its promise.
As dangerous as it is, it could be humanity’s once-in-a-conscious-species’-evolutionary-trajectory ticket to Golgonooza.
Ok, enough flow-through… Please send material we should be covering here. If you’d like versions of this thinking as strategic insight/foresight for your organisation, project, product or service, go here www.patkane.global. And to support this writing…