Omnipedia #5
Acquiring hope, tech confederations, AI weirdness & AC feelings, Universal Basic Happiness, and much more
It’s been a long pause - music-making, house-moving, general ontological bemusement - but I’m hopefully back to at least a fortnightly frequency. Below, some thoughts around some links, which round here we call an “Omnipedia”.
Roberto Unger on hope: “Agency is crucial: that we can have the experience of turning the tables on our circumstances. The passive position generates a contemplative, fatalistic attitude. It’s by engagement, by acting (intellectual or practical), that we acquire hope.” From this 2018 seminar with Cornel West [stay with Cornel for the whole beginning, he is shamanically wise on Freud]:
Such an excellent question:


AI is the weirdness right in front of us, from Noema:
“I don’t want to talk about sentient robots, because at all ends of the spectrum there are humans harming other humans,” a well-known AI critic is quoted as saying. We see it somewhat differently. We do want to talk about sentience and robots and language and intelligence because there are humans harming humans, and simultaneously there are humans and machines doing remarkable things that are altering how humans think about thinking.
Reality overstepping the boundaries of comfortable vocabulary is the start, not the end, of the conversation. Instead of a groundhog-day rehashing of debates about whether machines have souls or can think like people imagine themselves to think, the ongoing double-helix relationship between AI and the philosophy of AI needs to do less projection of its own maxims and instead construct more nuanced vocabularies of analysis, critique, and speculation based on the weirdness right in front of us.
Yugoslavian “spomeniks” - large, ugly, modernist celebratory structures. This Twitter thread shows many of them, and warns us against “decontextualising” them (ie, we shouldn’t disconnect them from the public bullshit of actually existing Soviet socialism). But you gotta admit…
The big problem that UBI might solve - not a bad one-graphic shot at it (from Scott Santens). Note also this from the RSA on UBI and youth mental health:
Can consciousness be created? A specialist but powerful debate between Mark Solms and Michael Levin (below). This comes at the AI question from quite a different space - not that it’s weird, but almost that it’s evolutionarily banal. Consciousness functions to help an organism adapt to its challenging environment, and that occurs through feelings - you’re averse to, or attracted to, this or that, as these experiences might aid survival. Not so much about data-crammed machine learners modelling themselves on their human interlocutors, but about boundaried little box robots “feeling” their way through the world. Sadly or happily, or both, or more? Maybe when ACs get going, and plug themselves into their voluble AI kin, we’ll be clearly told.
You can agree it’s very, very bad, but you don’t then have to be a climate “doomist”: a useful collection of articles, enabling you to slither out of bed any given morning. Though you may have to read this - with its earth-incinerating graphic - as counter-roughage.
“Winning the war before the war”, from the War on the Rocks blog. Just what we need: a new model of “cognitive warfare”. And pardon me, but I thought that’s what advertising and comms already were in a capitalist society…
It is 2050, and society is divided into an archipelago of community-based alternative reality zones. The French armed forces are tasked with “securing reality” in the face of an adversary capable of modifying collective behavior on a large scale through actions of deception and subversion. This was the scenario proposed last summer to the French Ministry of the Armed Forces by a “Red Team” program that links science-fiction authors with the military. This might just seem like an amusingly imaginative exercise, but the notion of “cognitive warfare” is gaining momentum in strategic thinking. But what does this concept even mean? Does it herald a new way of warfare? Or is it just old wine — psychological or influence operations or “information warfare” — in new bottles?
There is something useful in this notion. Cognitive warfare is a multidisciplinary approach combining social sciences and new technologies to directly alter the mechanisms of understanding and decision-making in order to destabilize or paralyze an adversary. In other words, it aims to hack the heuristics of the human brain in an attempt to “win the war before the war,” echoing the strategic vision of French Chief of the Defense Staff Gen. Thierry Burkhard.
It’s been a while, and I’ve been writing elsewhere…
For The National (Scotland)
UK recession - what happens when growth is no longer available to solve our issues?
How a Glasgow philosopher leads a global conversation on technological ethics
“My Old School”, the documentary on the “fake” teenage pupil Brandon Lee, is full of paradoxes
Are all our current crises pointing us “back to the 70s”? Depends on your version of the 70s…
For New Thinking (US)
For The Daily Alternative (Global)
Ok, back in the saddle and riding things…
BTW: I am hoping to use this space to start testing out my ideas for a forthcoming book - essentially, a sequel to The Play Ethic, which will have its 20th anniversary in 2024. I will place the material behind a paid tier, and would really appreciate your support. All who contribute will be credited in the book, receive a discount on the final result, and get extras (podcasts, forums, etc). Any thoughts or suggestions on this are most appreciated - please use the comments below.
Ad astra! best, Pat Kane x