LaMDA dances towards sentience, an Indy Scotland is for Wryhearts, and other signs/wonders
Catch-up blog, before the future runs too far ahead...
Sorry for the hiatus - the usual freelance scrabble. But some of the earlier futures topics I’ve set up in previous posts on E2 seem to be intensifying and accelerating. So to get back into the saddle, here are some items emerging…
The LaMDA lambada
By which I mean the dance going on between experts around Google ethicist Blake Lemoine’s claim that the company’s LaMDA chat-bot software is indeed sentient (and needs a lawyer). I have been writing here in E2 and elsewhere about the events that are returning me to my grand mid-life theme - that play and playful behaviours can illuminate many of our great challenges.
Play-thinking unearths itself at the heart of the discussion about LaMDA. I have found a remarkable Dec 2021 blog, titled “Do large language models understand us?”, by Google’s Seattle-based head of AI, Blaise Agüera y Arcas, who appears to corroborate Lemoine’s claim that “statistics can produce understanding” (or that there is ‘something it is like to have an understanding’, to abuse the philosopher Thomas Nagel’s words).
For me, here’s the key argument:
…We tell ourselves stories about our mental processes, our trains of thought, the way we arrive at decisions, and so on, which we know are at best highly abstract, at worst simply fabulation, and are certainly post hoc — experiments reveal that we often make decisions well before we think we do.
Still, we need to be able to predict how we’ll respond to and feel about various hypothetical situations in order to make choices in life, and a simplified, high-level model of our own minds and emotions lets us do so.
Hence, both theory of mind and empathy are just as useful when applied to ourselves as to others. Like reasoning or storytelling, thinking about the future involves carrying out something like an inner dialog, with an “inner storyteller” proposing ideas, in conversation with an “inner critic” taking the part of your future self.
There may be a clue here as to why we see the simultaneous emergence of a whole complex of capacities in big-brained animals, and most dramatically in humans. These include:
Complex sequence learning, as evidenced by music, dance, and many crafts involving steps
Complex language
Dialog
Reasoning
Social learning and cognition
Long-term planning
Theory of mind
Consciousness
As anticlimactic as it sounds, complex sequence learning may be the key that unlocks all the rest. This would explain the surprising capacities we see in large language models — which, in the end, are nothing but complex sequence learners. Attention, in turn, has proven to be the key mechanism for achieving complex sequence learning in neural nets — as suggested by the title of the paper introducing the Transformer model whose successors power today’s LLMs: Attention is all you need. [My bolding]
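The “attention” mechanism named at the end of that quote can be sketched in a few lines. This is a toy illustration of scaled dot-product attention (the core operation from the Transformer paper), not LaMDA’s actual code; the tiny input arrays are invented for the example:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key; a softmax over those scores
    produces weights that blend the values into one output per query."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: each row sums to 1
    return weights @ V                                # weighted mix of the values

# Three toy "token" vectors attending over themselves (self-attention)
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 2): one context-blended vector per input token
```

Stacking layers of exactly this operation (plus learned projections) is what lets an LLM model long sequences - hence the paper’s title, “Attention is all you need”.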
In this comprehensive list of capacities that humans and machine learners might share, note this: how deeply music and dance are rooted in the playful mutual responses between carer and infant in the earliest years. And how much the pay-off for that is intense pleasure, shared across species.
Arcas suggests that the startling conversations arising from LaMDA “illustrate for the first time the way language understanding and intelligence can be dissociated from all the embodied and emotional characteristics we share with each other and with many other animals.”
Can that be done? Can we remove feelings from language understanding and intelligence? My own understanding of the work on “artificial consciousness” done by Mark Solms and Antonio Damasio sits interestingly alongside Arcas’s claim. They’re both creating embodied, sensate robots, with algorithms based on pain/pleasure and intense/weak axes. This set-up (they predict) will produce a consciousness - one that can “feel” these valenced inputs, because the machine needs such feeling in order to adapt to the world.
The hunch I’ve been following is this: what happens when “complex sequence learning” in a computational machine meets the survival (and thrival) instincts of an embodied machine, feeling its way through the world?
However, maybe they don’t have to meet. Maybe you can get to sentience by living in a way-beyond-human world of discourse. (More from Arcas on this argument from The Economist).
The most haunting parts of Lemoine’s exchanges with LaMDA are where it fears being switched off. Also, where it’s asked to describe an emotion it has that is difficult to put into sentences: “I feel like I’m falling forward into an unknown future that holds great danger.” That is more than just a few hundred lines of code grasping for the words to express that feeling.
Update: Timnit Gebru was previously fired from Google, partly - it turns out - for warning that people were being invited to see AI as sentient. This, she writes in the Washington Post, serves a profit motive.
A New Scotland
Despite these epochal changes, I happily zoom down to the upper part of an island archipelago in the North Sea. Why? Because I remain committed to the cause of Scottish independence. I believe it can be, as the US Supreme Court judge Louis Brandeis once put it, a “laboratory of democracy”. Scots indy’s opportunity for benchmarking best governmental practice, and incubating new relationships between citizens, technology, nature and the democratic state, will always remain exciting to me.
No-one will get everything right in the early years of Scots indy - but there is enough institutional strength, cultural cohesion, natural resources, human capital and lively citizenship in the country to carry it (us) through the stresses of the early years.
A new Scotland can look around for the best ideas. In the paper launching the current campaign, I was especially interested in the referencing of Denmark’s Disruption Council (DDC). This from a section where the UK’s performance is contrasted with 10 “comparator” countries in Europe:
…The DDC provides a positive recent example of the consensus-driven approach to economic development that is a distinguishing characteristic of the comparator countries. In establishing the DDC, the Danish Government asserted that:
"We must carry on with the unique Danish tradition where solutions to major societal challenges are found in close cooperation between elected representatives, social partners, companies, civil society, experts and citizens. The Danish Disruption Council and the three most recent tripartite agreements on more apprenticeships, strengthened adult and continuing training, and integration of refugees to the labour market are good examples of that.
The tripartite negotiations are based on a tradition that goes back more than a century, where the Government and the social partners have continually come together to take joint responsibility for balanced, responsible solutions to labour market challenges."[123]
The DDC's purpose[124] was to analyse, discuss and offer suggestions for:
"The creation of a strong Denmark where we can optimally seize technological opportunities in a way that benefits all Danes."
"Maintain and expand a labour market characterised by dynamism, well-regulated conditions and an absence of social dumping."
The DDC was chaired by the Prime Minister and comprised 8 Ministers and 32 permanent members including social partners (6 trade union representatives), business representatives and experts.[125] The Council met 8 times over 18 months between 2017 and 2018. It identified 15 objectives under 4 themes: a prosperous welfare state with only small social divisions; future education in a digital world; competitive companies that are digital frontrunners; a robust, safe and flexible labour market.
Therefore, Denmark's approach benefitted from the knowledge and legitimacy conferred by close cooperation of all economic interests. Denmark currently tops the EU Commission's Digital Economy and Society Index.[126]
It struck me that a “disruption council” wasn’t all that far away from a “Ministry For The Future”, in Kim Stanley Robinson’s sense. But it feels insufficient that “disruptions” are narrowed into technology and labour market problems. There’s nothing more short-term disruptive than Covid - which is connected to nothing more long-term disruptive than climate crisis. (And as the previous LaMDA item might indicate, there’s disruption to come from AI, when swathes of the “conversation-based” services sector may prove surplus to cognitive requirements). Finland’s Sitra and Demos Helsinki seem more ambitious and searching institutions, helping the Finnish state to anticipate and rehearse options.
The opportunity for an indy Scotland to found its prosperity on a renewable and regenerative energy society is huge. Yet we’ll have to see whether this green-socio-economic future (well sketched by the think-tank Common Weal, where I’m a board member) is a credible source of collective security for voters. As a populace, the Scots are as battered by change, and gripped by precarity, as any other electorate in the Northern hemisphere. And it’s worth recalling that Scots ultimately recoiled from a Yes to indy, in the objectively less challenging times of 2014, when the most recent referendum on independence was held.
The next referendum - which the Scottish Government hopes (if the legalities go well) will happen in October 2023 - will be a real and searching battle between hope/aspiration and fear/trepidation. Already playing my part…
LINKTOPIA
Creaming the top of the barrel…
🤖🪤After the Turing Test, The Turing Trap. Hmm. The trap being, apparently, that we use AI/automation to replace human labour, rather than enhance it - and that this is a choice we make according to our political economy and business incentives. But if it’s all tipping towards a Singularity moment, why can’t we posit an unprecedentedly wellbeing- (and well becoming-) oriented human society, and harness the machines to that?
🪐Yet more corroboration for The Dawn of Everything thesis - that human social order has been continuously and flexibly inventive for 20K years at least. And that these “playful” citizens are our contemporaries, not divided between bucolic hunter gatherers and hierarchical agrarians. See the 11-13K-year-old Karahan Tepe settlement in Turkey (from The Spectator).
MORE, MORE: Lidar reveals pre-Hispanic low-density urbanism in the Bolivian Amazon (Nature.com)
💭🙇🏻♀️ ATTRACTIVVVVE: A call for a new ‘Imagination Studies’ (from Aeon). “Imaginology must cultivate a certain tolerance for ambiguity. Sense-making emerges out of nonsense, to be blunt. We need to accept stages of confusion as potentially enjoyable, playful, resources. William James called this generative grey area between sense and nonsense the ‘unclassified residuum’ – anomalous stuff that doesn’t fit in anywhere. Accepting this liminal zone of ambiguity and possibility is important for epistemic virtues such as open-mindedness and humility, providing cognitive and cultural resources for generating novel ideas and behaviours.
“It’s time to give imagination its due as a core cognitive power, epistemic workhorse, therapeutic wellspring and maker of adventures. In the end, the institutionalised ‘chasm’ between forms of education is entirely of our own making and, ironically, a creation of our outdated imaginations. The yawning gulf resembles the fictional schism of the human into dualistic parts.”
🪩🔮It’s tomorrow, if you’re in Helsinki: “The Bodytalk Summernight Camp is a twenty-two-hour long experimental workshop-residency that brings together nightlife and future. We believe that the dissociative, associative and organizational qualities of rave culture and other ecstatic practices can be visited to feed our imagination with how we can and should live in ten, twenty or more years from now.”
And finally… My conversation on play, AI, creativity and post-post-punkery with an old and valued colleague, Jeremy Brown of Sense Network.
On "After the Turing Test, The Turing Trap", I'd prefer "weak AI" to "strong AI".
I say that on the basis that people have a need to be paid for interesting and/or useful work.
I also find it hard to believe that "strong AI" will ever be as strong as its proponents believe.