Thursday, 2 April 2026

LOVE AND HAIDT

Dick Pountain /Idealog 374/ 3rd September 2025 : 10:36am 

It becomes harder and harder to scrabble for grains of online pleasure, amusement or edification, but it remains possible (just) on YouTube. I particularly enjoy two grizzled performers: Rick Beato (whose interviews with musicians like Rick Rubin, Guthrie Trapp and Tom Bukovac are priceless) and Jon Stewart, the satirical political commentator whose late-night Daily Show kept many of us in stitches during the GW Bush presidency. Stewart attempted to retire in 2015 but he’s back, presumably lured by the grim shenanigans in the White House, presenting a new YouTube version of the Daily Show on Mondays and, on Thursdays, an in-depth podcast called The Weekly Show, where a recent guest was the social psychologist Jonathan Haidt.

Haidt is currently controversial for his best-selling book alleging that smartphone overuse is damaging young people’s mental health, but I know and admire his work from reviewing an earlier book called ‘The Righteous Mind’ (2012), an experimental study of the way people’s moral outlook affects their political behaviour. Haidt is a leading light of the ‘Intuitionist’ school of psychology, which holds that not all our behaviour is rational, and in particular that moral judgements like disgust are hard-wired to bypass the reasoning parts of the brain. (His thought-experiments to test this are highly amusing but unsuitable for a family magazine like this, involving incest and molested chicken dinners.) The reason I raise his work in this column is another book I just reviewed, Karen Hao’s ‘Empire Of AI’, an inside glimpse into the rise of OpenAI and ChatGPT.

Hao documents three important facts about the company: unanimous agreement that AGI (Artificial General Intelligence) is possible and the only worthwhile goal; belief that AGI will be achieved by endless ‘scaling’, cramming tens of thousands, even millions, of Nvidia GPUs into their servers; and a split, right from the very start, between those who think AGI will be great and those who think it will be deadly (many of whom left to set up Anthropic and its Claude models). Now I’ve stated here several times that I believe AGI is neither desirable nor possible, for reasons that depend upon the work of Haidt among others. If it’s not achievable we won’t face the worst of the many imagined harms, like enslavement by robots, but it does make the current monomaniacal hyperscaling futile, dangerous and horribly wasteful.

Impressive, amusing and addictive as current LLMs and GPTs are, they fall far short of general intelligence because they’re not alive. Unlike Nvidia chips, living beings need food, safety and to reproduce themselves, and these imperatives structure our thought and behaviour profoundly. Billions of years of evolution equipped us with ‘emotions’, chemical computational sub-systems that detect and seek to satisfy needs. The US/Portuguese neuroscientist Antonio Damasio postulates that when we store a memory of an event it somehow gets imprinted with the emotional/hormonal state at the time (via biochemistry that is as yet barely understood). When we retrieve it later to help understand some future event, these emotional markers act as weights (like parameters in an LLM) and contribute to the outcome of our decision. So images and words can never be entirely neutral; they carry subconscious emotional connotations of varying strengths. AI models lack needs and fragile bodies, and hence purpose. Actually smartphones, which can ‘see’ and ‘hear’, know their location and orientation, and can travel the world in our pockets (so long as we remember to charge them), are far closer to human experience than ChatGPT is. Equipping one of those highly capable Boston Dynamics robots with a fully autonomous AGI must remain science fiction so long as GPTs require aircraft-hangar-sized supercomputers and consume megawatts of electricity. Our own bodies have a mitochondrial ‘battery’ in every cell, enabling us to think and/or reproduce ourselves on around 2,000 calories a day…
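Damasio’s idea can be caricatured in a few lines of code. This is a toy sketch of my own, not his model: each stored memory carries an emotional ‘valence’, and retrieval ranks memories by cue relevance plus that emotional weight, loosely as parameters bias an LLM’s output. All the names, events and numbers are invented for illustration.

```python
# Toy sketch of emotional markers acting as retrieval weights.
# Illustrative only -- not Damasio's actual model, nor any real AI system.

def recall(memories, cue, emotion_gain=0.5):
    """Rank memories by cue overlap plus emotional weighting."""
    def score(mem):
        overlap = len(set(mem["tags"]) & set(cue))  # crude relevance measure
        return overlap + emotion_gain * abs(mem["valence"])
    return sorted(memories, key=score, reverse=True)

memories = [
    {"event": "dog bite",      "tags": ["dog", "park"],  "valence": -0.9},
    {"event": "picnic",        "tags": ["park", "food"], "valence": +0.4},
    {"event": "bus timetable", "tags": ["bus"],          "valence":  0.0},
]

# A neutral cue ("park") still surfaces the frightening memory first,
# because its strong negative valence boosts its retrieval score.
ranked = recall(memories, cue=["park"])
print([m["event"] for m in ranked])  # -> ['dog bite', 'picnic', 'bus timetable']
```

The point of the caricature: even a ‘neutral’ cue never retrieves neutrally, because the emotional weight is baked into every stored item.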

Cognitive psychologists and economists like Haidt and Kahneman have revealed that emotional modes of intuitive thought aren’t reducible to either symbolic logic or Turing computability, and that these mechanisms drive attraction and enmity, friendship or bias and prejudice. They underpin crucial affective human virtues like empathy, wisdom, justice, courage, honesty, compassion and generosity, without which any aspiring AGI would merely be a sociopathic silicon solipsist. And most importantly, intuition is vital for creative reasoning, causing those unprecedented leaps between vastly differing conceptual spaces that make up the mind of a Newton, a Mendeleev or an Einstein. The training data for connectionist AI models contains only representations of mental states – text and pictures scraped from the internet – and what emotional weighting it does aggregate is mostly bad news, a swamp of hateful and obscene human communication that costs the AI corporations big bucks, hiring human beings to painstakingly disinfect it in procedures they call ‘alignment’ and RLHF (reinforcement learning from human feedback)...

[Dick Pountain likes ChatGPT, but in a purely platonic way]