Dick Pountain /Idealog 367/ 06 Feb 2025 09:12
I used to think that a monthly column was a fairly relaxed schedule compared to, say, a daily newspaper, but no longer. I’d decided to do this one about how China upset the USA by doing AI on the cheap, but now every ten minutes I feel a need to check online for whatever new geopolitical atrocity has just overshadowed that. Nevertheless I’ll start with a nod to the original plan, how China pulled down the knickers of the US AI bubbleheads.
I won’t dive deep into the tech details of how DeepSeek succeeded in doing what ChatGPT does for a fraction of the price, or how it rocketed to the top of Apple’s mobile-app store hit parade, or how it did so by parasitising the US AI bros’ own data in just the same morally and legally unsavoury way they got it from us in the first place. No, instead I’d prefer to harp on about something I’ve been harping on about for at least 20 years, namely how the whole AI industry deludes itself because, being led by sci-fi-addled nerds (one of whom now appears to be the de facto POTUS), it has a severely limited grasp of biology and philosophy.
Two columns ago I forcefully expressed my opinion of OpenAI’s plans for continued expansion in order to achieve AGI (Artificial General Intelligence), which they claim would confer human-level reasoning. One objection was its colossal, antisocial power requirements, but my real objection is that I don’t believe AGI is even achievable by simply crunching more data. That’s for reasons of biology I’ve explored here many times, namely that though human intelligence expresses itself through language – by manipulating symbols, which is all any computer can do – language is neither its only nor its most important source.
We’re animals who have been equipped by evolution to succeed at living in a physical world, achieved with the help of many (more than five) senses to sample what’s going on around us. We build, continuously update and maintain a mental model of that world. We have needs – including to eat, drink, reproduce and avoid predators – which are intimately entwined into that model. We’re born with some built-in knowledge about gravity, upness and downness, light and shade, convexity and concavity, that control the model in ways of which we’re not conscious, but which deeply affect our symbolic processing of that world. We’re by no means just ‘rational’. AI has learned how to pretend to be intelligent only by plundering our symbolic representations of the world, texts and pictures, but knows nothing, and can know nothing, of our embodied experience. Sure, it could build imitations of emotions and needs, but they’d just be more static representations, not extracted from the real world by living in it.
Historically, computers arrived thanks to advances first in mathematics and then in electronic engineering, so it’s hardly surprising that the intellectual atmosphere in which they’re embedded is more influenced by science fiction than by philosophy, anthropology or cognitive psychology. It may well be too late now for AI practitioners to go back and do the necessary reading, since they’ve reached a level of megalomania that convinces them they already know it all, and have just achieved power over the world’s most powerful nation to prove it.
Were I asked to set the tech bros some homework, I’d recommend first of all my favourite philosopher George Santayana and his theory of ‘animal faith’, which enables us to navigate life’s uncertainties by deploying our intrinsic knowledge rather than ‘overthinking’. That leads directly into the more modern version from Nobel Laureate Daniel Kahneman and his account of two modes of thought: the fast, imprecise-but-often-good-enough one, and the slow symbolic one, which is all that computers can mimic. Then perhaps I’d suggest George Lakoff and Mark Johnson’s ‘Metaphors We Live By’, which explores the actual content of our intrinsic embodied knowledge and how it modulates our language. Oddly enough, smartphones are already more embodied than GPTs because they have senses (hearing, sight, touch, spatial orientation) and hunger (for charging). Fortunately they can’t yet reproduce.
Having absorbed all that, then perhaps they might dip into Chris Frith’s ‘Making Up The Mind’, the best explanation I’ve read of how the brain creates and updates the mind using fundamentally Bayesian rather than Cartesian mechanisms. That ought to convince them that they don’t (or, more likely, don’t want to) know it all, but the final step would be by far the most difficult: to get them to take democratic politics seriously and divert their megalomaniac schemes toward improving life for the majority of the population rather than a feckless techno-elitist minority. Of course they may prefer to go to Mars, which would provide a rigorous education in embodiment…
[Dick Pountain hopes that Elon doesn’t read his column]