Tuesday 11 June 2019

HIT THE PANIC BUTTON?

Dick Pountain/ Idealog292/ 2nd November 2018 13:38:17

According to a recent Microsoft press release, their research indicates that almost half of British companies think that their current business models will cease to exist in the next five years thanks to AI, but 51% of them don’t have an AI strategy. While I could describe that as panic-mongering, I won’t. It’s more like straightforward marketing: since Microsoft is currently heavily promoting its AI Academy, AI development platforms and training courses, it’s merely AI bread-and-butter. But the idea of subtly encouraging panic for economic ends is of course as old as civilisation itself.

In his fascinating book ‘On Deep History and the Brain’, US historian Daniel Lord Smail describes the way that all social animals - from ants to wolves to bonobos to humans - organise into societies by deliberately manipulating the brain chemistry of themselves and their fellows. This they do by a huge variety of means: pheromones; ingesting drugs; performing dances and rituals; inflicting violence; and, for us humans, telling stories (including stories on Facebook about AI). It’s recently been discovered that bees and ants create the division of labour that characterises their societies - queens lay eggs, drones fertilise them, workers and soldiers do everything else - by a remarkably simple mechanism. The queen emits pheromones that alter insulin levels in her ‘subordinates’ (though it’s arguable that she’s actually their prisoner), which changes their feeding habits and body type.

And stories do indeed modify the brain chemistry of human listeners, because everything we think and say is ultimately a matter of brain chemistry: that’s what brains are, electro-chemical computers for processing experience of the world. The chemical part of this processing is what we call ‘emotion’, and the most advanced research in cognitive psychology is revealing more and more about the way that emotion and thought are inseparably intertwined. Which is why AI, despite all the hype and panic, remains so ultimately dumb.

All animals (and plants too) have perceptual systems that sample information from their immediate environment. But animals also have emotions, which act like co-processors that inspect this perceived information to detect threats and opportunities. They attach value to the perceived information - is it edible, or sexy, or dangerous, or funny? - which is something that cannot easily be inferred from a mere bitmap. The leading affective neuroscientist Jaak Panksepp discovered seven emotional subsystems in the mammalian brain, each mediated by its own system of neuropeptide hormones: he called them SEEKING (dopamine), RAGE and FEAR (adrenaline and cortisol), LUST and PLAY (sex hormones), CARE and PANIC (oxytocin, opioids and more). Neuroscientist Antonio Damasio further proposes that memories get labelled with the chemical emotional state prevailing when they were laid down, so that when recalled they bring their values with them: good, bad, sad, funny and so on.
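Damasio’s proposal can be caricatured in a few lines of code. This is purely an illustrative toy - the names, structure and numbers are my own invention, not anything from neuroscience - but it shows the gist: a memory record stores not just what happened but the affective state and value attached at the time, so recall returns content and value together.

```python
# Toy sketch of Damasio's idea: memories carry the emotional label and
# value attached when they were laid down. All names and numbers here
# are invented for illustration only.

from dataclasses import dataclass

# Panksepp's seven systems, paired with the hormones the column mentions
AFFECT_SYSTEMS = {
    "SEEKING": ["dopamine"],
    "RAGE":    ["adrenaline", "cortisol"],
    "FEAR":    ["adrenaline", "cortisol"],
    "LUST":    ["sex hormones"],
    "PLAY":    ["sex hormones"],
    "CARE":    ["oxytocin", "opioids"],
    "PANIC":   ["oxytocin", "opioids"],
}

@dataclass
class Memory:
    content: str    # what happened
    affect: str     # emotional state prevailing when stored
    valence: float  # crude good/bad score, fixed at storage time

    def recall(self):
        # Recall returns the content together with its stored value,
        # never the raw "bitmap" alone
        return self.content, self.affect, self.valence

episode = Memory("found ripe fruit", affect="SEEKING", valence=0.8)
print(episode.recall())  # content comes back already value-laden
```

The point of the toy is simply that the value is welded to the memory at write time, not computed afresh at read time - which is precisely what a conventional AI system, storing unlabelled data, does not do.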

AI systems could be, probably will be, eventually enabled to fake some kinds of emotional response, but in order to really feel they’d need to have something at stake. Our brains store a continually updated model of the outside world, another of our own body and its current internal state, and constantly process the intersection of these two models to see what is threatening or beckoning to us. Meanwhile our memory stores a more or less complete record of our lives to date, along with the values of the things that have happened to us. All our decisions result from integrating these data sources. To provide anything equivalent for an AI system would be staggeringly costly in memory and CPU: the most sophisticated self-driving vehicle is less than a toy by comparison.

Which is not to say that AI is useless, far from it. Just as simpler computers excel at arithmetic or graphics, AI systems can excel at kinds of reasoning at which we are slow or error-prone, precisely because of the emotional content of our own reasoning. Once we stop pretending that they’re intelligent in the same way as us (or ever can be), and acknowledge that they can be given skills that complement our own, then AIs become tools as essential as spreadsheets are now. The very name Artificial Intelligence positively invites this confusion, so we’d perhaps do better to call it Artificial Reasoning or something like that.

And we need to stop pressing the panic button and acknowledge these limits of AI. If you design an AI that fires people in order to increase profits, it will. If you design it to kill people, it will. But the same is true of human accountants and soldiers. Lacking emotions, an AI can never have its own interests or ambitions, so it can never be as good or as bad as we can. And if we fail to fit it with a fail-safe off switch then it’s our own stupid fault.





