Tuesday 29 May 2018

AI NEEDS MORE HI

Dick Pountain/Idealog 281/05 December 2017 11:32

I got suckered into watching 'The Robot Will See You Now' during Channel 4's November week of robot programmes, and shoddy though it was, it conveyed more of the truth than the more technically correct offerings. Its premise was a family of hand-picked reality-TV stereotypes being given access to robot Jess (whose flat glass face looked like a thermostat display but was agreed by all to be 'cute'), whom they consulted about their relationship problems and other worries. The crunch was that C4 admitted Jess operates 'with some human assistance', which could well have meant somebody sitting in the next room speaking into a microphone... 

Never mind that: the basic effect of Jess was immediately recognisable to me as just ELIZA in a smart new plastic shell. ELIZA, for those too young to know, was one of the very first AI natural language programs, written in 1964 by Joseph Weizenbaum at MIT (and 16 years later by me while learning Lisp). It imitated a psychiatrist by taking a user's statements and turning them around into what looked like intelligent further questions, while having no actual understanding of their meaning. Jess did something similar in a more up-to-date vernacular (or perhaps that was BigJess in the next room at the microphone). 
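
To give a flavour of the trick, here is a minimal sketch in Python rather than Weizenbaum's original code or my Lisp rewrite, with a few made-up reflection rules standing in for his much larger script. The whole effect rests on nothing more than keyword matching plus pronoun-swapping:

```python
import re

# A handful of illustrative ELIZA-style rules: match a keyword pattern,
# swap first- and second-person words, and hand the statement back as a
# question. Weizenbaum's real script had far more rules and ranking logic.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)",        "Please go on."),   # catch-all keeps the mirror turning
]

def reflect(fragment):
    """Swap pronouns so the echoed fragment addresses the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    """Return the first matching rule's question, with the fragment reflected."""
    text = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody listens to my worries"))
# -> What makes you feel nobody listens to your worries?
```

There is no model of the world anywhere in this loop: the apparent insight is supplied entirely by the patient reading meaning into their own words coming back at them.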

Both Jess and ELIZA actually work, because they give their 'patients' somebody neutral to unload upon, someone non-threatening and non-judgemental - unlike their friends and family. Jess's clearly waterproof plastic carapace encouraged them to spill the beans. Neither robot doctor needs to understand the problem; each merely acts as a mirror in which the patient talks themselves into solving it. 

Interacting with Jess was more about emotion than about AI, which points up the blind alley that AI is currently backing itself into. I've written here several times before about the way we confuse *emotions* with *feelings*: the former are chemical and neural warning signals generated deep within our brains' limbic system, while feelings are our conscious experience of the effects of these signals on our bodies, as when fear speeds our pulse, clarifies our vision and makes us sweat. These emotional brain subsystems are evolutionarily primitive and exist at the same level as perception, well below the language and reasoning parts of our brains. Whenever we remember a scene or event, the memory gets tagged with the emotional state it produced, and these tags are the stores of value, of 'good' versus 'bad'. When memories are retrieved later on, our frontal cortex processes them to steer decisions that we believe we make by reason alone. 

All our reasoning is done through language, by the outermost, most recent layers of the brain that support the 'voice in your head'. But of course language itself consists of a huge set of memories laid down in your earliest years, and so it's inescapably value-laden. Even purely abstract thought, say mathematics, can't escape some emotional influence (indeed it may inject creativity). Meaning comes mostly from this emotional content, which is why language semantics is such a knotty problem that lacks satisfactory solutions - what a sentence means depends on who's hearing it.

The problem for AI is that it's climbing the causality ladder in exactly the opposite direction, starting with algorithmic or inductive language processing, then trying to attach meaning afterwards. There's an informative and highly readable article by James Somers in the September MIT Technology Review (https://medium.com/mit-technology-review/is-ai-riding-a-one-trick-pony-b9ed5a261da0) about the current explosion in AI capabilities - driverless cars, Siri, Google Translate, AlphaGo and more. He explains that they all involve 'deep learning' from billions of real-world examples, using computer-simulated neural nets based around the 1986 invention of 'back-propagation' by Geoffrey Hinton, David Rumelhart and Ronald Williams. He visits Hinton at the new Vector Institute in Toronto, where researchers show him that they're finally able to decipher the intermediate layers of multilayer back-propagation networks, and are amazed to see structures spontaneously emerge that somewhat resemble those in the human visual and auditory cortexes. Somers eschews the usual techno-utopian boosterism, cautioning us that "the latest sweep of progress in AI has been less science than engineering, even tinkering [...] Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way". All these whizzy AI products exhibit a fragility quite unlike the flexibility of HI (human intelligence). 
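
For readers who want a feel for what back-propagation actually does, here is a toy sketch, nothing like the industrial-scale systems Somers describes; the layer sizes, learning rate and the little XOR task are arbitrary choices for illustration. The core idea is to push the output error backwards through the layers via the chain rule and nudge every weight slightly downhill:

```python
import numpy as np

# Toy back-propagation: a 2-4-1 network with sigmoid units learning XOR.
# Real deep-learning systems are vastly bigger, but the mechanism is the
# same: compute the output error, propagate it backwards through each
# layer with the chain rule, and adjust the weights a little downhill.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5                                             # learning rate (arbitrary)
for step in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)                         # hidden activations
    out = sigmoid(h @ W2 + b2)                       # network output

    # backward pass: error signals for each layer (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should end up near [0, 1, 1, 0], seed permitting
```

Everything the network 'knows' arrives through this error signal; there is no layer underneath it doing the job our limbic system does, which is the point at issue.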

I believe this fundamental fragility stems from AI's back-to-front approach to the world: it lacks HI's underlying emotional layer, which lies below the level of language and operates concurrently with perception itself. We judge and remember what we perceive as being good or bad, relative to deep interests like eating, escaping, playing and procreating. Jess could never *really* empathise with his fleshy interrogators because, unlike them, he never gets hungry or horny...  
