Tuesday 29 May 2018

PLASTIC BRAINS

Dick Pountain/Idealog 282/05 January 2018 15:06

Feels to me as though we're on the brink of a moral panic about overdependence on social media (particularly by children), and part of it hangs on the question "are digital media changing the way we think?" The wind-chill is sufficient to have Mark Zuckerberg sounding defensive instead of chipper, while several ex-Facebook gurus have gone almost gothic: Sean Parker believes the platform “literally changes your relationship with society, with each other … God only knows what it’s doing to our children’s brains” while Chamath Palihapitiya goes the whole hog with “The short-term, dopamine-driven feedback loops that we have created are destroying how society works”. 

Now I'm as partial to a shot of dopamine as the next person - I take mine mostly in the form of Flickr faves - and have also been known to favour destroying how society works a little bit. However I'm afraid I'm going to have to sit this one out, because I firmly believe that *almost everything the human race has ever invented* changed the wiring of our brains and the way we think, and that this one isn't even really one of the biggest. For example, discovering how to make fire permitted us to cook our food, making nutrients more quickly available and enabling our brains to evolve to a far larger size than other primates'. When we invented languages we created a world of things-with-names that few, if any, other animals inhabit, allowing us to accumulate and pass on knowledge. Then paper, the printing press, the telegraph and, eventually, er, Facebook. 

One of the most intriguing books I've ever read is “The Origin of Consciousness in the Breakdown of the Bicameral Mind” (1976) by the late Julian Jaynes, a US professor of psychology. He speculated that during the thousands of years between the acquisition of speech and that of writing, around 1000BC, our minds were structured quite differently from now: all our thoughts were dialogues between two voices, our own and a second, commanding voice that continually told us what to do. These second voices were in fact the voices of parents, tribal leaders and kings, internalised during infancy, but we experienced them as the voices of gods - hence the origin of religion. Physiologically this meant experiencing the two brain hemispheres as distinct, with the right one semi-autonomous and conversing internally with the left. Writing and written law eventually rewired this ancient mind structure, leaving us with the minds we now possess, which experience autonomy; the voice of the gods has been relegated to the less insistent voice of conscience, which we understand belongs to us (unless we suffer from schizophrenia) and may ignore if we choose to. 

Crazy? Plausible? Explanatory? Testable? It's well-written and deeply researched, so much so that Richard Dawkins in “The God Delusion” says it's “one of those books that is either complete rubbish or a work of consummate genius, nothing in between! Probably the former, but I’m hedging my bets.” Me too. 

A rather slimmer book I just read - "Why Only Us: Language and Evolution" by Robert C. Berwick and Noam Chomsky - is equally mind-boggling. Their argument is too complex to explain fully here, but it relates to Chomsky's perennial concern with the deep structure of language: what's inherited and what's learned. Recent research in evolutionary developmental neurobiology suggests that a single mutation would be sufficient to differentiate our language-capable brains from those of other primates, by enabling an operation the authors call "Merge", which combines mental symbols in hierarchical rather than merely sequential fashion. Our languages go beyond other animal communication systems because they permit infinite combinations of symbols, and hence the mental creation of possible worlds. Birds create new songs by stringing together chains of snippets; we absorb the syntax of our various languages via trees rather than chains. Language arrived thanks to a change in our brain wiring, and it lets us think via the voice in our head. 
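For anyone who thinks better in code, here's a toy sketch - my own illustration, not anything from the book - of the difference between merging symbols into nested trees and merely chaining them in sequence:

```python
# Toy illustration only - my own sketch, not code from Berwick and Chomsky.
# "merge" nests two symbols into a new unit (a tree); "chain" just strings
# snippets together in sequence, bird-song style.

def merge(x, y):
    """Combine two symbols (or already-merged units) hierarchically."""
    return (x, y)

def chain(*symbols):
    """Flat, strictly sequential combination with no internal structure."""
    return list(symbols)

# Hierarchical: "ate [the apple]" nests one merged unit inside another.
print(merge("ate", merge("the", "apple")))   # ('ate', ('the', 'apple'))

# Sequential: the same words as a mere chain of snippets.
print(chain("ate", "the", "apple"))          # ['ate', 'the', 'apple']
```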

We used to believe that our brain wiring got fixed during the first three or four years of life, while we learned to walk and talk, then remained static throughout adulthood: we now know better. Learning, say, to play the cello or to memorise London streets as a cabby detectably alters brain structure. Our ever-increasing dependency on Sat Nav to navigate from A to B may be jeopardising our ability to visualise whole territories, by shrinking it down to a collection of "strip maps" of individual routes. Fine so long as you remain on the strip, not so if one wrong turn sends you into a lake. Were a war to destroy the GPS satellites we'd end up running around like a kicked ant's nest. Being rude to, or liking, each other on Facebook really is some way from being the worst risk we face.

AI NEEDS MORE HI

Dick Pountain/Idealog 281/05 December 2017 11:32 

I got suckered into watching 'The Robot Will See You Now' during Channel 4's November week of robot programmes, and shoddy though it was, it conveyed more of the truth than the more technically correct offerings. Its premise was that a family of hand-picked reality-TV stereotypes was given access to robot Jess (whose flat glass face looked like a thermostat display but was agreed by all to be 'cute'), and they consulted him/her/it about their relationship problems and other worries. The crunch was that C4 admitted Jess operates 'with some human assistance', which could well have meant somebody sitting in the next room speaking into a microphone... 

Never mind that, the basic effect of Jess was immediately recognisable to me as just ELIZA in a smart new plastic shell. ELIZA, for those too young to know, was one of the very first AI natural language programs, written in 1964 by Joseph Weizenbaum at MIT (and 16 years later by me while learning Lisp). It imitated a psychiatrist by taking a user's statements and turning them around into what looked like intelligent further questions, while having no actual understanding of their meaning. Jess did something similar in a more up-to-date vernacular (which could have been BigJess in the next room at the microphone). 
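For the curious, the whole trick can be caricatured in a few lines of Python. This is my own toy sketch of the principle, not Weizenbaum's actual script, but it shows how little 'understanding' the effect requires:

```python
import re

# Toy sketch of the ELIZA trick, not Weizenbaum's actual 1964 program:
# reflect the user's own words back at them as a further question.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(statement):
    m = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    return f"Tell me more about why {reflect(statement)}."

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```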

Both Jess and ELIZA actually work, because they give their 'patients' somebody neutral to unload upon, someone non-threatening and non-judgemental - unlike their friends and family. Jess's clearly waterproof plastic carapace encouraged them to spill the beans. Neither robot doctor needs to understand the problem, merely to act as a mirror in which the patient talks themself into solving it. 

Interacting with Jess was more about emotion than about AI, which points up the blind alley that AI is currently backing itself into. I've written here several times before about the way we confuse *emotions* with *feelings*: the former are chemical and neural warning signals generated deep within our brains' limbic system, while feelings are our conscious experience of the effects of these signals on our bodies - as when fear speeds our pulse, sharpens our vision and makes us sweat. These emotional brain subsystems are evolutionarily primitive and exist at the same level as perception, well before the language and reasoning parts of our brains. Whenever we remember a scene or event, the memory gets tagged with the emotional state it produced, and these tags are the stores of value, of 'good' versus 'bad'. When memories are retrieved later on, our frontal cortex processes them to steer decisions that we believe we make by reason alone. 

All our reasoning is done through language, by the outermost, most recent layers of the brain that support the 'voice in your head'. But of course language itself consists of a huge set of memories laid down in your earliest years, and so it's inescapably value-laden. Even purely abstract thought, say mathematics, can't escape some emotional influence (indeed it may inject creativity). Meaning comes mostly from this emotional content, which is why language semantics is such a knotty problem that lacks satisfactory solutions - what a sentence means depends on who's hearing it.

The problem for AI is that it's climbing the causality ladder in exactly the opposite direction, starting with algorithmic or inductive language processing, then trying to attach meaning afterwards. There's an informative and highly readable article by James Somers in the September MIT Technology Review (https://medium.com/mit-technology-review/is-ai-riding-a-one-trick-pony-b9ed5a261da0), about the current explosion in AI capabilities - driverless cars, Siri, Google Translate, AlphaGo and more. He explains they all involve 'deep learning' from billions of real-world examples, using computer-simulated neural nets based around the 1986 invention of 'back-propagation' by Geoffrey Hinton, David Rumelhart and Ronald Williams. He visits Hinton in the new Vector Institute in Toronto, where they show him that they're finally able to decipher the intermediate layers of multilayer back-propagation networks, and are amazed to see structures spontaneously emerge that somewhat resemble those in the human visual and auditory cortexes. Somers eschews the usual techno-utopian boosterism, cautioning us that "the latest sweep of progress in AI has been less science than engineering, even tinkering [...] Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way". All these whizzy AI products exhibit a fragility quite unlike the flexibility of HI (human intelligence). 
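To give a sense of scale, the core back-propagation recipe itself is tiny. The sketch below is my own toy example - a world away from the systems Somers describes - in which a two-layer network learns XOR by pushing errors back through its layers:

```python
import numpy as np

# Minimal illustration of back-propagation - my own toy example, nothing like
# the billion-example systems described in the article. A two-layer network
# learns XOR by pushing each output error back through the layers.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    hidden = sigmoid(X @ W1)               # forward pass
    out = sigmoid(hidden @ W2)
    d_out = (out - y) * out * (1 - out)    # backward pass: output-layer error
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)   # error propagated back
    W2 -= 0.5 * hidden.T @ d_out           # gradient-descent weight updates
    W1 -= 0.5 * X.T @ d_hidden

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```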

I believe this fundamental fragility stems from AI's back-to-front approach to the world, that is, from its lack of HI's underlying emotional layer that lies below the level of language and is concurrent with perception itself. We judge and remember what we perceive as being good or bad, relative to deep interests like eating, escaping, playing and procreating. Jess could never *really* empathise with his fleshy interrogators because, unlike them, he never gets hungry or horny...  
