Saturday 5 January 2019

LIFE IS CHEAPER

Dick Pountain/ Idealog 288/ 6th July 2018 20:19:52

I was struck by a recent article in Alphr about the newly important field of Emotional AI (http://www.alphr.com/artificial-intelligence/1009663/can-ai-really-be-emotionally-intelligent), the attempt to imbue computer-based Artificial Intelligence systems with something approaching human emotions, as a way of making communication with such systems easier, more effective and satisfying.

At one end these attempts are simplistic but do-able: Google’s Assistant is being tweaked to emulate some emotional connection, so that it apologises for errors (instead of that irritating ‘Something went wrong!’) and responds to praise with gratitude. At a slightly higher level, systems can be trained to deduce the emotional state of human users by combining cues in their recent inputs and knowledge of recent activities - so, you’re probably angry if you just got cut off before completing an online purchase.

At the highest level some researchers would like to build proper models of human emotional response that an AI system can employ when interacting with us, but critics have been quick to point out the flaw in this - an AI system that can understand human emotions but not feel them itself would be, according to the textbook definition, a sociopath, and how many more smart sociopaths do we need? And of course this would be the position, because as I’ve pointed out ad nauseam in these columns, a computer can’t feel emotion because emotions are biochemical processes that exist only in the bodies of living, moving, perceiving, eating, breeding creatures. Most computers have nothing resembling a body. The few that do - say the control system of an autonomous vehicle which has vision, motion and collision sensing - could perhaps be provided with a simple emotional system, but the emotions it would employ would be so unlike ours they’d be of no more use in communication than those of an ant would be.

Wanting AI systems to think and feel the way we do is actually a futile pursuit, a product of juvenile sci-fi thinking. What AI is good for is amplifying, assisting and correcting our own perceptions and agency in the world, not for replacing us. Driverless cars will never be safe or economically viable, but super-smart cars that remove much of the burden of driving from a human driver are just around the corner.

The pursuit of the android is futile not because of the software difficulties, great as those are, but from simple energetics. All living things are composed of cells (the ultimate modules, honed by 4 billion years of evolution) each of which contains not only its own energy storage but also the full blueprint and mechanism for its reproduction. Androids with brains made from silicon chips can’t ever reproduce themselves, whatever 3D-printer zealots would have you believe - mineral mines, metal works, chemical factories, wafer fabs are required for their reproduction. It will always be cheaper to train a human to do some difficult job, maybe assisted by sophisticated AI tools, than to build an android to do it.

And here’s where things have the potential to turn nasty. Stunning advances in genetic engineering enabled by CRISPR technology (see Idealog 284) mean that it will soon be easier and cheaper to modify an animal to do some difficult job than to build an android robot to do it, and once created these creatures would be self-replicating. I don’t normally go in for pitching sci-fi movie treatments, but I’m very tempted by an update of ‘The Island of Dr Moreau’ in which the evil scientist, instead of chopping up animals and sewing their bits back together in different combinations, employs CRISPR to create a race of not undead but living zombies, which have sufficient intelligence to follow instructions but few emotions and no ability to disobey.

You may believe, as I’d like to, that we would recoil from such an immoral and disgusting invention and ban it immediately, but how sure are you of that? People are now seriously discussing the problems of unemployment caused by automation by robots, but the consensus nevertheless remains that it will happen and we’ll have to find ways to cope with it, for example a universal basic income or similar. If the imperative to profitability leads people to acquiesce in that, why wouldn’t they eventually acquiesce to genetically-modified zombie slaves?

If you still object that it would never be allowed, may I recommend a book for your summer beach reading: it’s called ‘A History of the World in Seven Cheap Things’ by Raj Patel and Jason W. Moore. It explores the stuff humans have been doing ever since the 1400s, and are doing still, to maximise profits by rendering labour, money, food, energy, care, lives, and nature itself, as cheap as possible. Admit it, you’d like a zombie butler.


[Dick Pountain would still rather open the pod bay door himself]
