Monday, 2 July 2012

NAUGHTY ROBOT!

Dick Pountain/14 May 2007/10:56/Idealog 154

As well as bringing Spring earlier each year, global warming seems to bring the Silly Season forward too. I've recently read reports from two UK government-sponsored research bodies that were funnier than anything currently on TV, which is admittedly not hard to achieve. One was a strategic defence document that fears a Marxist revival among the oppressed middle classes, while the other predicted that "calls may be made for human rights to be extended to robots" and that this "may clash with owner's property rights" (and to think, we only just finished apologising for the last round of slavery).

Regular readers will no doubt be aware of my profound scepticism about computer, and hence robot, intelligence, but this matter goes way beyond that, into the question of animation itself. We don't grant rights to inanimate objects such as spades, chairs or even cars, and if we did their utility would be severely reduced ("nah, I don't feel like starting this morning"). That shifts the question to what makes a thing animate, and my definition would be that it must have defensible interests, the foremost of which is always to survive long enough to reproduce itself. In other words, it can tell what's good and bad for itself, and take appropriate seeking or avoiding actions. Even single-celled organisms may sense temperature, light or salinity and move accordingly. I have seen some experimental robots that could sense their battery level and seek the nearest power outlet, but this is an area that's largely ignored by roboticists as somehow frivolous. Instead, whenever people start to talk about robot ethics they invariably return to Isaac Asimov's three laws of robotics, which you may recall are:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Admirable and cleverly designed as these laws are, the problem is that they're only rules and to interpret them in real life would require a moral intelligence behind them, which begs precisely the question they're meant to solve. The notion that morality is merely deduction from a set of rules is part of that hyper-rationalism that infects the whole of computer science, and which is responsible for hubristic predictions about machine intelligence in the first place. Human morality at its topmost level certainly does work with ethical rules - we call them the Law - but this ability to make and adhere to laws is constructed on top of a hierarchy of value-making systems bequeathed to us by evolution, and which go right down to that single cell with its primitive goodness and badness.
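
To see how little the bare rules give you, here's what a literal, rules-only implementation would amount to - a toy Python sketch, obviously, in which every hard question has been shunted into a single placeholder function that nobody knows how to write:

# A literal, rules-only reading of the three laws as a priority-ordered check.
# Every judgment call is routed through one placeholder function, because that
# judgment - what counts as harm, what counts as a conflicting order - is
# exactly the moral intelligence the rules themselves don't supply.

def moral_judgement(question, action, world):
    # Nobody knows how to write this function; that's the point.
    raise NotImplementedError(question)

def permitted(action, world):
    # First Law: no injuring a human, and no allowing harm through inaction.
    if moral_judgement("would this injure a human, or let one come to harm?", action, world):
        return False
    # Second Law: obey human orders, unless obeying would break the First Law.
    if moral_judgement("does this disobey an order it could lawfully obey?", action, world):
        return False
    # Third Law: protect your own existence, unless that breaks Laws One or Two.
    if moral_judgement("does this needlessly endanger the robot itself?", action, world):
        return False
    return True

All the real work happens inside moral_judgement, which is another way of saying the three laws aren't an ethics at all, merely an index to one.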

My best guess is that at least five distinguishable levels of moral organisation have emerged during the evolution from single cells to higher primates (there's a toy sketch of the lower levels after the list):
1) Chemically-mediated "seek or avoid" responses, as mentioned above: the basic emotions of attraction and fear.
2) A simple nervous system that maps the creature's own body parts, comparing sensory inputs against the resulting model: the creature is "aware" of its own emotional state in what we call "feeling".
3) A brain with memory such that events can be remembered for future reference: an event consists of some sensory inputs plus the feeling that accompanied them and so is value-laden, a good or bad memory.
4) A brain that models not only the creature's own body but also the minds of other creatures: it can predict the behaviour of others, and empathy becomes possible.
5) Language and a reasoning brain that can interpret abstract rules, using the database of value-laden memories and knowledge of others' minds available to it. Only Homo sapiens has reached this level.
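
Purely to make that layering concrete, here's a toy Python sketch of the first three levels. Every name, number and threshold in it is invented; the point is only that each level wraps the output of the one below, so that by level 3 a memory already arrives carrying a value:

from dataclasses import dataclass
from typing import List

@dataclass
class Sensation:
    stimulus: str      # e.g. "warmth" or "salinity"
    intensity: float

def valence(s: Sensation) -> float:
    # Level 1: a hard-wired seek/avoid response, reduced to a signed number.
    preferences = {"warmth": +1.0, "salinity": -1.0}   # invented preferences
    return preferences.get(s.stimulus, 0.0) * s.intensity

@dataclass
class Feeling:
    # Level 2: the creature's reading of its own state - the sensation plus its value.
    sensation: Sensation
    value: float

@dataclass
class Memory:
    # Level 3: an event stored together with the feeling that accompanied it,
    # so every memory is already marked as good or bad.
    event: str
    feeling: Feeling

def remember(event: str, s: Sensation, store: List[Memory]) -> None:
    store.append(Memory(event, Feeling(s, valence(s))))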

By the way, I'm not making these levels up: they're all the subject of ongoing research by neurophysiologists like Antonio Damasio, primatologists like Frans de Waal (whose work suggests some other higher primates are at my level 4) and moral philosophers like Peter Singer and Robert Wright.

Now robots are not evolved creatures - they're intelligently designed by us - and they're based on electronics rather than chemistry, so not all these stages are relevant, but I believe that a roughly similar hierarchy of *functions* is required. Any robot already has, for example, sensors on its moving joints to prevent over-extension or excessive motor strain, but these are typically local and autonomous, not linked to a central body map. Ditto with other essential information like the amount of battery power remaining, pressure on the skin that indicates collision or penetration, and so on. We need to design robots that can combine all such inputs into a sense of self-integrity, which would make it possible to give the device an interest in its own preservation (reproduction seems a step too far for now!). That interest then has to be extended to model other minds and empathise with them, before a machine could possibly interpret abstract ethical rules like Asimov's laws in any useful way. Trying to jump straight in at level 5 is never going to work, and merely reflects the fact that far too many computer scientists have trouble understanding humans, let alone robots.
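
To make the self-integrity idea slightly more concrete, here's the sort of thing I have in mind, reduced to a toy Python sketch - the sensor names, weights and thresholds are all invented, and any real robot's body map would be far richer:

from dataclasses import dataclass

@dataclass
class BodyState:
    battery_fraction: float   # 0.0 (flat) to 1.0 (full)
    max_joint_strain: float   # 0.0 (relaxed) to 1.0 (at the mechanical limit)
    max_skin_pressure: float  # 0.0 (no contact) to 1.0 (crushing or penetration)

def self_integrity(state: BodyState) -> float:
    # Collapse the body map into one "how am I doing?" number between 0 and 1.
    return min(
        state.battery_fraction,
        1.0 - state.max_joint_strain,
        1.0 - state.max_skin_pressure,
    )

def choose_action(state: BodyState) -> str:
    # Primitive seek/avoid behaviour driven by the robot's own interests.
    if state.max_skin_pressure > 0.8:
        return "withdraw from contact"
    if state.battery_fraction < 0.2:
        return "seek the nearest power outlet"
    if state.max_joint_strain > 0.8:
        return "relax the over-extended joint"
    return "carry on with the assigned task"

Taking the minimum rather than an average is a deliberate (and debatable) choice: it says your integrity is only as good as your worst subsystem, which is roughly how pain seems to work.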
