Monday 24 August 2020

THE SKINNER BOX

Dick Pountain / Idealog 303 / 4th October 2019 10:27:48

We live in paranoid times, and at least part of that paranoia is being provoked by advances in technology. New techniques of surveillance and prediction cut two ways: they can be used to prevent crime and to predict illness, but they can also be abused for social control and political repression – which of these one sees as more important is becoming a matter of high controversy. The recent street demonstrations in Hong Kong highlighted the way that sophisticated facial-recognition tech, when combined with CCTV built into special lamp-posts, can enable a state to track and arrest individuals at will.

But the potential problems go way further than this, which is merely an extension of current law-enforcement technology. Huge advances in AI and Deep Learning are making it possible to refine those more subtle means of social control often referred to as ‘nudging’. Nudging means getting people to do what you want them to do, or what is deemed good for them, not by direct coercion but by clever choice of defaults that exploit people’s natural biases and laziness (both of which we understand better than ever before thanks to the ground-breaking psychological research of Daniel Kahneman and Amos Tversky).

The arguments for and against nudging involve some subtle philosophical principles, which I’ll try to explain as painlessly as possible. Getting people to do “what’s good for them” raises several questions: who decides what’s good; is their decision correct; even if it is, do we have the right to impose it; and what about free will? Liberal democracy (which we still just about have, certainly compared to Russia or China) depends upon citizens being capable of making free decisions about matters important to the conduct of their own lives. But what if advertising, or addiction, or those intrinsic defects of human reasoning that Kahneman uncovered, so distort their reckoning as to make them no longer meaningfully free – what if they’re behaving in ways contrary to their own expressed interests and injurious to their health? Examples of such behaviours, and the success with which we’ve dealt with them, might be compulsory seat belts in cars (success), crash helmets for motorcyclists (success), smoking bans (partial success), US gun control (total failure).

Such control is called “paternalism”, and some degree of it is necessary to the operation of the state in complex modern societies, wherever the stakes are sufficiently high (as with smoking) and the costs of imposition, in both money and offended freedom, are sufficiently low. However, there are libertarian critics who reject any sort of paternalism at all, while an in-between position, “libertarian paternalism”, claims that the state has no right to impose but may only nudge people toward correct decisions, for example over opting in versus opting out of various kinds of agreement – mobile phone contracts, warranties, mortgages, privacy agreements. People are lazy and will usually go with the default option, careful choice of which can nudge rather than compel them toward the desired decision.

The thing is, advances in AI are already enormously amplifying the opportunities for nudging, to a paranoia-inducing degree. The nastiest thing I saw at the recent AI conference in King’s Cross was an app that reads shoppers’ emotional states using facial analysis and then raises or lowers the price of items offered to them on the fly! Or how about CTRL-labs’ app that non-invasively reads your intention to move a cursor (last week Facebook bought the firm). Since vocal cords are muscles too, that non-invasive approach might conceivably be extended with even deeper learning to predict your speech intentions, the voice in your head, your thoughts…

I avoid both extremes in such arguments about paternalism. I do believe that the climate crisis is real and that we’ll need to modify human behaviour a lot in order to survive, so any help will be useful. On the other hand I was once an editor at Oz magazine and something of a libertarian rabble-rouser in the ‘60s. In a recent Guardian interview, the acerbic comedy writer Chris Morris (‘Brass Eye’, ‘Four Lions’) described meeting an AA man who showed him the monitoring kit in his van that recorded his driving habits. Morris asked “Isn’t that creepy?” but the man replied “Not really. My daughter’s just passed her driving test and I’ve got half-price insurance for her. A black-box recorder in her car and a camera on the dashboard measure exactly how she drives and her facial movements. As long as she stays within the parameters set by the insurance company, her premium stays low.” This sort of super-nudge comes uncomfortably close to China’s punitive Social Credit system: Morris called it a “Skinner Box”, after the American behaviourist BF Skinner, who used one to condition his rats…
