Tuesday 4 September 2018

PATCHES ON YOUR GENES

Dick Pountain/ Idealog 284/ 7th March 2018 11:29:07

When I was 8 or 9 one of my Christmas presents was a lavishly illustrated book called “How And Why It Works”, which explained everything from airliners to oil-wells to reflecting telescopes. It immediately became my favourite, along with a nature book about curious animals like the echidna. My course in life was set right there and then. I just wanted to know how everything works – including you and me – so I became a biochemist, and then through a series of flukes a computer nerd.

Books can still have that sort of effect on me, but fairly rarely nowadays, and when one does I occasionally write about it here. The last time that happened was back in August 2016 when "Endless Forms Most Beautiful" by Sean B. Carroll overcame my reluctance to get to grips with Evolutionary Developmental Biology (Evo Devo). That book helped me understand that all living things are indeed computational systems, but not in the naive way that the AI brigade would have us believe. Every living thing contains a genetic apparatus which combines a database of inherited features with a collection of distributed, self-modifying, real-time processors and 3D printers whose outputs are flesh, blood and bones, leaves, bacterial cell walls, and also those nerves and brains that AI concerns itself with.

Well, it’s just happened again. I’d been vaguely aware for several years of a revolution in the technology of gene editing, one that will enable us to actually reprogram this system for ourselves (for better or for worse). But as with Evo Devo, I’d pushed it to the back of my mind, unwilling to tackle the mental effort needed to understand. What’s fortified me this time around is a short article (https://www.lrb.co.uk/v40/n04/rupert-beale/diary) in the London Review of Books by Dr Rupert Beale, a scientist at the MRC Laboratory of Molecular Biology in Cambridge, which explains in admirably lucid and non-technical fashion the new techniques of CRISPR.

Beale knows his stuff because he researches bacteriophages, viruses that infect bacteria. It was work on phages around 20 years ago (by a Danish industrial yoghurt company among others) that triggered this revolution. Bacteria, though just single cells, have evolved a very simple immune system – whenever they survive a phage attack they snapshot a chunk of its genetic sequence into their own DNA as a memory of the crime. In any future infection a bacterium can recognise that sequence and use an enzyme called Cas9 to snip it out, thus killing the invading phage. These snapshots consist of “clustered regularly interspaced short palindromic repeats”, or CRISPR for short.

Those of you involved in computer security (hi Davey) might recognise this as much the same mechanism used by AV software to detect computer viruses from their “signature” code sequence. Molecular geneticists can now deploy the combination of CRISPR and Cas9 as tools to cut-and-paste gene sequences into the DNA of other creatures besides bacteria, up to and including Homo sapiens. In practice they don’t actually snip out target genes but rather disable them: Cas9 cuts the DNA strand but the host cell repairs it, over and again until it makes a mistake so that gene stops working. Knock out all 20,000 genes in the human genome one at a time, and you can build a vast library of gene-removed cells, for example to test cancer chemotherapy drugs by finding which genes are involved in a response. As Beale explains: “With CRISPR-Cas9 techniques we can kill genes, switch them on and, if we are lucky, replace bits of one gene with another. It doesn’t stop there: the guidance system can be employed to perform almost any function that can be bolted onto a protein.” In other words CRISPR will make it possible to directly code the human genome, and we’ll soon be seeing patches that cure specific genetic diseases, add resistance to infections and more. Patch Tuesday could eventually become something you do at your local clinic as well as on your PC.
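
In software terms the bacterial mechanism is essentially signature matching plus a crude learning rule. Here's a toy Python sketch of that analogy (purely illustrative - the class and the eight-letter "snapshot" rule are my inventions, and it bears no resemblance to real bioinformatics or to a real AV engine):

  # Toy sketch of the CRISPR-as-antivirus analogy - illustrative only.
  class ToyBacterium:
      def __init__(self):
          self.spacers = set()              # remembered chunks of past invaders

      def infect(self, phage_dna, snapshot_len=8):
          # If any remembered signature appears in the invader, "Cas9" cuts it.
          for sig in self.spacers:
              if sig in phage_dna:
                  return "invader recognised and cut"
          # Otherwise (assuming the cell survives) snapshot a chunk as a new spacer.
          self.spacers.add(phage_dna[:snapshot_len])
          return "new infection; signature stored"

  cell = ToyBacterium()
  print(cell.infect("GATTACAGGTTAC"))       # new infection; signature stored
  print(cell.infect("GATTACAGCCCCC"))       # invader recognised and cut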

Of course the risks as well as the benefits of such patching will pretty quickly become apparent (hopefully they’ll not be as bad as Windows 10). CRISPR has become big business and there are ongoing squabbles over the patent rights between various corporations and universities. Jennifer Doudna, a leading CRISPR researcher at Berkeley, in her excellent popular book “A Crack in Creation”, tackles some of the ethical problems that will arise as we supplant “the deaf dumb and blind system that has shaped genetic material on our planet for eons and replace it with a conscious intentional system of human directed evolution”. If GM lettuces created a ferocious worldwide protest, expect way more at the prospect of GM babies...

OUT THE WINDOW

Dick Pountain/ Idealog 283/ 9th February 2018 09:38:21

A few days ago my Chromebook told me it would like to reboot in order to update its operating system. I’d just made a cup of tea at the time (Bai Mu Dan if you’re interested). I hit the power button and nervously watched the progress bar – a habit picked up from 30 years of Windows use – and it finished updating while the tea was still hot. With the reboot completed a pop-up informed me that I could now download and run Android apps on the Chromebook. That was worth another biscuit (Biscoff Lotus if you’re interested).

Running Android apps on the Chromebook had been an ambition for most of the previous year, but up until now it had involved techie adventures in developer mode that I didn’t feel like attempting. The first driver of this ambition was a need to program in Python on the machine, satisfied instantly by downloading the Android version QPython3. It compiled and ran everything I’ve written under Windows, the only change required being to prefix any file paths with “/storage/emulated/0/” (of which more later). 
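
For the curious, the change amounted to nothing grander than this (a hypothetical snippet - the filename is invented, but the Android path is the one QPython3 expects):

  import os

  # User files on Android live under /storage/emulated/0/; elsewhere, leave paths alone.
  ANDROID_ROOT = "/storage/emulated/0/"

  def data_path(filename):
      if os.path.isdir(ANDROID_ROOT):
          return os.path.join(ANDROID_ROOT, filename)
      return filename

  with open(data_path("notes.txt"), "w") as f:
      f.write("hello from QPython3\n")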

Almost as strong was a desire for certain Android apps I’d come to depend upon, like the PC-Pro-Award-winning Citymapper which enables me to navigate through London with unprecedented ease. I had tried the browser-based version for Chrome, but it was so inferior to the Android app in UI terms, particularly the interactivity of the maps, that I preferred to use it on my phone or tablet. And that brings me to the main point of this column: just like Windows native applications, Android apps nowadays exploit the hardware (especially screen real-estate) so much better than browser-based versions that there’s no contest. 

Another application I use every day is Spotify, on which I listen to music at home via Windows or Chromebook, while walking on heath or park on my phone. But until this update I had three different versions of the Spotify client, differing from one another in various ways, some subtle, some downright infuriating. The Windows version is still the most complete in that it supports playlist folders to organise my scores of lists, and also drag-and-drop to rearrange these folders and their contents. The Android version has folders that aren’t drag-and-drop, but does support a new UI with a taskbar at the bottom that’s easier to use on small phone screens. The browser version I’d been using on the Chromebook is a nightmare that doesn’t support folders at all and steamrollers them into un-navigably flat lists which aren’t even complete: the Artists tab only displays a fraction of what’s there. Spotify is nowadays ambivalent about playlists and deprecates them in favour of its newer, non-hierarchical Your Music (Save| Songs| Artists| Albums) system, hence this bodge which I thoroughly enjoyed uninstalling.

The combination of Google Contacts, Calendar and the ever-increasingly-wonderful Google Keep ensures that all my appointments, addresses, notes and other important data are always automatically synced between Windows, Chrome and Android machines – I can even do voice dictation on my phone and have it there waiting on the desktop when I get home. 

So what about writing? Well, the answer is that I'm writing this column in Google Docs. Running Android means that I could now have Microsoft Word, but in truth I'd stopped using MS Office even under Windows several years ago, in favour of LibreOffice. Google Docs does everything I need except for its lack of simple macros. Under Windows I used an external app, Macro Express, plus a few WordBasic scripts. I've found an almost complete replacement in a Chrome extension called ProKeys that stores 'snippets' of text, complete with placeholders and automatic date and time stamping, which covers 90% of what I want to do. It can't do my text editing macros – swapping pairs of letters or words under the cursor – but I can't be bothered to learn Google Docs' JavaScript API and will do without.
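
For anyone wondering what those macros actually do, the idea is trivial to express in code (a toy Python sketch of my own, not a Docs script - the function names are made up):

  def swap_chars(text, i):
      # Swap the two characters just before cursor position i ("teh" -> "the").
      if 1 < i <= len(text):
          return text[:i-2] + text[i-1] + text[i-2] + text[i:]
      return text

  def swap_words(words, i):
      # Swap the word at index i with the one before it.
      if 0 < i < len(words):
          words[i-1], words[i] = words[i], words[i-1]
      return words

  print(swap_chars("teh cat", 3))                          # the cat
  print(" ".join(swap_words("cat the sat".split(), 1)))    # the cat sat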

Is there a downside? Well yes, the confusion of three different file locations, the default My Drive in the cloud, the small local Download drive and a completely separate local Android drive. Apps mostly hide this from you, but it really is time Google did that long-promised merger of Android and ChromeOS. As a grizzled pioneer of the personal computer revolution I’ll never be entirely happy having everything in someone else’s cloud and will always want local copies of current work and vital data, so here’s a suggestion. A merged OS (ChromDroid?) might have a file attribute that takes these three values:

  l = local only
  c = cloud only
  s = local first, then sync to cloud
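
In code terms that attribute is nothing fancier than this (a sketch of a purely hypothetical API - Google ships nothing of the kind, and all the names are mine):

  from enum import Enum

  class Placement(Enum):
      LOCAL = "l"       # local only
      CLOUD = "c"       # cloud only
      SYNCED = "s"      # local first, then sync to cloud

  def on_save(path, placement):
      # Hypothetical sketch of where a merged ChromeOS/Android might put a file.
      if placement is Placement.LOCAL:
          return f"write {path} to local storage only"
      if placement is Placement.CLOUD:
          return f"write {path} straight to Drive"
      return f"write {path} locally, then queue a background sync to Drive"

  print(on_save("idealog283.gdoc", Placement.SYNCED))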

Give me that and Windows would go right, er, out the window.

Tuesday 29 May 2018

PLASTIC BRAINS

Dick Pountain/Idealog 282/05 January 2018 15:06

Feels to me as though we're on the brink of a moral panic about overdependence on social media (particularly by children), and part of it hangs on the question "are digital media changing the way we think?" The wind-chill is sufficient to have Mark Zuckerberg sounding defensive instead of chipper, while several ex-Facebook gurus have gone almost gothic: Sean Parker believes the platform “literally changes your relationship with society, with each other … God only knows what it’s doing to our children’s brains” while Chamath Palihapitiya goes the whole hog with “The short-term, dopamine-driven feedback loops that we have created are destroying how society works”. 

Now I'm as partial to a shot of dopamine as the next person - I take mine mostly in the form of Flickr faves - and have also been known to favour destroying how society works a little bit. However I'm afraid I'm going to have to sit this one out because I firmly believe that *almost everything the human race has ever invented* changed the wiring of our brains and the way we think, and that this one isn't even really one of the biggest. For example discovering how to make fire permitted us to cook our food, making nutrients more quickly available and enabling our brains to evolve to far larger size than other primates'. When we invented languages we created a world of things-with-names that few, probably no, other animals inhabit, allowing us to accumulate and pass on knowledge. Paper, the printing press, the telegraph, eventually, er, Facebook. 

One of the most intriguing books I've ever read is “The Origin of Consciousness in the Breakdown of the Bicameral Mind” (1976) by the late Julian Jaynes, a US professor of psychology. He speculated that during the thousands of years between the acquisition of speech and of writing around 1000BC our minds were structured quite differently from now, that all our thoughts were dialogues between two voices, our own and a second commanding voice that continually told us what to do. These second voices were in fact the voices of parents, tribal leaders and kings internalised during infancy, but we experienced them as the voices of gods - hence the origin of religion. Physiologically it was the result of fully experiencing both brain hemispheres, with the right one semi-autonomous and conversing internally with the left. Writing and written law eventually rewired this ancient mind structure, leaving us with the minds we now possess which experience autonomy, the voice of gods being relegated to the less insistent voice of conscience which we understand belongs to us (unless we suffer from schizophrenia) and may ignore if we choose to. 

Crazy? Plausible? Explanatory? Testable? It's well-written and deeply researched, so much so that Richard Dawkins in “The God Delusion” says it's “one of those books that is either complete rubbish or a work of consummate genius, nothing in between! Probably the former, but I’m hedging my bets.” Me too. 

A rather slimmer book I just read - "Why Only Us: Language and Evolution" by Robert C. Berwick and Noam Chomsky - is equally mind-boggling. Their argument is too complex to explain fully here, but it relates to Chomsky's perennial concern with the deep structure of language, what's inherited and what's learned. Recent research in evolutionary developmental neurobiology suggests that a single mutation is sufficient to differentiate our language-capable brains from those of other primates: an operation the authors call "Merge", which combines mental symbols in hierarchical rather than merely sequential fashion. Our languages go beyond other animal communication systems because they permit infinite combinations of symbols and hence the mental creation of possible worlds. Birds create new songs by stringing together chains of snippets: we absorb the syntax of our various languages via trees rather than chains. Language arrived thanks to a change in our brain wiring and it lets us think via the voice in our head.
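
The difference between chains and trees is easy to make concrete (a toy Python sketch of my own, nothing to do with Berwick and Chomsky's actual formalism):

  # A bird-style "chain": snippets strung together strictly in sequence.
  chain = ["the", "man", "who", "left", "returned"]

  # A Merge-style tree: symbols combined two at a time into a hierarchy, so the
  # structure (who returned?) survives however long the sentence grows.
  def merge(a, b):
      return (a, b)

  tree = merge(merge("the", merge("man", merge("who", "left"))), "returned")
  print(tree)    # (('the', ('man', ('who', 'left'))), 'returned')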

We used to believe that our brain wiring got fixed during the first three or four years of life, while we learned to walk and talk, then remained static throughout adulthood: we now know better. Learning, say, to play the cello or to memorise London streets as a cabby detectably alters brain structure. Our ever-increasing dependency on Sat Nav to navigate from A to B may be jeopardising our ability to visualise whole territories, by shrinking our mental maps down to a collection of "strip maps" of individual routes. Fine so long as you remain on the strip, not so if one wrong turn sends you into a lake. Were a war to destroy the GPS satellites we'd end up running around like a kicked ant's nest. Being rude to, or 'liking', each other on Facebook really is some way from being the worst risk we face.

AI NEEDS MORE HI

Dick Pountain/Idealog  281/05 December 2017 11:32 

I got suckered into watching 'The Robot Will See You Now' during Channel 4's November week of robot programmes, and shoddy though it was, it conveyed more of the truth than the more technically-correct offerings. Its premise was a family of hand-picked reality TV stereotypes being given access to robot Jess (whose flat glass face looked like a thermostat display but was agreed by all to be 'cute') and they consulted him/her/it about their relationship problems and other worries. The crunch was that C4 admitted Jess operates 'with some human assistance', which could well have meant somebody sitting in the next room speaking into a microphone...

Never mind that, the basic effect of Jess was immediately recognisable to me as just ELIZA in a smart new plastic shell. ELIZA, for those too young to know, was one of the very first AI natural language programs, written in 1964 by Joseph Weizenbaum at MIT (and 16 years later by me while learning Lisp). It imitated a psychiatrist by taking a user's questions and turning them around into what looked like intelligent further questions, while having no actual understanding of meaning. Jess did something similar in a more up-to-date vernacular (which could have been BigJess in the next room at the microphone). 
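
The trick is almost embarrassingly simple - a few lines of Python capture its flavour (a crude sketch, not Weizenbaum's original, and the rules here are just examples):

  import re

  # A crude ELIZA-flavoured reflector: no understanding, just pattern matching
  # and turning the patient's own words back into a question.
  REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
  RULES = [
      (r"i feel (.*)", "Why do you feel {0}?"),
      (r"i am (.*)",   "How long have you been {0}?"),
      (r"my (.*)",     "Tell me more about your {0}."),
      (r"(.*)",        "Please go on."),
  ]

  def reflect(phrase):
      return " ".join(REFLECT.get(w, w) for w in phrase.split())

  def respond(sentence):
      for pattern, template in RULES:
          m = re.match(pattern, sentence.lower())
          if m:
              return template.format(*(reflect(g) for g in m.groups()))

  print(respond("I feel nobody listens to me"))
  # Why do you feel nobody listens to you?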

Both Jess and ELIZA actually work, because they give their 'patients' somebody neutral to unload upon, someone non-threatening and non-judgemental - unlike their friends and family. Jess's clearly waterproof plastic carapace encouraged them to spill the beans. Neither robot doctor needs to understand the problem, merely to act as a mirror in which the patient talks themself into solving it.

Interacting with Jess was more about emotion than about AI, which points up the blind alley that AI is currently backing itself into. I've written here several times before about the way we confuse *emotions* with *feelings*: the former are chemical and neural warning signals generated deep within our brains' limbic system, while feelings are our conscious experience of the effects of these signals on our bodies, as when fear speeds our pulse, clarifies vision, makes us sweat. These emotional brain subsystems are evolutionarily primitive and exist at the same level as perception, well before the language and reasoning parts of our brains. Whenever we remember a scene or event, the memory gets tagged with the emotional state it produced, and these tags are the stores of value, of 'good' versus 'bad'. When memories are retrieved later on, our frontal cortex processes them to steer decisions that we believe we make by reason alone.

All our reasoning is done through language, by the outermost, most recent layers of the brain that support the 'voice in your head'. But of course language itself consists of a huge set of memories laid down in your earliest years, and so it's inescapably value-laden. Even purely abstract thought, say mathematics, can't escape some emotional influence (indeed it may inject creativity). Meaning comes mostly from this emotional content, which is why language semantics is such a knotty problem that lacks satisfactory solutions - what a sentence means depends on who's hearing it.

The problem for AI is that it's climbing the causality ladder in exactly the opposite direction, starting with algorithmic or inductive language processing, then trying to attach meaning afterwards. There's an informative and highly readable article by James Somers in the September MIT Technology Review (https://medium.com/mit-technology-review/is-ai-riding-a-one-trick-pony-b9ed5a261da0), about the current explosion in AI capabilities - driverless cars, Siri, Google Translate, AlphaGo and more. He explains they all involve 'deep learning' from billions of real-world examples, using computer-simulated neural nets based around the 1986 invention of 'back-propagation' by Geoffrey Hinton, David Rumelhart and Ronald Williams. He visits Hinton in the new Vector Institute in Toronto, where they show him that they're finally able to decipher the intermediate layers of multilayer back-propagation networks, and are amazed to see structures spontaneously emerge that somewhat resemble those in the human visual and auditory cortexes. Somers eschews the usual techno-utopian boosterism, cautioning us that "the latest sweep of progress in AI has been less science than engineering, even tinkering [...] Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way". All these whizzy AI products exhibit a fragility quite unlike the flexibility of HI (human intelligence). 
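
For a sense of how little magic sits at the core, a bare-bones back-propagation network fits in half a page (a minimal numpy sketch of the 1986 idea, a world away from a production deep-learning stack):

  import numpy as np

  # A tiny two-layer network trained by back-propagation to learn XOR.
  rng = np.random.default_rng(1)
  X = np.array([[0,0,1], [0,1,1], [1,0,1], [1,1,1]], dtype=float)   # third column acts as a bias input
  y = np.array([[0], [1], [1], [0]], dtype=float)

  W1 = rng.normal(size=(3, 4))          # input -> 4 hidden units
  W2 = rng.normal(size=(4, 1))          # hidden -> output
  sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

  for _ in range(20000):
      h = sigmoid(X @ W1)                         # forward pass
      out = sigmoid(h @ W2)
      err_out = (out - y) * out * (1 - out)       # backward pass: output-layer error...
      err_h = (err_out @ W2.T) * h * (1 - h)      # ...propagated back to the hidden layer
      W2 -= 0.5 * h.T @ err_out                   # nudge the weights downhill
      W1 -= 0.5 * X.T @ err_h

  print(out.round(2).ravel())     # with luck, close to [0, 1, 1, 0]; back-prop can get stuck

Scale that loop up to millions of weights and billions of examples and you have today's deep learning; the fragility Somers describes is what remains when that is all there is.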

I believe this fundamental fragility stems from AI's back-to-front approach to the world, that is, from its lack of HI's underlying emotional layer that lies below the level of language and is concurrent with perception itself. We judge and remember what we perceive as being good or bad, relative to deep interests like eating, escaping, playing and procreating. Jess could never *really* empathise with his fleshy interrogators because, unlike them, he never gets hungry or horny...  

Saturday 28 April 2018

THE MAPLOTUBAZON TRIANGLE

Dick Pountain/Idealog 280/29 October 2017 10:57

Living in 21st-century London poses many threats to the health of elderly gentlemen, from acid-flinging scooter thugs to spice-addled footpads, but these dwindle into statistical insignificance for me personally compared to the threat of getting lost in the Maplotubazon Triangle - the seductive maze of online tech retailers that encourages one to squander money without even leaving the house. 

The mid-life collecting impulse that afflicts many males with disposable incomes is a pretty well-understood pathology, which often manifests itself through fast cars, motorcycles or vintage electric guitars. I try to resist it myself, partly out of principle but also from a certain parsimony towards the confiscatory prices that prevail in these adult-toy markets. However I'm still vulnerable because I do enjoy playing guitar. I don't collect guitars as such, being satisfied with two I purchased years ago - a 1984 Gibson Blue Ridge acoustic and a 1987 Fender MIJ (Made in Japan) ‘Hendrix re-issue’ Stratocaster, plus a more recent Hofner Beatle Bass replica. I do however have an unhealthy relationship with effects pedals, provoked by my doomed desire to emulate Bill Frisell, who deploys delay, looping and reverb as instruments in their own right. 

It's not that I actually collect pedals either, already having (most) of the ones I want, but rather that I need to connect them all into a *system*, the ergonomics of which have become a bit of an obsession. I have five pedals that need connecting: an Akai Head Rush delay/echo/looper (bought after I first saw KT Tunstall); a Zoom MRT-3 drum machine, which in addition to hundreds of preset patterns lets me use its pads interactively; a Zoom G1on multi-effect processor; a Belcat tremolo pedal and a Rowin twin looper. I can play guitar(s) and drum patterns into the looper, then play over it like a one-man trio.  

The problem is the vast number of ways you can connect these up: which should go before which, parallel or series, which can be bypassed? It's even forced me to draw flow charts. Do I need tremolo on the drums? No. All these pedals ultimately feed into a small Marshall practice amp and the "glue" that holds them together is a dense thicket of short jack leads, 9v power leads and mixer boxes. And sniffing all this glue is what caused me to be cast away in the Maplotubazon Triangle - I have a branch of Maplins just down the road, I have a highly-active Amazon account, and I watch guitar-pedal porn on YouTube.  
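
I did the arithmetic, which didn't help (a back-of-envelope Python sketch):

  from itertools import permutations

  # Series orderings of my five pedals - before even considering parallel
  # paths, bypasses or where the mixer boxes go.
  pedals = ["Head Rush", "MRT-3", "G1on", "Belcat tremolo", "Rowin looper"]
  print(len(list(permutations(pedals))))    # 120 chains to agonise over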

This whole enterprise means waging war against two powerful enemies, mains-hum and impedance mismatch, my Scylla and Charybdis. Try new configuration, works until I add in one last pedal and then it hums. Bung in another £17 Maplins passive 3-channel mixer and the hum goes away, but I lose 40dB and the volume knob on my Strat behaves like an on/off switch. Put active mixer back in and it hums when you turn the third knob... Spend hours reading specs and user reviews of multi-pedal power supplies on Amazon, but never actually settle on the one to order. Will its power leads be long enough? Will it really be hum-free? Agony...

There's a deeper level of OCD still to which I've not succumbed, yet, namely to open up the boxes and alter their gubbins with a soldering iron. I've tasted this forbidden fruit via a YouTube video about hacking the MRT-3 to add an 8-socket patch panel and noise generator. I'm pretty damned good with a soldering iron (mine's a Draper butane-powered cordless one, if you're interested (which I'm sure you are)) but no... just no.

You may wonder whether there's any purpose beyond all this fiddling, and the short answer is yes. One day, once it all works well enough to navigate in real-time with confidence, and assuming that occurs within my lifetime, I'd like to perform in public. All pedals attached with velcro, just a single mains plug, hoick the rig down to The Green Note open-mike night and treat them to 15 minutes of space-age howls and twangs. You may wonder why I don't just find two human musicians who are into electronic, effect-heavy, free-jazz guitar music. I would, but it seems most of them are working part-time on remote unicorn ranches. 

And anyway, I'm almost there now. I've just discovered a configuration that works hum-free and loud so long as I use three separate wall wart power supplies. That being the case I've ordered, from Amazon, a Caline 10-way isolated power supply, which should fix me up for sure. I just found 1,140 YouTube videos, half of which say it's the bee's knees and half of which say it's crap, so a 50:50 chance...


[Dick Pountain has been a good boy in 2017 and would like Santa to bring him a DigiTech PDS-8000 Echo-Plus]

AWFUL LOT OF COFFEE

Dick Pountain/ Idealog 279 /05 October 2017 13:33

I love coffee. Or rather, I love making and drinking coffee - not talking about it, bragging about it or agonising over equipment choices. I neither roast nor grind my own beans, and am a stranger to the burr-grinder. Instead I work my way along Sainsbury's shelf of Fairtrade ground coffees (tip: Sumatra Mandheling is very nice). Occasionally, when I can be arsed, I go to the little guy in Delancey Street NW1 who employs gorgeous old-school gas roasters that give off sparks and fill the street with perfume. I do not buy any coffee that has passed through the alimentary canal of a tree-bound feliform, and avoid those hipster blends that have the pH of battery acid.

I've owned almost every type of maker, from horrid percolator to mini moka, over the last 50 years, but ended up preferring the espresso machine: living in Italy for 14 years will do that. I do *not* need to hear your stories about the Aeropress, and I've never owned a bean-to-cup machine because they combine the most irritating features of the laser printer and the photocopier: whenever you want a shot, some damned internal organ is either full or empty, and it tells you so on its horrid LCD display. I've had to buy several "desktop" espresso machines over the years - their pumps tend to expire two weeks after the warranty does - but still doubt I've spent the price of a bean-to-cup. 

I hear you all mutter "What's all this got to do with computers? He's finally lost the plot...", but I do have a connection, and it's not just that programmers run on coffee. The first book I ever had published was "A Tutorial Introduction To Occam Programming" in 1987, commissioned by Inmos and co-written with David May. When I began writing, the only parallel demo program they could give me to work from was, you guessed it, to run a coffee machine. A classic real-time, concurrent problem: how to run the heater and pump while also handling user input. I never actually got to build a coffee machine controlled by a transputer - it would have been way too expensive anyway.
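
The shape of that problem hasn't changed, though the notation has. Here's a rough Python analogue of the idea (emphatically not the original occam, and the temperatures and timings are invented):

  import asyncio

  # Heater, pump and user input run as concurrent tasks; the pump takes its
  # commands from a queue rather than one big polling loop.
  async def heater(state):
      while state["on"]:
          state["temp"] = min(state["temp"] + 5, 92)    # creep up to brewing temperature
          await asyncio.sleep(0.1)

  async def pump(commands, state):
      while state["on"]:
          cmd = await commands.get()
          if cmd == "espresso" and state["temp"] >= 90:
              print("pumping a shot at", state["temp"], "degrees")
          elif cmd == "off":
              state["on"] = False

  async def user(commands):
      await asyncio.sleep(2)               # wait for the boiler, then press the button
      await commands.put("espresso")
      await commands.put("off")

  async def main():
      state = {"on": True, "temp": 20}
      commands = asyncio.Queue()
      await asyncio.gather(heater(state), pump(commands, state), user(commands))

  asyncio.run(main())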

As a result of this early experience I developed a fascination with the user-interface of the espresso machine. Apart from switching it on and off there are really only three things you need to ask of it, namely to deliver coffee, deliver hot water or deliver steam to froth milk. You should be able to accomplish this with at most three buttons and lights, but it's astonishing what a pig's ear many manufacturers have made of it. Many try to skimp by only having two buttons or lights, which introduces needless combinations and confusions. Some of them believe a pressure gauge adds to the machismo, but the fact is that if a machine doesn't already know what 15 bar feels like then it's a bomb rather than a coffee maker.

Anyhow, my last machine's pump conked out a while ago and I replaced it with De Longhi's cute little Dedica EC680M. Narrow as a pod machine and beautiful in brushed-chrome and black, its user interface was designed by someone who gives a damn - and who ought to be teaching UX design in Redmond. It has just three buttons with back-lit icons on them that depict one cup, two cups and a puff of steam. There's no LCD display and no menu, everything being controlled by a simple protocol based on the state of these icons: unlit, steady-lit or flashing. They can flash sequentially to indicate programming mode, in which you can set coffee temperature, water hardness and auto-off period by a short press. But the truly marvellous feature - which set me skipping round the kitchen when I discovered it - is the way you set how much coffee is delivered for one-cup or two-cups. Press the requisite button, hold it until it's delivered as much as you want, let go and it remembers that until further notice...
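
The entire protocol would fit in a few lines of code (a hypothetical sketch of the idea, not De Longhi's actual firmware - the doses and names are mine):

  # Three buttons, each with a light that is unlit, steady or flashing - no LCD, no menus.
  class ToyEspresso:
      def __init__(self):
          self.dose = {"one_cup": 40, "two_cup": 80}     # millilitres remembered per button
          self.lights = {"one_cup": "steady", "two_cup": "steady", "steam": "unlit"}

      def short_press(self, button):
          if button == "steam":
              self.lights["steam"] = "flashing"          # heating for steam
              return "making steam"
          return f"delivering {self.dose[button]} ml"

      def long_press(self, button, delivered_ml):
          # Hold the button until the cup is as full as you like; the machine
          # remembers that amount until further notice.
          self.dose[button] = delivered_ml
          return f"{button} reprogrammed to {delivered_ml} ml"

  machine = ToyEspresso()
  print(machine.long_press("one_cup", 55))    # one_cup reprogrammed to 55 ml
  print(machine.short_press("one_cup"))       # delivering 55 ml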

Staggeringly simple, highly effective, almost biological, being the way that many neuronal operations behave inside our brains. It's also so simple as to be beyond the designers of much computer software. In 2017 I still, every day, encounter programs in both Windows and ChromeOS that won't remember the last menu option you chose, hence forcing you back to the top and rendering some repetitive operations unspeakably tedious. Of course no protocol is entirely impervious to error, and rarely I'll press something at the wrong time, upset the sequencing and all the lights just flash. In that case an error handling routine is needed, and De Longhi's designer chose the most sophisticated and most popular one there is: turn it off and turn it back on again.

Friday 16 February 2018

ME, ROBOT?

Dick Pountain/Idealog 278/04 September 2017 09:57

Am I worried about a robot taking my job? The very question triggers in my imagination (and probably yours too) the vision of a shiny white plastic humanoid, possibly bearing a Honda logo, whirring smoothly into my study, sitting down in my typist's chair and tapping out this column on the keys of my laptop. But of course this isn't what happens at all.

Much of my job has already been automated away. 200 years ago I'd be writing this in ink with a dip pen, and forming each letter by hand. Now I press a key and the firmware of my computer forms each letter: I just choose the words and put them into order (were I foolish enough to turn on predictive text, the computer would try to bugger that up too). When I finished 200 years ago I'd probably roll up the paper and hand it to a boy who would run it round to the editorial office - robots had his job long ago. Now I press another key or two to send it by email. I'm just a word chooser and orderer and I get paid to do it, a crucial point.

Since we're such a social species it's hardly surprising that we're obsessed by humanoid robots, but they really aren't the biggest threat. They will certainly continue to improve in capability, and find roles in many service industries and social care where the more human they seem the better. Such applications raise deep ethical questions, and some very able people are already working on answers. My old friend Prof Alan Winfield (alanwinfield.blogspot.co.uk) works with the EPSRC (Engineering and Physical Sciences Research Council), IEEE and other bodies on a code for ethical regulation of robotics, based on principles like:

~ Robots should not be designed solely or primarily to kill or harm humans.
~ Robots should be designed to comply with existing laws, rights and freedoms, including privacy.
~ Humans, not robots, are responsible agents, so a person must be attributed legal responsibility for every robot.
~ Robots are industrial products that must meet industry standards of safety and security.
~ Robots should not employ deliberately deceptive appearance to exploit vulnerable users: their mechanical nature should remain obvious.

No, the main threat is not a robot taking your job but your job disappearing through less visible automation. "Moravec’s paradox" (named for AI researcher Hans Moravec) observes that it's easier for AI to imitate the advanced cognitive skills of a chess grandmaster than the simple perceptual and motor skills of a two-year-old child. The hardest part of my opening scenario isn't writing this column, but walking through the door and sitting in the chair, which makes a humanoid robot hopelessly inefficient and far too expensive for the job. Cheaper and more efficient instead to generate this column using an AI program that scrapes all my 20 years of previous columns and does some inferring (though I do flatter myself that you might notice the difference...)

In a powerful recent Guardian article (https://www.theguardian.com/business/2017/aug/20/robots-are-not-destroying-jobs-but-they-are-hollow-out-the-middle-class) Larry Elliott proposes a scenario in which increasingly polarised Western capitalist societies fragment further still. A tiny rich minority purchases the technology to automate away middle-class jobs, then re-deploys the labour so displaced to perform cheaply those manual and service tasks that can't profitably be performed by machines. Automating those tasks too would cause mass unemployment and destroy the market for products, whereas paying a minimal universal wage might keep the whole shebang running after a fashion, with 1% living in extreme luxury and 99% still living, but in extreme drudgery.

Last Sunday I booked online our winter's worth of concert tickets - no box-office clerks were employed, even on the telephone. When the various days arrive, we'll travel, say, to the Wigmore Hall by bus (in 5 years' time that might be driverless) to watch very talented people make unamplified music on acoustic wooden instruments designed over a century ago. And I'm prepared to pay and travel to hear this even though I could listen to "the same" music on Spotify. That evening I watched on TV a BBC Prom of marvellous Indian classical music, and noticed that one trio was using an electronic drone box in place of a human tanpura player, thus saving one quarter of their labour cost.

These are the sorts of decision that automation will increasingly face us with. I think I'm pretty good at choosing words and putting them into the right order, but the market may eventually not agree. However if we keep listening only to the market, then sooner rather than later we may find life is no longer worth living. What kinds of skills do we wish to preserve, regardless of efficiency or profit?

FAME GAME

Dick Pountain/Idealog 277/05 August 2017 11:05

I'm not overly prone to hero-worship, but that's not to say that I don't have a few: they include soldiers, scientists, philosophers and musicians, from Garibaldi to Richard Feynman. One of these heroes died in February of this year: Hans Rosling, a Swedish doctor, academic, statistician and public speaker. That may not sound a typically heroic CV, but his heroism consisted in inventing ways to make statistics both exciting and comprehensible to the public, then deploying this ability for a humane end, namely to counteract panic about overpopulation. This wasn't speculation, but was based on his early experience as a doctor in various developing countries.

His brilliant 2013 documentary "Don't Panic - the Truth About Population" (https://www.youtube.com/watch?v=FACK2knC08E) employed state-of-the-art Musion 3D animated infographics to show that as a nation's population achieves higher living standards its average fertility drops so steeply that the world's child population has already peaked, and total population is set to plateau at around 11 billion by 2100. And he believed this number could be fed if resources (particularly African land) were sensibly used. To be sure 2013 now feels like a previous epoch in which one could sensibly assume people would get steadily more prosperous, and that no crackpot religion would take power to push the birthrate back up again.

Anyhow, I wanted to pay tribute to Rosling's amazing facility with statistics and his belief that when used properly they reveal truths that can help us survive, and so I've devised a thought-experiment that might have appealed to his impish Scandinavian sense of humour. It goes something like this.

Create a 3D coordinate system with three orthogonal axes labelled Knowledge, Fame and Wealth. Define these three quantities simplistically but pragmatically by devising functions to extract them from existing available databases. For example you might create a Knowledge function that compiles each person's years of primary, secondary and possibly tertiary education; Fame might be derived from a person's extended family size, to which add Google hits on their name, Facebook and Twitter friends, and for a few add professional data like number of TV or movie performances, books published, sporting successes and so on; Wealth would have to come from government tax databases, bank records (perhaps supplemented from the Panama Papers) and similar. Compile these three parameters for every person in the world, then plot them all into your 3D space.
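
A toy version of the plot is easy enough to mock up with made-up numbers (this is just matplotlib and a random-number generator; gathering the real data is another matter):

  import numpy as np
  import matplotlib.pyplot as plt

  # Invented data: a dense blob of people near the origin plus a few long
  # protuberances along each axis.
  rng = np.random.default_rng(42)
  blob = rng.exponential(scale=1.0, size=(5000, 3))             # the vast majority
  spikes = []
  for axis in range(3):                                         # academics, celebrities, plutocrats
      group = rng.exponential(scale=1.0, size=(60, 3))
      group[:, axis] += rng.exponential(scale=20.0, size=60)    # a long spike along one axis
      spikes.append(group)
  people = np.vstack([blob] + spikes)

  ax = plt.figure().add_subplot(projection="3d")
  ax.scatter(people[:, 0], people[:, 1], people[:, 2], s=2)
  ax.set_xlabel("Knowledge"); ax.set_ylabel("Fame"); ax.set_zlabel("Wealth")
  plt.show()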

You'll protest that this is impossible and I'll agree, but will then point out that a) it's only a thought-experiment and b) during the 2016 US election certain Big Data firms like Cambridge Analytica claimed to have done something not too far off for most of the US electorate. It's nowhere near so far-fetched as it was even two years ago.

What you'd now be looking at is a solid of roughly spherical proportions close to the origin, containing the vast majority of the world population, with numerous spiky protuberances that contain all the world's academics, celebrities and plutocrats: a sort of world hedgehog. The length and volume of these protuberances would be a measure of the inequalities along all three axes. Now let's get more implausible still, by updating this chart on a yearly basis and animating it in Rosling/Musion style, so that it throbs and twitches, grows and shrinks in various directions.

If you could project the data back into the medium-distant past you'd see the effect of various political programs and social movements: following World War Two the whole sphere would expand along at least the Wealth and Knowledge axes, up until the late 1970s when Wealth motion might cease, and may even go into reverse. From that point onwards you'd see some swelling along the Fame axis as the internet gives more people their Warholian 15 minutes, but the spikes along Fame and Wealth directions would grow enormously longer and far thinner as Wealth becomes far more concentrated. As for Knowledge, who knows: are we really getting smarter or dumber? It's easy to jump to conclusions here. Literacy is still probably increasing through much of the developing world, except where religious extremists obstruct it, while university attendance has continued to spread in the developed world, but may soon go into reverse due to massively increased costs - and there are furious arguments about quality and maintenance of standards. I do wish I had this experiment running and could see the "real" picture. 

I'm sure you know that cynical modern proverb "The early bird gets the worm, but the second mouse gets the cheese". Well this proverb, like my thought experiment, illustrates the difference between natural and social competition. In human societies the playing field is anything but flat, in fact it's a spiky surface, something like a naval mine or huge sea urchin.
