Sunday 13 December 2015

LOSING THE PLOT?

Dick Pountain/Idealog 252/27 June 2015 16:27

Eagle-eyed readers may have noticed that I haven't mentioned my Nexus 7 tablet in recent months. That's because, until a couple of days ago, it languished in a drawer, fatally wounded by the Android 5.0 Lollipop update. (Google should have broken with its confectionery-oriented naming convention and called it "GutShot".) Lollipop rendered it so slow as to be unusable - five minutes or more to display the home screen - and even after scores of reboots, cache clearances and a factory reset it remained not just slow but apparently battery-dead too, taking a day to recharge and barely an hour to discharge again.

I did investigate downgrading back to 4.4 KitKat, but the procedures involved are absolutely grisly, requiring not just rooting but downloading huge ISO image files via a PC, with the ever-present chance of a failure that bricks the tablet completely: all totally unacceptable for a consumer-oriented device. (It did set me wondering how the Sale of Goods Act might apply to destructive OTA upgrades that aren't reversible by normal human beings...) Instead I went to my local PC World and picked up an Asus Memo Pad 7 for £99, which I repopulated with all my apps and data within a morning, thanks to the brighter side of Google's cloud; it has worked a treat ever since, and has a front camera and card slot too. Then last week I discovered that Android 5.1.1 was now available for the Nexus and, with nothing to lose, installed it. A mere six months after its assassination my Nexus came back to life, faster and slicker than the Asus, with its battery miraculously resurrected and lasting longer than it did originally.

There has to be a moral to this tale somewhere, but I'll be damned if I can identify it. Google's testing of 5.0 was clearly inadequate, and its lethargy in keeping us victims informed and releasing a fix not far short of criminal. But stuff like this happens on the IT battlefield all the time. A bigger issue is that it destroys confidence in the whole Over-The-Air update model, which I'd come to see as the way forward. If Google (or Apple, or Microsoft) wishes to mess directly with my machine, then at the very least they'll need to provide a simple, fool-proof mechanism to unwind any damage done. But that leads on to another, deeper issue: it feels to me as though all these new-generation, cloud-oriented firms are approaching some sort of crisis of manageability. The latest phones and tablets are marvels of hardware engineering, with their cameras and motion sensors and GPS and NFC and the rest, but all these services have to be driven from and integrated into operating system kernels that date back to the 1980s, using programming languages that fall some way short of state-of-the-art. The result is a spectacular cock-up like Lollipop, or those minor memory leaks that cause your iPad to gradually clag up until you have to reboot it.

It is of course inconceivable to start from scratch at this point in history, but I was reminded last week of what might have been when I exchanged emails, after twenty years, with Cuno Pfister, a Swiss software engineer I knew back in Byte days who used to work on Oberon/F with Niklaus Wirth in Zurich. Oberon was Wirth's successor to Modula-2, the culmination of his programming vision, and Oberon/F was a cross-platform, object-oriented framework with the language compiler at its heart, complete with garbage collection to combat memory leakage, preconditions to assist debugging, and support for a Model-View-Controller architecture. Its basic philosophy was that an operating system should become a software layer that "imports hardware and exports applications". New hardware drivers and applications were written as insulated modules, usually by extending some existing module, and they clicked into place like Lego bricks. Strong modularity and strong typing enabled 90% of errors to be caught at compile time, while garbage collection and preconditions simplified debugging the rest. It was precisely the sort of system we need to program today's tablets, but of course it could make no headway at all against the sheer inertia of Unix and C++.

What I miss most about that concept is having the programming language compiler built right into the OS. I still occasionally get the urge to write little programs, but all the tools I have are either massive overkill like Visual Studio, or command-line austerity like Ruby, and the APIs you have to learn are hideous too. I did recently discover a quite usable Android JavaScript tool called DroidScript, and the first thing I wrote in it, as is my historical habit, was a button that when pressed says "bollox"...  
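
For the curious, the entire source of that masterpiece runs to only a few lines. This is roughly it, reconstructed from memory and from DroidScript's stock "Hello World" sample, so treat the details as approximate:

    // Called by DroidScript when the app starts
    function OnStart()
    {
        var lay = app.CreateLayout( "Linear", "VCenter,FillXY" );  // full-screen layout
        var btn = app.CreateButton( "Press me", 0.3, 0.1 );        // width and height as screen fractions
        btn.SetOnTouch( btn_OnTouch );                             // wire up the touch handler
        lay.AddChild( btn );
        app.AddLayout( lay );
    }

    // Respond to a press in the time-honoured fashion
    function btn_OnTouch()
    {
        app.ShowPopup( "bollox" );
    }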

Monday 16 November 2015

BITS STILL AIN'T ATOMS

Dick Pountain/Idealog 251/07 June 2015 13:48

I'd started to write that I'm as fond of gadgets as the next man, but in truth I'm only as fond as the one after the one after him (which is still fairly fond). For example I get enormous pleasure from my recently-acquired Zoom G1on guitar effects pedal, frightening the neighbours with my PhaseFunk twangs. However I've resisted the hottest of today's gadgets, the 3D printer, with relative ease. Partly it's because I have no pressing need for one: being neither a vendor of cornflakes nor a devotee of fantasy games or toy soldiers I just don't need that many small plastic objects. I can see their utility for making spare parts for veteran mechanical devices, but I don't do that either. What deters me more though is the quasi-religious atmosphere that has enveloped 3D printing, as typified by those reverential terms "making" and "maker". People desperately want to bridge the gap between digital representation and real world, between CGI fantasy and life, and they've decided 3D printing is a step on the way, but if so it's a tiny step toward a very short bridge that ends in mid-air.

One problem is precisely that 3D printing tries to turn bits into atoms, but pictures don't contain the internal complexity of reality. There are serious applications of 3D printing: in the aerospace industry, for example, components can be printed in sintered metal more quickly, more cheaply and with greater geometric complexity than by traditional forging or casting techniques. Even so, two things remain true: such parts are typically homogeneous (all the same metal) and made in relatively small quantities, since 3D printing is slow - if you need 100,000 of something then 3D print one and make a mold from it for conventional casting. Printing things with internal structure of different materials is becoming possible, but remains topologically constrained to monolithic structures.

That's the second problem, that 3D printing encourages thinking about objects as monolithic rather than modular. Modularity is a profound property of the world, in which almost every real object is composed from smaller independent units. In my Penguin Dictionary of Computing I said: "modules must be independent so that they can be constructed separately, and more simply than the whole. For instance it is much easier to make a brick than a house, and many different kinds of house can be made from standard bricks, but this would cease to be true if the bricks depended upon one another like the pieces of a jigsaw puzzle." The basic module in 3D printing is a one-bit blob firmly attached to the growing object.

I recently watched a YouTube video about a project to 3D print mud houses for developing countries, and it was undeniably fascinating to watch the print head deposit mud (slowly) in complex curves like a wasp building its nest. But it struck me that, given the computing power attached to that printer, it would be faster to design a complex-curved brick mold, print some and then fill them with mud and assemble the houses manually.

The ultimate example of modularity, as I never tire of saying, is the living cell, which has a property that's completely missing from all man-made systems: every single cell contains not only blueprints and stored procedures for building the whole organism, but also the complete mechanism for reproducing itself. This mind-boggling degree of modularity is what permitted evolution to operate, by accidentally modifying the blueprints, and it has led to the enormous diversity of living beings. No artificial "maker" system can possibly approach this status so long as fabrication remains homogeneous and monolithic, and once you do introduce heterogeneous materials and internal structure you'll start to confront insuperable bandwidth barriers, as an exponentially-exploding amount of information must be introduced from outside the system rather than being stored locally. A machine that can make a copy of itself seems to really impress the maker community, but you just end up with a copy of that machine. A machine that copies itself, then makes an aeroplane, a bulldozer or a coffee machine out of those copies, is some way further down the road.

I was led to these thoughts recently while watching Alex Garland's excellent movie Ex Machina. In its marvellous denouement the beautiful robot girl Ava kills her deeply unpleasant maker and escapes into the outside world to start a new, independent life, but first she has to replace her arm, damaged in the final struggle, with a spare one. Being self-repairing at that level of granularity is feeble by biological standards, and as she stood beaming at a busy city intersection it struck me that such spare parts would be in short supply at the local hospital...

STRICT DISCIPLINARIAN

Dick Pountain/Idealog 250/05 May 2015 11:23

After photography my main antidote to computer-trauma is playing the guitar. Recently I saw Stefan Grossman play live for the first time at London's King's Place, though I've been learning ragtime picking from his books for the last 30 years. He played his acoustic Martin HJ-38 through a simple PA mike, and played it beautifully. Another idol of mine is Bill Frisell, who could hardly be more different in that he employs the whole gamut of electronic effects, on material from free jazz, through bluegrass to surf-rock. Dazzled by his sound I just purchased a Zoom G1on effects pedal from Amazon, and am currently immersed in learning how to deploy its 100 amazing effects.

The theme I'm driving at here is the relationship between skill, discipline and computer assistance. There will always of course be neo-Luddites who see the computer as the devil's work that destroys all skills, up against pseudo-modernists who believe that applying a computer to any banal material will make it into art. Computers are labour-savers: they can be programmed to relieve humans of certain repetitive tasks and thereby reduce their workload. But what happens when that repetitive task is practising to acquire a skill like painting or playing a musical instrument?

The synth is a good example. When I was a kid, learning to play the piano took years, via a sequence of staged certificates, but now you can buy a keyboard that lets you play complex chords and sequences after merely perusing the manual. Similarly, if you can't sing in tune a not-exactly-cheap Auto-Tune box will fudge it for you. Such innovations have transformed popular music, and broadened access to performing it, over recent decades. Does that make it all rubbish? Not really, it's only around 80% rubbish, like every other artform. The 20% that isn't rubbish is made by people who still insist on discovering all the possibilities and extending their depth, whether that's in jazz, hiphop, r&b, dance or whatever.

Similar conflicts are visible with regard to computer programming itself. I've always maintained that truly *great* programming is an art, structurally not that unlike musical composition, but the vast majority of the world's software can't be produced by great programmers. One of my programming heroes, Prof Tony Hoare, has spent much of his career advocating that programming should become a chartered profession, like accountancy, in the interests of public safety since so much software is now mission-critical. What we got instead is the "coding" movement which encourages absolutely everybody to start writing apps using web-based frameworks: my favourite Guardian headline last month was "Supermodels join drive for women to embrace coding". Of course it's a fine idea to improve everyone's understanding of computers and help them make their own software, but such a populist approach doesn't teach the really difficult disciplines involved in creating safe software: it's more like assembling Ikea furniture, and if that table-leg has an internal flaw your table's going to fall over.

Most important of all though, there's a political-economic aspect to all this. Throughout most of history, up until the last century, spending years acquiring a skill like blacksmithing, barbering, medicine, singing or portrait painting might lead to some sort of a living income, since people without that skill would pay you to perform it for them. Computerised deskilling now threatens that income stream in many different fields. Just to judge from my own friends, the remuneration of graphic designers, illustrators, photographers and animators has taken a terrible battering in recent years, thanks to digital devices that opened up their fields and flooded them with mostly mediocre, free content. The argument between some musicians and Spotify revolves around a related issue: not free content, but the way massively simplified distribution reduces the rates paid.

We end up crashing into a profound contradiction in the utilitarian philosophy that underlies all our rich Western consumer societies, which profess to seek the greatest good for the greatest number: does giving more and more people ever cheaper, even free, artefacts trump the requirement to pay those who produce such artefacts a decent living? I think any sensible solution probably revolves around that word "decent": what exactly constitutes a decent living, and who or what decides it? Those rock stars who rail against Spotify aren't sore because their children are starving, but because of some diminution in what most would regard as plutocratic mega-incomes. Some people will suggest that it's market forces that sort out such problems (and of course that's exactly what Spotify is doing). I've no idea what Stefan Grossman or Bill Frisell earn per annum, but I don't begrudge them a single dollar of it, and I doubt that I'm posing much of a threat to either of them (yet).

Wednesday 16 September 2015

MENTAL GYMNASTICS

Dick Pountain/Idealog 249/09 April 2015 21:04

Recently the medical profession has discovered that stimulating our brains with difficult puzzles and problems - mental exercise, in other words - has a beneficial effect on health. Such exercises can't, as some have suggested, actually cure dementia, but there's evidence they may delay its onset. Just a few years ago one of the big electronics vendors advertised its hand-held games console by showing ecstatic grey-beards using it to play Connect Four on the sofa with their grandsprogs. As a senior citizen myself I must feign interest in such matters, but in truth I've never really worried that I'm not getting enough mental exercise, because the sheer cack-handedness that prevails in the IT business supplies all the exercise I can use, for free, every single month.

Take for example the impact of new security measures on the attempt to keep a website working. Plagued by hacks, leaks, LulzSec, Heartbleed and NSA surveillance, every online vendor is tightening security, but they're not all that good at notifying you how. I've had a personal website since 1998, and more recently I've also been running three blogs while maintaining an archive of the works of a dear friend who died a couple of years ago. I've never believed in spending too much money on these ventures, so I hosted my very first attempt at a website on Geocities, and built it using a free copy of NetObjects Fusion from a PC Pro cover disk. Mine was thus one of the 38,000,000 sites orphaned when Yahoo closed Geocities in 2009, so I moved to Google Sites and built a new one from scratch using Google's online tools. Around this time I also shelled out money to buy my own domain name, dickpountain.co.uk, from the US-based registrar Freeparking.

My low-budget set-up had worked perfectly satisfactorily, without hiccup, for six years - that is, until January of this year, when I suddenly found that www.dickpountain.co.uk no longer accessed my site. To be more exact, it said it had accessed it but no pages actually appeared. I checked that the site was working via its direct Google Sites address of https://sites.google.com/site/dickpountainspages/ and it was, so perhaps redirection from my domain had stopped working properly? To check that, I went to log into my account at Freeparking's UK site, only to find that entering my credentials merely evoked the message "A secure connection cannot be established because this site uses an unsupported protocol. Error code: ERR_SSL_VERSION_OR_CIPHER_MISMATCH?"

Locked out of my account I couldn't reach Freeparking support, so I mailed RWC columnist Paul Ockenden, who immediately asked whether I was using Chrome. Yes I was. Did I know that nowadays it disables SSL3 by default? No I didn't, thank you Paul. With SSL3 enabled I managed to get into my account, only to find nothing had changed: it was still set to redirect dickpountain.co.uk to that Google address. About then I received another email from Paul saying that a source view on http://www.dickpountain.co.uk/ showed the frameset was there but not visible: Google was suddenly refusing to display pages inside an external frameset, so the problem didn't lie with Freeparking at all.

An evening plodding through the forums revealed that Google too has upped security, and now you have to verify ownership of your site (even one that's been running for six years already). Their help page explaining verification offers four different methods: add a meta tag to your home page to prove you have access to the source files; add the Google Analytics code you use to track your site; upload an HTML file to your server; or verify via your domain name provider by adding a new DNS record. None of the first three worked because my site was built in Google's own tools, which strip out any added meta tags and won't allow uploading raw HTML. I needed to make a trek into the belly of the beast, into Mordor, into... DNS. Now DNS scares me the way the phone company scared Lenny Bruce ("mess with it and you'll wind up using two dixie cups and a string"). Log into the Freeparking site, go to the ominously-named "Original DNS Manager Interface (advanced users)" and edit a CNAME record to point, as instructed by Google Help, at ghs.google.com. Nothing happens. Try again, five times, before it finally sticks. Half an hour later www.dickpountain.co.uk works again!
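
For anyone facing the same trek, the record in question boils down to a single line in the domain's zone file, something like:

    www    IN    CNAME    ghs.google.com.

which simply says "answer requests for www at this domain from Google's hosting servers". Every registrar's DNS manager dresses that line up in its own forms and boxes, so take it as an illustration rather than gospel.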

You might expect me to be annoyed by such a nerve-wracking and unnecessary experience, but you must be kidding: I was jubilant. I still have it! The buggers didn't grind me down! It's like climbing Everest while checkmating Kasparov! Macho computing, bring it on, mental whoops and high-fives. It did wear off after a couple of days, but I still smirk a little every time I log onto my site now...

Wednesday 12 August 2015

SYNCING FEELING

Dick Pountain/Idealog 248 /05 March 2015 15:29

Astute readers may have noticed that I'm deeply interested in (a nicer way of saying obsessed by) note-taking. This is no coincidence, because all my main occupations - editing the Real World section, writing this column, writing and reviewing books - involve reading the work of others and gathering together important points. Anything that reduces the amount of re-typing required is a bit like the difference between weeding a field of beans using a tea-spoon and using a tractor. Just a few years ago my desk groaned under thick hard-back books that bristled like porcupines with small yellow Post-It notes to mark those pages I needed quotes from, or had pencilled margin notes on.

Making notes on a tablet that could sync with my laptop removed the need for those yellow flags, but still left me the job of re-typing the quotes into my own text. (Over the years I'd tried several of those pen-like or roller-like handheld scanners, but none was effective enough to be worth the hassle.) No, the logical final step is for the source material I'm reading to be online too, but it's taken until now to arrive there. For the very first time I'm reviewing a book in its Kindle rather than paper edition, which means I can search its full text for relevant phrases and cut-and-paste all the resulting notes and quotes. In theory, that is, because it turns out not to be quite so simple.

Amazon's Kindle reader software certainly enables you to place bookmarks, highlight passages and make notes, but none of these functions is without its quirks, and the way they work varies between versions. I like to use an actual hardware Kindle when outdoors because it's light, readable in sunlight and has great battery life. Indoors I prefer to read and note-take on my Android tablet, but I write the actual review on my Windows laptop, and all these platforms run different versions of the reader.

The first quirk is that the granularity of Kindle bookmarks is too coarse, working only to whole-page boundaries. When I view Notes & Marks the short extract presented is only the top few lines of that *page*, though my interest might lie further down. Highlights are more useful because then the extract is from the start of the highlighted area, not the whole page. I can attach a note to any single word on a page, but in Notes & Marks only the note's own text appears, so I end up with a cryptic list like "yes", "no", "good", "really?" with no idea what each refers to until I click it and go to that page, which becomes dizzying after a while. The best compromise is to highlight a sentence or paragraph and then attach a note to its first word.

Next quirk: notes, highlights and bookmarks should sync automatically between Kindle, tablet and desktop readers, but notes made on my tablet weren't showing up on the laptop. This matters because I can only cut and paste highlighted quotes from the laptop version, as the Kindle and tablet versions have no copy function. Solving this required a stiff yomp through the forums, where sure enough I found an answer - you have to sync manually by hitting that little curly-arrows icon. Still didn't work. More forums, and the real answer: you have to hit *not* the sync icon inside the book in question, but the one on the home screen with all books closed. Doh! But it does work.

The last quirk is that you can't run multiple instances of the Kindle reader on the same device. It so happens I have another book on my Kindle that's relevant to this review and I'd like to quote from it: I have to go out into the library, open the other book, find the quote and cut-and-paste (but only on the laptop version). It would be nice to keep two books open in two instances of the Kindle reader on the same machine. I really shouldn't grouse too much though, because merely being able to search, make notes and cut-and-paste them has hugely reduced the amount of tedious re-typing involved in the reviewing process, and I also need to remember that Amazon is obliged by copyright and fair usage to restrict some of these functions (a copyright notice gets placed on every quote I paste, which I delete).

Nevertheless I do believe that Amazon is missing a trick here, and that just making a few fairly minor tweaks would establish a really effective collaborative network for students and researchers to share notes and quotes, which wouldn't need to carry advertising since the product has already been paid for. That would of course grant Amazon the sort of dominance that the US courts have already refused to Google, but let's not go there...
 

SLEEPY HOLO?

Dick Pountain/Idealog 247/08 February 2015 12:20

With the TV news full of crashing airliners, beheadings and artillery bombardments it's hardly surprising that a lot of people wish to escape into a virtual reality that's under their own control, which is becoming ever more possible thanks to recent technology. As Paul Ockenden explained in a recent column, the miniaturised components required for smartphones are precisely those whose lack has been holding back virtual reality for the last couple of decades: displays, graphics processors, high-bandwidth comms and batteries. The embarrassing withdrawal of Google's Glass project (no-one wanted to be a glasshole) suggests that gaming remains the principal application for this technology, and Microsoft's HoloLens goggles, announced at the Windows 10 launch, merely confirm that Redmond is thinking the same way.

The HoloLens employs unprecedented amounts of mobile GPU power to mix 3D holographic images into your normal field of view, creating an augmented, rather than virtual, reality effect: you see what's really there combined seamlessly with whatever someone wants to insert. It's an exciting development with many implications for future UI design, but it might create some unprecedented problems too, and that's because we already live in a naturally augmented reality. You might think that everything you're seeing right this second is what's "really" there, but in fact much of the peripheral stuff outside your central zone of attention is a semi-static reconstruction of what was there a few seconds ago: like yesterday's TV sets, your eyes lack sufficient bandwidth to live stream HD across their whole field of view. That's because poor old Evolution had no access to silicon, gallium arsenide or metallic conductors and had to make do with warm salty water.

But that's the least of it, because *everything* you see is actually a reconstruction and none of it is directly "live". Your visual cortex reads data from the rods and cones of your retinas, filters this data for light, shade, edges and other features and uses these to identify separate objects. The objects it finds get inserted into a constantly-updated model of the world stored in your brain, and that model is what you're seeing as "really" there, not the raw sense data. Everything is already a reconstruction, which is why we're occasionally prone to see things that aren't there, to hallucinations and optical illusions. (If you're interested, all this stuff is brilliantly explained in Chris Frith's "Making Up The Mind", Blackwell 2007 ).

There's even more. These objects that get accepted into the world model aren't neutral, but like all your memories get a tag indicating your emotional state, in the strict biochemical sense of hormone and neurotransmitter levels, when they were added. This world map in your brain is value-ridden, full of nicer and nastier places and things. You maintain a similar brain model of your own body and its functions, and the US neuroscientist Antonio Damasio believes the mystery of consciousness will one day be solved in the way these twin mappings get superimposed and differentially analysed in the brain. (We're still a long, long way from such a solution and the hard road toward it might conceivably just stop, or worse still become a Möbius strip that circles for ever).

Neuroscientists aren't the only people who understand this stuff. Painters, sculptors and movie makers, at least the good ones, know perfectly well how visual representations and emotions are connected: some spaces like dungeons are just creepy, some faces are admirable, others irritating. A horror movie - let's say Sleepy Hollow, to validate the weak pun in my column title - is already a primitive form of augmented reality. Most of what appears on the screen depicts real stuff like trees, sky, people, furniture, buildings, and only a few parts are unnatural CGI creations, but since all are only two-dimensional the brain has no trouble distinguishing them from "real" objects. That will no longer be the case with the new holographic 3D augmented reality systems.

The cruder kinds of early VR system I used to write about years ago - those ones where you staggered around in circles wearing a coal-scuttle on your head - suffered noticeable problems with motion-sickness, because the entirely artificial and laggardly background scenery violated the physics of people's inner world models and upset their inner-ear balance. It seems likely that augmented reality systems of the calibre of HoloLens may escape such problems, being utterly physically convincing because their backdrop is reality itself. But what completely unknown disorders might AR provoke? Could AR objects stray out of the perceptual model into memory and become permanent residents of the psyche, like ghosts that people will in effect be haunted by? Will we see epidemics of PLSD (Post-Ludic Stress Disorder)? And as for AR porn, the potential for embarrassing encounters doesn't bear thinking about...

Wednesday 1 July 2015

TRIX WITH PIX

Dick Pountain/Idealog 246/06 January 2015 14:05

My major digital pastime has for several years now been photography rather than programming: reading my profile reminds me I joined Flickr eight years ago and have now posted 1500+ pictures there (www.flickr.com/photos/dick_pountain/). The digital imaging market has been through a technical revolution during those years, and now faces what tech gurus love to call "disruption" thanks to the mobile phone. A whole generation now prefers their mobile to a proper camera, and phones' performance has improved extraordinarily by incorporating sensors and image processors from real camera manufacturers like Sony. Camera makers are striking back with gorgeous-looking retro designs that recall the golden age of the Leica, fitted with huge sensors, a fixed "prime" lens and astounding image quality - and premium £1000+ prices aimed at separating "real photographers" from selfie-snappers.

As for me, I've resisted both these trends. I started out posting mostly travel pics, street photos and landscapes, over-sharpening and saturation-boosting them to match the approved Flickr aesthetic, but in recent years I've become more and more interested in post-processing photos to make them more like paintings (abstract or otherwise). There are plenty of software tools available nowadays to spice up photos - some, like Google's Nik Collection of plug-in filters for Photoshop and Lightroom, are very good indeed - but I'm less interested in buffing up my pics than in dismantling and reconstructing them completely. And my chosen tool is therefore, er, Photoshop Elements version 5. This ancient version lacks all the smart cut-out and similar features of later versions, and many abilities of full Photoshop, but it has all I want, which is basically layers, blend modes and a handful of filters.

My modus operandi is as eccentric as my choice of platform. I perform long sequences of operations on each picture, duplicating and saving layers, tinting, filtering and blending them in different modes, but rather than write down this sequence so I can repeat it I deliberately do *not* do that. I merely watch the continually changing image until I like it well enough to stop. I can never repeat exactly that effect again, which I've convinced myself makes it "art" rather than mere processing, just as an oil painting can never be exactly repeated. Doing this so many times has given me a fairly deep grasp of how pictures are made up, about manipulating different levels of detail and tonality. One of my favourite filters is High Pass, which can separate out different levels of detail so that you can enhance or remove just that level. Another favourite trick is mixing some percentage of an outrageously processed image back into the original to temper the effect and make it more subtle.
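
For anyone who wants the arithmetic behind those two tricks, here's a rough one-dimensional sketch in JavaScript - not what Photoshop actually runs, just the underlying idea: a high-pass layer is the original minus a blurred copy of itself, and "mixing back" is nothing more than a weighted average of original and processed pixels:

    // High Pass, in essence: detail = original - blurred
    function boxBlur(pixels, radius) {
      return pixels.map(function (_, i) {
        var lo = Math.max(0, i - radius);
        var hi = Math.min(pixels.length - 1, i + radius);
        var sum = 0;
        for (var j = lo; j <= hi; j++) sum += pixels[j];
        return sum / (hi - lo + 1);              // local average = the low frequencies
      });
    }

    function highPass(pixels, radius) {
      var blurred = boxBlur(pixels, radius);
      return pixels.map(function (p, i) { return p - blurred[i]; });  // what's left is fine detail
    }

    // Temper an outrageously processed image by blending, say, 30% of it
    // back into the original: out = 70% original + 30% processed
    function blend(original, processed, opacity) {
      return original.map(function (p, i) {
        return (1 - opacity) * p + opacity * processed[i];
      });
    }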

In view of all my coal-face experience of the internal makeup of digital pictures, I was interested to hear about a joint project by GCHQ and the National Crime Agency (NCA), announced in December 2014 by PM Cameron, to deploy new recognition algorithms for identifying online pictures of child abuse, to aid in their prosecution. The press release said these algorithms are "hash based": that is, they process the bitstream of a digital picture to reduce it to a single number that becomes a "fingerprint" of that picture. Such fingerprinting is essential for evidence to be acceptable legally: it's necessary to prove that a picture confiscated from some offender is the same as one obtained from someone else, and obviously filenames are of no use as they're only loosely attached properties that can be easily changed.

The US website Federal Evidence Review suggests an algorithm called SHA-1 (Secure Hash Algorithm version 1) is in use for this purpose, but it appears to me that this algorithm is designed for use on texts, gun serial numbers and other alphanumeric data sets, and I can hardly believe it would generate usable hashes from bitmapped images whose contrast, saturation, sharpness and so on may have been altered - either deliberately during enhancement, or merely by accident through repeated sloppy copying of JPEGs. Pictures that are perceptually similar might have bitstreams quite different enough to change the hash.
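
Here's a toy illustration of the worry, in Node-flavoured JavaScript (and it is only an illustration - I've no idea what GCHQ's algorithms actually look like): nudge one pixel of an image's bitstream and a cryptographic hash like SHA-1 changes completely, while a crude perceptual "average hash" of the same pixels doesn't notice at all:

    var crypto = require('crypto');

    // Cryptographic fingerprint: any change to the bitstream gives a new hash
    function sha1(bytes) {
      return crypto.createHash('sha1').update(Buffer.from(bytes)).digest('hex');
    }

    // Crude perceptual hash of an 8x8 greyscale thumbnail: one bit per pixel,
    // recording whether it's brighter than the average
    function averageHash(pixels) {
      var mean = pixels.reduce(function (a, b) { return a + b; }, 0) / pixels.length;
      return pixels.map(function (p) { return p > mean ? 1 : 0; }).join('');
    }

    var pixels  = Array.from({ length: 64 }, function (_, i) { return i * 4; });
    var tweaked = pixels.slice();
    tweaked[10] += 3;                                           // an imperceptible brightness change

    console.log(sha1(pixels) === sha1(tweaked));                // false - fingerprints no longer match
    console.log(averageHash(pixels) === averageHash(tweaked));  // true - still "the same" picture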

I'd guess that content analysis, not merely hashing the bits, will be needed to prove the identity of two versions of any bitmapped image. Face recognition is well advanced nowadays (recent compact cameras can even distinguish smiles) and so is dissection of bitmaps into separate objects in Photoshop. It would remain challenging to create a unique hash from the collection of persons, furniture and stuff isolated from each picture, and oddly enough it's in fine art rather than criminology that the required expertise is most advanced. Iconclass is a hierarchical notation developed by Dutch painting scholars for cataloguing unique configurations of picture elements, and what's needed is something similar for far less salubrious subject matter.  

THE EMOTION GAME

Dick Pountain/Idealog 245/05 December 2014 11:02

In Viewpoints last month Nicole Kobie fairly skewered ("Good at PCs? It doesn't mean you're bad with people") Hollywood's sloppy assumption that Alan Turing must have been autistic because he was a mathematical genius who didn't like girls. I almost didn't go to see "The Imitation Game" for a different reason - the sensational trailer that seemed to be trying to recruit Turing into the James Bond franchise - but I forced myself and was pleasantly surprised that, although it took some liberties with the facts, it did grippingly convey the significance of Bletchley Park to the war effort. The movie's major "economy with the truth" lay in excluding GPO engineer Tommy Flowers, who actually built the kit and wrestled with those wiring looms that the film portrays Turing as tackling alone. (It also lumped together two generations of hardware, the "Bombes" and Colossus, and barely even attempted to explain Turing's seminal paper on computable numbers, but those I excuse as they'd have hugely slowed the pace.)

The film doesn't mention Asperger Syndrome - just as well, since it was unknown in Turing's lifetime and we now have to call it autistic spectrum disorder anyway - but as Nicole pointed out, Cumberbatch's depiction of Turing was clearly based on modern notions about the stunting of emotional expression and social interaction that comprises that disorder. The plot depends heavily upon Turing overcoming the dislike his coldness provokes in the other team members, assisted of course by the token emotionally-literate woman played by Keira Knightley, and the tragic ending shows Turing being chemically castrated by injections of female hormone. That combination of emotions with hormones set me off reading between the lines of The Imitation Game's script for a deeper meaning which the writer may or may not have intended.

The film is named after a test of machine intelligence that Turing invented, in which the machine must try to imitate human conversation sufficiently well to fool another human being, on the assumption that language is the highest attribute of human reason. However recent research in Affective Neuroscience has revealed the astonishing extent to which reason and emotion are totally entangled in the human mind. The weakness of the whole AI project, of which Turing was a pioneer, lies in failing to recognise this, in its continuing attachment to 18th-century notions of rationalism. Those parts of our brain that manipulate language and symbols are far from being in ultimate control, and are more like our mind's display than its CPU. I am, therefore I think, some of the time. US neuroscientist Jaak Panksepp has uncovered a collection of separate emotional operating systems in the brain's limbic system, each employing a different set of neurotransmitters and hormones. These monitor and modulate all our sensory inputs and behaviour, the most familiar examples being sexual arousal (testosterone and others), fight/flight (adrenaline) and maternal bonding (oxytocin), but there are at least four more and counting. What's more it's now clear that motivation itself is under the control of the dopamine reward system: we can't do *anything* without it, and its failure leads to Parkinsonism and worse. Now add to this the findings of Antonio Damasio, who claims all our memories get tagged with the emotional state that prevailed at the time they were recorded, and that our reasoning abilities employ these tags as weightings when making all decisions.

These lines of study suggest two things: firstly, all rationalist AI is doomed to fail because the meaning of human discourse is permeated through and through with emotion (if you think about it, that's why we had to invent computer languages, to exclude such content); and secondly, AI-based robots will never become wholly convincing until they mimic not only our symbolic reasoning system but also our hormonally-based emotional systems. Sci-fi authors have known this for ever, hence their invention of biological androids like those in Blade Runner, with real bodies that mean they have something at stake - avoiding death, finding dinner and a mate (a bit like the IT Crowd). Stephen Hawking's recent grim warnings about AI dooming our species should be tempered by these considerations: however "smart" machines get at calculating, manipulating and moving, their actual *goals* still have to be set by humans, and it's those humans we need to worry about.

So, as well as a great deal of pleasure from its serious treatment of Turing, I took two big lessons away from The Imitation Game: machines will never be truly intelligent until they can feel as well as think (which would depend as much on advances in biology as on solid-state physics and software engineering); and it would be nice if the filmmakers were to start planning an "Imitation Game 2: The Tommy Flowers Story".

Wednesday 8 April 2015

TELE-ABSENCE

Dick Pountain/Idealog 244/06 November 2014 10:04

Hello. My name is Dick Pountain and I'm a Flickrholic. Instead of interacting normally with other human beings, I spend too many hours slumped at the computer, Photoshopping photographs I took earlier to make them look like mad paintings (www.flickr.com/photos/dick_pountain). Then I fritter away my remaining time probing the complete works of Bill Frisell, Brandt Brauer Frick and Bartok on that notorious online service Spotify (a villainous outfit which steals food from the mouths of Taylor Swift and Chris Martin). For a while I was a Multiple-Service Abuser, enslaved also to the hideous Facebook, but that addiction cured itself once the user experience deteriorated to the point where it turned into Aversion Therapy. Yes folks, it's official: the internet is bad for all of us. In South Korea you can get sent on a cure. Here the Guardian runs stories every other day about how it's driving all our young folk into mental illness: cyber-bullying, trolling, sexting, and ultra-hard-core violent porn. We all live in terror of having our identities stolen, our bank accounts drained, or our local sewage works switched into reverse gear by shadowy global hacker gangs.

I'm going to exit heavy-sarcasm mode now, because though all these threats do get magnified grotesquely by our circulation-mad media, there's more than a pinch of truth to them, and mocking does little to help. An anti-digital backlash is stirring from many different directions. In a recent interview Christopher Nolan - director of sci-fi blockbuster Interstellar - expressed his growing dissatisfaction with digital video. Obsessive about picture quality, he feels he can't guarantee it with digital output (the exact opposite of orthodox opinion): “This is why I prefer film to digital [..] It’s a physical object that you create, that you agree upon. The print that I have approved when I take it from here to New York and I put it on a different projector in New York, if it looks too blue, I know the projector has a problem with its mirror or its ball or whatever. Those kind of controls aren’t really possible in the digital realm.” Or consider Elon Musk, almost a God among technophiles, who's recently taken to warning about the danger that AI might spawn unstoppable destructive forces (he compared it to "summoning the demon"), and this from a man who invests in AI.  

What all these problems have in common is that they occur at the borderline between physical reality and its digital representation. My Flickr addiction is pretty harmless because it's just pictures (pace Chris Nolan's critique), while Musk's fears become real when AI systems act upon the real world, say by guiding a drone or a driverless car, or controlling some vast industrial plant. And the problem has two complementary aspects. Firstly, people continue to confuse the properties of digital representations with those of the things they depict. I can repaint my neighbour's Volkswagen in 5 minutes in Photoshop, but on his real car it would take several hours, a lot of mess, and he'd thump me for doing it without permission. Secondly, too much absorption in digital representations steals people's attention away from the real world. As I wander around Camden Town nowadays I'm struck by the universal body language of the lowered head peering into a smartphone - while walking, while sitting, while eating, even while talking to someone else.

If you want a name for this problem then "tele-absence" (the downside of telepresence) might do, and it's problematic because evolution, both physical and cultural, has equipped us to depend on the physical presence of other people in order to behave morally. The controller of a remote drone strike sleeps sounder at night than he would if he'd killed those same people face-to-face with an M4 carbine; the internet troll who threatens a celebrity with rape and murder wouldn't say it to her face. And "face" is the operative word here, as the Chinese have understood for several thousand years (and Mark Zuckerberg rediscovered more recently).

Maintaining "face" is crucial to our sense of self, and "loss of face" something we make great efforts to avoid. But we can't make face entirely by ourselves: it's largely bestowed onto us by other people, according to the conscious and unconscious rules of our particular society. Tele-absence robs us of most of the cues that the great theorist of social interaction Erving Goffman listed as "a multitude of words, gestures, acts and minor events". We've barely begun to understand the ways that's changing our behaviour, which is why criminalising trolling or abolishing online anonymity are unlikely to succeed. Safety lies in hanging out online with people of similar interests (Flickr for me), but at the cost of reinforcing an already scary tendency toward social fragmentation.

[Dick Pountain sometimes wishes his shaving mirror supported Photoshop's Pinch filter]

YOUR INPUT IS ALWAYS WELCOME

Dick Pountain/Idealog 243/10 October 2014 15:42

Last month I wrote here about my recent infatuation with voice input in Google Keep, and now this month Jon Honeyball's column discovers a web source for vintage and superior keyboards. For consumers of media content output may be the more interesting topic (my display is higher-res than yours, my sound is higher-fi than yours) but we at the coalface who have to produce content have a far deeper interest in input methods.

It was ever thus. During my first flirtations with the underground press in the 1970s I used to write my copy longhand with a Bic ballpoint pen and hand it straight to Caroline, our stoical typesetter. Upon elevation (?) to the IT biz on PCW I was firmly told by our late, lamented chief Felix Dennis that he wasn't having any editors who wrote longhand, and so he'd signed me up for a Sight & Sound course. That was perhaps the most surreal week of my life, huddled in a darkened room at a manual Imperial typewriter with blanked-out yellow keys (pressing Shift was like lifting a house-brick with your little finger) touch-typing endless streams of nonsense words. I emerged capable of 35 words per minute, then graduated immediately to a CP/M computer running Wordstar and thus bypassed the typewriter era altogether.

In those days computer keyboards were modelled on mainframe terminals, with deep shiny plastic keys with inlaid characters on their caps, and satisfying travel, resistance and click. They had few special keys besides Esc (which CP/M didn't recognise anyway). After that, keyboards slithered down two separate hills: in 1982 Clive Sinclair launched the Spectrum with its ghastly squashy keys, probably made by Wrigleys, which became the archetype for all cheap keyboards to the present day; then in 1983 IBM launched the PC XT, whose keys and layout largely persist on Windows computers today - Ctrl, Alt, function and arrow keys and the rest. Jon remembers the IBM AT keyboard fondly as something of a cast-iron bruiser, but I had one that made it look quite flimsy, the Keytronic 5151 (http://blog.modernmechanix.com/your-system-deserves-the-best/). This brute, the size of an ironing board and the weight of a small anvil, corrected certain dubious choices IBM had made by providing full-width shift keys, separate numeric and cursor keypads, and function keys along the top where they belong. I loved it, typed several books on it, and kept it until the PS/2 protocol made it redundant in the early 1990s.

It was around then that I suffered my one and only bout of RSI, brought on largely by the newfangled mouse in Windows 3. I fixed it using a properly adjustable typist's chair, wrist rests, and a remarkable German keyboard I found while covering the 1993 CeBIT show, Marquardt's Mini-Ergo (see http://deskthority.net/wiki/Marquardt_Mini-Ergo). It was the first commercial split-keypad design, with twin spacebars and a curious lozenge shape reminiscent of a stealth bomber or a stingray. Marvellous to type on, and I carried on using it until I gave up desktop PCs and bought my first ThinkPad (a definite step backwards input-wise). Since then it's been all downhill, at increasing speed. My various successive laptops have had shallower and shallower chiclet-style keys (for added slimness), with less and less feel and travel. My latest Lenovo Yoga compounds the offence by making the function keys require an Fn shift. And on every laptop I've had since that first ThinkPad, the key labels for Right Arrow and A have soon worn off, being merely painted on.

What to do? On-screen tablet keyboards, however large they may become, have little appeal, even though I've gotten pretty quick nowadays at Google's gesture/swipe typing. And I most definitely *won't* be going back to writing in longhand. I may have been one of the earliest Palm Pilot adopters, and I may indeed run Grafitti Pro on both my Android phone and tablet, but writing with a finger is tiring and those pens with squashy sponge tips are pretty horrible. But another, possibly eccentric, solution just occurred to me. It was while ambling through the seething online casbah that is Amazon's Cabling and Adapters section that I discovered, for £1.99, an AT-to-PS2 adapter, followed by a small black box that's a PS2-to-Bluetooth converter (for another £19). It struck me that these two gizmos put together should enable me to use either my Keytronic or Marquardt keyboards with all my current devices: phone, tablet and Yoga PC. How amusing it would look to deploy the Yoga in its "tent" configuration as a monitor. Best of all, this arrangement might provide me with plenty to do on cold, dark winter evenings, trying to get a bloody £ sign in place of the #, just like the good old days...






Tuesday 17 February 2015

NOTEWORTHY

Dick Pountain/Idealog 242 /14 September 2014 13:35

One of the inescapable facts of life is that memory worsens as you get older. 40 years ago I could wake in the night with an idea and still remember it next morning. 30 years ago I put a notepad and pencil by my bedside to record ideas. 20 years ago I started trying to use mobile computers to take notes. I last wrote here exactly two years ago about how tablet computing was helping my perpetual quest (I confessed that text files stored on DropBox worked better for me than Evernote and its many rivals). So why revisit the topic just now? Well, because I just dictated this paragraph into my Nexus 7 using Google Keep and that still feels like fun.

I've played with dictation software for years, right from the earliest days of Dragon Dictate, but always found it more trouble than it was worth and practically unusable. So why is Google Keep any different? Mainly because I was already using it as my principal note-taker, as it syncs between my Nexus and my Yoga laptop with great success. I actually prefer Keep's simplistic "pinboard" visual metaphor to complex products like OneNote and Evernote that offer hierarchical folder structures, and its crude colour-coding scheme is remarkably useful. So when one day Google announced that I could now talk into Keep, I tried it and it just worked, transcribing my mumblings with remarkable accuracy and speed. Voice only works on Android, not on Windows, and it doesn't support any fancy editing commands (but who needs them for note-taking?). Does that mean my 30-year quest is over and I've settled on one product? Er, actually no - I now have *three* rival note storage systems working at the same time, can't make a final choice between them, and find myself struggling to remember whether I saved that tamale recipe into Keep, OneNote or Pocket. Doh...

The thing is, there are notes and notes. When I get an idea for, say, a future Idealog column, that's only a few dozen words that fit neatly onto a Google Keep "card". I colour these green to spot them more easily, though like everything Google, Keep is search-based so just typing "Ide" brings up all Idealog notes. On the other hand long chunks, or whole articles, clipped from magazines and newspapers stretch Keep's card metaphor beyond its usefulness (and its integration of pictures is rudimentary). For such items OneNote's hierarchical navigation becomes useful to separate different categories of content. Then there are whole web pages I want to save for instant reference, like recipes, maps or confirmation pages from travel sites. In theory I *could* send these straight to OneNote, or even paste them into Keep, but Pocket is way more convenient than either and works equally well from inside Chrome on Nexus, Yoga and phone (Chrome's Send To OneNote app doesn't work properly on my Yoga).

The fundamental problem is perhaps insoluble. Capturing fleeting ideas requires the fastest possible access: no more than one click is tolerable. But to find that idea again later you need structure, and structure means navigation, and navigation means clicks... My current combination is far, far better than any I've tried before - popping up a new Keep note or saving a Pocket page at a click is pretty good - but once I accumulate sufficient data the question of where I stored a particular item *will* become critical. This wouldn't be such a big deal if either Android or Windows 8 searches could see inside these applications, but they can't. Neither tablet search nor the otherwise impressive Win8 search will find stuff inside either Keep or OneNote, which isn't that surprising given that both apps store their data in the cloud rather than locally, and in different clouds owned by different firms who hate each other's guts.

On top of that there's the problem of different abilities within the same app on the different platforms. I've said Keep voice only works on Android, not on Windows (voice input also works in Google Search on Android, and tries to on the Yoga but says it can't get permission to use the mike). OneNote on Android can record sound files but can't transcribe them, and though it syncs them with its Windows 8 app, the latter can't record and plays files clunkily in Windows Media Player. In short, it's a mess. Perhaps the paid-for, full version of OneNote is slicker, though I'm not greatly tempted to find out. Perhaps Google will soon enhance the abilities of Keep *without* rendering it slow and bloated. Perhaps there is a Big Rock Candy Mountain and somewhere among its lemonade fountains there's a software start-up that gets it about note-taking...

OOP TO SCRATCH

Dick Pountain/Idealog 240/05 August 2014 10:34

So, 20 years of this column now, and so far I haven't run out of ideas. Of course one good way to achieve that is to keep banging on about the *same* ideas, and I plead guilty to that with a broad grin. My very first Idealog column in 1994 was entitled "OOPS Upside Your Head" (that prehistoric disco reference will be lost on the youth of today) and it expressed my faith in the long-term potential of object-oriented programming, along with my disgust at the misuse of the term by marketeers and the maladroit/cynical implementations by leading vendors like Microsoft, and this 240th column will be about object-oriented programming too, largely because I recently encountered a curious little OOP product that has renewed that faith.

The product is called Scratch and is intended to teach programming to very young children. At first sight it looks like Lego, thus neatly linking the topics of two of my other recent columns. You build programs by dragging different coloured "blocks" into a window, where they will only fit together in certain ways decided by their shapes, thus removing at a stroke the possibility of making syntax errors (probably the biggest source of frustration for beginners). Some of these blocks are control structures, some are data objects and some are actions - and what powerful actions they are: a complete multimedia toolkit that lets you create animations with sound (warning: requires Flash) remarkably simply.

You'll not be too surprised to learn that Scratch was developed at MIT's Media Lab - started in 2003 by a team under Mitchel Resnick - and was originally implemented in a modern dialect of Smalltalk. The latest version 2 is written in Adobe's ActionScript. It's free to download from scratch.mit.edu and you use it online via your browser (though you can store the source for your projects locally too). Scratch is most certainly not suitable for professional development as it can only handle small, visually-oriented projects, but what grabbed me forcefully are the principles it demonstrates.

Scratch is based around a little-used style of object-orientation that employs cloning instead of class-based inheritance. Everything you build is an object called a "sprite" which contains all its own code. Every sprite is executable (it doesn't have to be visual) and one of the things it can do is spawn clones of itself at run-time. Clones behave like their parent, but have their own identity and all run concurrently: they're automatically destroyed and garbage-collected once execution ends. Scratch is also event-driven and supports events like keypresses, mouse clicks, sound-level thresholds, and message-passing between different sprites and their clones.
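
You can get a feel for that clone-based style without going near Scratch, because JavaScript's prototype mechanism works the same way - no classes, just objects cloned from other objects. A sketch (mine, not Scratch code):

    // A "parent sprite" is just an object; clones share its behaviour via the
    // prototype chain but carry their own state, much as Scratch clones do.
    var sprite = {
      x: 0,
      y: 0,
      step: function (dx, dy) { this.x += dx; this.y += dy; },
      clone: function () {
        var c = Object.create(this);   // behaves like its parent...
        c.x = this.x;                  // ...but owns its own position
        c.y = this.y;
        return c;
      }
    };

    var crab = sprite.clone();
    crab.step(10, 0);
    console.log(crab.x, sprite.x);     // 10 0 - the parent is untouched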

My first impression of Scratch was so Toy Town that I almost walked away, but then the old Byte training kicked in and nagged me to write The Sieve of Eratosthenes benchmark. It took me half an hour to grasp the principles, the program stretched to all of 10 "lines", looked like a colourful Lego picture, and required a leisurely 40 seconds to find the first 10000 primes. I rapidly knocked out some other old chestnuts like Fibonacci and Factorial to convince myself Scratch could do maths, then had the brainwave of reviving an abandoned Ruby project I called Critters, an animated ecosystem in which various bacteria-like species swim around eating each other, recognising their preferred prey by colour. I'd scrapped my Ruby version when the graphics became too tedious, but Scratch got an impressive version working inside an evening, thanks to predefined blocks that detect whether one sprite or colour is touching another, and a built-in sprite editor to draw the critters. That done, my other old favourite - an animated real-time bank queue simulation - submitted equally gracefully.
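
For anyone who never met the old Byte benchmark, here's roughly what those ten Scratch "lines" compute, written out in JavaScript:

    // Sieve of Eratosthenes: find the primes below a limit by crossing out
    // the multiples of each prime in turn.
    function sieve(limit) {
      var composite = new Array(limit).fill(false);
      var primes = [];
      for (var n = 2; n < limit; n++) {
        if (!composite[n]) {
          primes.push(n);
          for (var m = n * n; m < limit; m += n) composite[m] = true;
        }
      }
      return primes;
    }

    console.log(sieve(105000).length);   // a shade over 10,000 primes (the 10,000th is 104,729)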

Scratch has several deliberate limitations. It supports lists and parameterised procedures, but neither is a first-class object that you can pass as a parameter, which limits the level of abstraction you can achieve. Everything is tangible, thus overcoming another steep obstacle faced by novices (at the cost of failing to teach them abstraction). The only I/O is export and import of text files into lists (surprisingly usable) and the ability to talk to Arduino and Lego WeDo controller boards. While reading up about Scratch I discovered that a group at Berkeley has created a derivative called Snap! which extends Scratch by introducing first-class lists and local procedure variables. I duly tried it and it works well, but to my own amazement I actually prefer the challenges that Scratch poses to an experienced programmer! In our programming world of 2014, every development tool from C++ through JavaScript to Python employs OOP, and I no longer need to defend the technique, but Scratch looks to me like by far the most fun way to teach it.
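To spell out what "first-class" buys you, here - in Ruby, as a trivial illustrative example - is the move that Scratch won't let you make: handing both a list and a procedure to another procedure as ordinary values.

  doubler = ->(x) { x * 2 }              # a procedure as an ordinary value
  def apply_to_all(fn, list)             # both procedure and list arrive as parameters
    list.map { |x| fn.call(x) }
  end
  p apply_to_all(doubler, [1, 2, 3])     # => [2, 4, 6]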

[Dick Pountain had rather hoped that his second childhood might no longer involve computers - dammit!]

Sunday 11 January 2015

THE NUMBER OF EVERYTHING

Dick Pountain/Idealog 240/09 July 2014 11:45

My interest in computing has always been closely connected to my love of maths. I excelled in maths at school and could have studied it instead of chemistry (where would I be now?) My first experience of computing was in a 1960 school project to build an analog machine that could solve sixth-order differential equations. I used to look for patterns in the distribution of primes rather than collect football cards - you're probably getting the picture. I still occasionally get the urge to mess with maths, as for example when I recently discovered Mathlab's marvellous Graphing Calculator for Android, and I'll sometimes scribble some Ruby code to solve a problem that's popped into my head.

Of course I've been enormously pleased recently to witness the British establishment finally recognising the genius of Alan Turing, after a disgracefully long delay. It was Turing, in his 1936 paper on computable numbers, who more than anyone forged the link between mathematics and computing, though it's for his crucial wartime cryptography that he's remembered by a wider public. While Turing was working on computable numbers at King's College Cambridge, a college friend of his, David Champernowne, another mathematical prodigy, was working on something rather different that's recently come to fascinate me. Champernowne soon quit maths for economics; studied under John Maynard Keynes; helped organise aircraft production during WWII; in 1948 helped Turing write one of the first chess-playing programs; and then wrote the definitive book on income distribution and inequality (which happens to be another interest of mine and is how I found him). But what Champernowne did back in 1933 at college was to build a new number.

That number, called the Champernowne Constant, has some pretty remarkable properties, which I'll try to explain here fairly gently. The number is very easy to construct: you could write a few million decimal places of it this weekend if you're at a loose end. In base 10 it's just a zero and a decimal point, followed by the decimal representations of each successive integer concatenated, hence:

0.12345678910111213141516171819202122232425262728293031....

It's an irrational real number whose representation goes on for ever, and it's also transcendental (like pi), which means it's not the root of any polynomial equation with integer coefficients. What most interested Champernowne is that it's "normal", which means that each digit 0-9 crops up in it equally often, as does each pair, triple and so on of digits when compared against other strings of the same length. That ensures that any number you can think of, of whatever length, will appear somewhere in its expansion (an infinite number of times, actually). It's the number of everything, and it turns out to be far smaller (if somewhat longer) than Douglas Adams' famous 42.

Your phone number and bankcard PIN, and mine, are in there somewhere, so it's sort of like the NSA's database in that respect. Fortunately though, unlike the NSA's, they're very, very hard to locate. The Unicode-encoded text of every book, play and poem ever written, in every language (plus an infinite number of versions with an infinite number of spelling mistakes) is in there somewhere too, as are the MPEG4 encodings of every film and TV programme ever made (don't bother looking). The names and addresses of everyone on earth, again in Unicode, are in there, along with those same names with the wrong addresses. Perhaps most disturbingly of all, every possible truncated approximation to Champernowne's constant itself should be in there, an infinite number of times, though I'll confess I haven't checked.

Aficionados of Latin-American fiction will immediately see that Champernowne's constant is the numeric equivalent of Jorge Luis Borges' famous short story "The Library of Babel", in which an infinite number of librarians traipse up and down an infinite spiral staircase connecting shelves of random texts, searching for a single sentence that makes sense. However Champernowne's is a rather more humane construct, since not only does it consume far less energy and shoe-leather, but it also avoids the frequent suicides - by leaping down the stairwell - that Borges imagined.

A quite different legend concerns an Indian temple at Kashi Vishwanath, where Brahmin priests were supposed to continually swap 64 golden disks of graded sizes between three pillars (following the rules of that puzzle better known to computer scientists as the "Tower of Hanoi"). When they complete the last move of this puzzle, it's said the world will end. It can be shown that for priests of average agility this will take around 585 billion years (the puzzle needs 2^64 - 1 moves, and at one move per second that works out at roughly 585 billion years), but we could remove even that small risk by persuading them to substitute instead a short Ruby program that builds Champernowne's constant (we'll need the BigDecimal module!) to be left running on a succession of PCs. Then we could be absolutely certain that while nothing gets missed out, the end will never arrive...
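In fact, if we're content to stream the digits as text rather than hold them as one enormous number (which is where BigDecimal would come in), the priests' replacement job amounts to no more than this throwaway sketch:

  print "0."
  n = 1
  loop do
    print n          # append the decimal digits of the next integer
    n += 1
  end
  # 0.123456789101112131415... and so on, world without end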

I, ROBOT?

Dick Pountain/Idealog 239/06 June 2014 09:56

Like many males of my generation I grew up fairly well-disposed toward the robot. Robby the Robot, filmstar, was all the rage when I was 11, and Asimov's Laws of Robotics engaged my attention as a teenaged sci-fi reader. By the time I became involved in publishing underground comics in the early 1970s the cuteness was wearing off robots, but even so the threat was moderated by humour. The late Vaughn Bodé - nowadays beloved by all the world's graffiti artists - drew a strip called "Junkwaffel" that depicted a world cleansed of humans but gripped in permanent war between foul-mouthed, wise-cracking robot soldiers. In some ways these were the (far rougher) prototypes of R2-D2 and C-3PO.

Out in the real world robots started to appear on factory production lines, but they were doing those horrible jobs that humans shouldn't do, like spraying cellulose paint, and humans were still being employed to do the other stuff. When I got involved in computer programming myself I was drawn toward robotics thanks to an interest in Forth, a language originally invented to control observatory telescopes and ideally suited to robot programming. The problems of robots back then were all about *training* them to perform desired motions (as opposed to spelling those motions out in X, Y, Z coordinates) and building in enough intelligence to give them more and more autonomy. I still vividly remember my delight when a roboticist friend at Bristol Uni showed me robot ducklings they'd built that followed each other just like the real thing, using vision alone.

Given this background, it comes rather hard to have to change my disposition toward the robot, but events in today's world are conspiring to force me to do just that. While reading a recent issue of New Scientist (26 April 2014), I was struck by two wholly unrelated articles that provide a powerful incentive for such a change of attitude. The first of these involved the Russian Strategic Missile Force, which has for the first time deliberately violated Asimov's First Law by building a fully-autonomous lethal robot that requires no permission from a human to kill.

The robot in question is a bit aesthetically disappointing in that it's not even vaguely humanoid-looking: it looks like, indeed *is*, a small armoured car on caterpillar tracks that wields a 12.7mm heavy machine gun under radar, camera and laser control. It's being deployed to guard missile sites, and will open fire if it sees someone it doesn't like the look of. I do hope it isn't using a Windows 8 app for a brain. Whatever your views on the morality of the US drone fleet, it's important to realise that this is something quite different. Drones are remotely controlled by humans, and can only fire their weapons on command from a human, who must make all the necessary tactical and moral decisions. The Russian robot employs an algorithm to make those decisions. Imagine being held-up at gunpoint by Siri and you'll get the difference.

However it was the other article that profoundly upset my digestive system, an interview with Andrew McAfee, research scientist at MIT's Center for Digital Business. Asked by interviewer Niall Firth "Are robots really taking our jobs?", McAfee replied with three possible scenarios: first, that robots will take jobs in the short term, but a new equilibrium will be reached as it was after the first Industrial Revolution; second, that they'll replace more and more professions and massive retraining will be essential to keep up; third, the sci-fi-horror scenario where robots can perform almost all jobs and "you just don't need a lot of labour". He thinks we'll see scenario three in his lifetime (which I hope and trust will be longer than mine).

It was when he was then asked about any possible upside that my mind boggled and my gorge rose: the "bounty" he saw arising was a greater variety of stuff of higher quality at lower prices, and most importantly "you don't need money to buy access to Instagram, Facebook or Wikipedia". That's just as well really, since no-one except the 0.1% who own the robots will have any money. On that far-off day I foresee, when a guillotine (of 3D-printed stainless steel) has been erected outside Camden Town tube station, McAfee may still be remembered as a 21st-century Marie Antoinette for that line.

The bottom line is that robots are still really those engaging toys-for-boys that I fell for back in the 1950s, but economics and politics require the presence of grown-ups. Regrettably the supply of grown-ups has been dwindling alarmingly since John Maynard Keynes saved us from such imbecilities the last time around. If you're going to make stuff, you have to pay people enough to buy that stuff, simples.



THE JOY OF CODING?

Dick Pountain/Idealog 238/08 May 2014 19:30

I've admitted many times in this column that I actually enjoy programming, and mostly do it for fun. In fact I far prefer programming to playing games. Given my other interests, people are often surprised that I don't enjoy chess, but the truth is that the sort of problems it creates don't interest me: I simply can't be bothered to hurt my brain thinking seven moves ahead when all that's at stake is beating the other guy. I did enjoy playing with Meccano as a kid, and did make that travelling gantry crane. I can even imagine the sort of satisfaction that might arise from building Chartres Cathedral out of Lego, though having children myself rendered me phobic about the sound of spilling Lego bricks (and the pain of stepping on one in bare feet). But programming is the ultimate construction game, where your opponent is neither person nor computer but the complexity of reality itself.

Object-oriented programming is especially rewarding that way. You can simulate anything you can imagine, describe its properties and its behaviours, then - by typing a single line of code - create a thousand (or a million) copies of it and set them all working. Then call that whole system an object and create a hundred copies of that. It's all stuff you can't do in the heavy, inertial, expensive world of matter: making plastic bits and pieces by 3D printing may be practical, even useful, but it lacks this Creator of Worlds buzz.
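In Ruby, say, that whole Creator-of-Worlds conceit boils down to a few lines (the names here are toys of my own devising, purely to illustrate the point):

  class Critter
    def initialize(id)
      @id = id
    end

    def tick
      # each critter gets on with its own behaviour here
    end
  end

  class World
    def initialize(population)
      @critters = Array.new(population) { |i| Critter.new(i) }   # a thousand copies, one line
    end

    def tick
      @critters.each(&:tick)
    end
  end

  worlds = Array.new(100) { World.new(1_000) }   # then a hundred copies of the whole system
  worlds.each(&:tick)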

Since I'm so besotted by programming as recreation, I must surely be very excited by our government's "Year of Code" initiative, which aims to teach all our children how to write programs - or to do "coding", as the current irritating locution would have it? Actually, no I'm not. I'm perfectly well aware that my taste for programming as recreation is pretty unusual, very far from universal, perhaps even eccentric, a bit like BASE jumping or worm farming. The idea that every child in the country is going to develop such a taste is ludicrous, and that rules out coding for pleasure as a rationale. It will most likely prove as unpleasant as maths to a lot of kids, and put them off for life.

But what about "coding" as a job skill, as vital life equipment for gaining employment in our new digital era? Well there's certainly a need for a lot of programmers, and the job does pay well above average. However you can say much the same about plumbers, electricians and motor mechanics, and no-one is suggesting that all children should be taught those skills. The aim is to train school teachers to teach coding, but it makes no more sense for every child to learn programming than it does for every child to wire up a ring-main or install a cistern. Someone who decides to pursue programming as a profession needs solid tuition in maths and perhaps physics, plus the most basic principles of programming like iteration and conditionality, which ought to be part of the maths curriculum anyway. Actual programming in real languages is for tertiary education, not for the age of five as the Year of Code proposes.

The whole affair reeks of the kind of gimmicky policy a bunch of arts and humanities graduates, clueless about technology, might think up after getting an iPad for Christmas and being bowled over by the wonderful new world of digital communications. Their kids probably already know more about "coding" than they do, via self-tuition. However there are those who detect a more sinister odour in the air. For example Andrew Orlowski, curmudgeon-in-chief at The Register, has pointed out a network of training companies and consultants who stand to make big bucks out of the Year of Code, in much the same way firms did during the Y2K panic: they include venture capital company Index Ventures, which has Year of Code's chairman Rohan Silva as its "Entrepreneur in Residence", and US training company Codecademy. Organisations that are already reaching children who are actually interested in programming, like the Raspberry Pi Foundation, appear to be sidelined and cold-shouldered by the hype merchants: the Foundation's development director Clive Beale has claimed that "The word 'coding' has been hijacked and abused by politicians and media who don't understand stuff".

Personally I'd love to believe all children could be taught to gain as much pleasure from programming as I do, but it's unlikely. Like singing, dancing, drawing or playing football, some can do it and like it, others can't and won't, and the notion that everyone has to be able to do it for the sake of the national economy has unpleasantly Maoist undertones, with backyard code foundries instead of steelworks.

 

SOCIAL UNEASE

Dick Pountain/Idealog 350/07 Sep 2023 10:58

Ten years ago this column might have listed a handful of online apps that assist my everyday...