Wednesday, 8 April 2015

TELE-ABSENCE

Dick Pountain/Idealog 244/06 November 2014 10:04

Hello. My name is Dick Pountain and I'm a Flickrholic. Instead of interacting normally with other human beings, I spend too many hours slumped at the computer, Photoshopping photographs I took earlier to make them look like mad paintings (www.flickr.com/photos/dick_pountain). Then I fritter away my remaining time probing the complete works of Bill Frisell, Brandt Brauer Frick and Bartok on that notorious online service Spotify (a villainous outfit which steals food from the mouths of Taylor Swift and Chris Martin). For a while I was a Multiple-Service Abuser, enslaved also to the hideous Facebook, but that addiction cured itself once the user experience deteriorated to a point where it turned into Aversion Therapy. Yes folks, it's official: the internet is bad for all of us. In South Korea you can get sent on a cure. Here the Guardian runs stories every other day about how it's driving all our young folk into mental illness: cyber-bullying, trolling, sexting, and ultra-hard-core violent porn. We all live in terror of having our identities stolen, our bank accounts drained, or our local sewage works switched into reverse gear by shadowy global hacker gangs.

I'm going to exit heavy-sarcasm mode now, because though all these threats do get magnified grotesquely by our circulation-mad media, there's more than a pinch of truth to them, and mocking does little to help. An anti-digital backlash is stirring from many different directions. In a recent interview Christopher Nolan - director of sci-fi blockbuster Interstellar - expressed his growing dissatisfaction with digital video. Obsessive about picture quality, he feels he can't guarantee it with digital output (the exact opposite of orthodox opinion): “This is why I prefer film to digital [...] It’s a physical object that you create, that you agree upon. The print that I have approved when I take it from here to New York and I put it on a different projector in New York, if it looks too blue, I know the projector has a problem with its mirror or its bulb or whatever. Those kind of controls aren’t really possible in the digital realm.” Or consider Elon Musk, almost a God among technophiles, who's recently taken to warning about the danger that AI might spawn unstoppable destructive forces (he compared it to "summoning the demon"), and this from a man who invests in AI.

What all these problems have in common is that they occur at the borderline between physical reality and its digital representation. My Flickr addiction is pretty harmless because it's just pictures (pace Chris Nolan's critique), while Musk's fears become real when AI systems act upon the real world, say by guiding a drone or a driverless car, or controlling some vast industrial plant. And the problem has two complementary aspects. Firstly, people continue to confuse the properties of digital representations with the things they depict. I can repaint my neighbour's Volkswagen in 5 minutes in Photoshop, but on his real car it would take several hours, a lot of mess, and he'd thump me for doing it without permission. Secondly, too much absorption in digital representations steals people's attention away from the real world. As I wander around Camden Town nowadays I'm struck by the universal body-language of the lowered head peering into a smartphone - while walking, while sitting, while eating, even while talking to someone else.

If you want a name for this problem then "tele-absence" (the downside of telepresence) might do, and it's problematic because evolution, both physical and cultural, has equipped us to depend on the physical presence of other people in order to behave morally. The controller of a remote drone strike sleeps sounder at night than he would if he'd killed those same people face-to-face with an M4 carbine; the internet troll who threatens a celebrity with rape and murder wouldn't say it to her face. And "face" is the operative word here, as the Chinese have understood for several thousand years (and Mark Zuckerberg rediscovered more recently).

Maintaining "face" is crucial to our sense of self, and "loss of face" something we make great efforts to avoid. But we can't make face entirely by ourselves: it's largely bestowed onto us by other people, according to the conscious and unconscious rules of our particular society. Tele-absence robs us of most of the cues that great theorist of social interaction, Erving Goffman, listed as "a multitude of words, gestures, acts and minor events". We've barely begun to understand the ways that's changing our behaviour, which is why criminalising trolling or abolishing online anonymity are unlikely to succeed. Safety lies in hanging out online with people of similar interests (Flickr for me), but at the cost of reinforcing an already scary tendency toward social fragmentation.

[Dick Pountain sometimes wishes his shaving mirror supported Photoshop's Pinch filter]

YOUR INPUT IS ALWAYS WELCOME

Dick Pountain/Idealog 243/10 October 2014 15:42

Last month I wrote here about my recent infatuation with voice input in Google Keep, and this month Jon Honeyball's column discovers a web source for vintage and superior keyboards. For consumers of media content, output may be the more interesting topic (my display is higher-res than yours, my sound is higher-fi than yours), but we at the coalface who have to produce content have a far deeper interest in input methods.

It was ever thus. During my first flirtations with the underground press in the 1970s I used to write my copy longhand with a Bic ballpoint pen and hand it straight to Caroline, our stoical typesetter. Upon elevation (?) to the IT biz on PCW I was firmly told by our late, lamented chief Felix Dennis that he wasn't having any editors who wrote longhand, and so he'd signed me up for a Sight & Sound course. That was perhaps the most surreal week of my life, huddled in a darkened room at a manual Imperial typewriter with blanked-out yellow keys (pressing Shift was like lifting a house-brick with your little finger), touch-typing endless streams of nonsense words. I emerged capable of 35 words per minute, then graduated immediately to a CP/M computer running WordStar and thus bypassed the typewriter era altogether.

In those days computer keyboards were modelled on mainframe terminals, with deep, shiny plastic keys with inlaid characters on their caps, satisfying travel, resistance and click. They had few special keys besides Esc (which CP/M didn't recognise anyway). After that keyboards slithered down two separate hills: in 1982 Clive Sinclair launched the Spectrum with its ghastly squashy keys, probably made by Wrigleys, which became the archetype for all cheap keyboards to the present day; then in 1983 IBM launched the PC XT, whose keys and layout largely persist on Windows computers today - Ctrl, Alt, function and arrow keys and the rest. Jon remembers the IBM AT keyboard fondly as something of a cast-iron bruiser, but I had one that made it look quite flimsy, the Keytronic 5151 (http://blog.modernmechanix.com/your-system-deserves-the-best/). This brute, the size of an ironing board and the weight of a small anvil, corrected certain dubious choices IBM had made by providing full-width shift keys, separate numeric and cursor keypads, and function keys along the top where they belong. I loved it, typed several books on it, and kept it until the PS/2 protocol made it redundant in the early 1990s.

It was around then that I suffered my one and only bout of RSI, brought on largely by the newfangled mouse in Windows 3. I fixed it using a properly adjustable typist's chair, wrist rests, and a remarkable German keyboard I found while covering the 1993 CeBIT show, Marquardt's Mini-Ergo (see http://deskthority.net/wiki/Marquardt_Mini-Ergo). It was the first commercial split-keypad design, with twin spacebars and a curious lozenge shape reminiscent of a stealth bomber or a stingray. Marvellous to type on, and I carried on using it until I gave up desktop PCs and bought my first ThinkPad (a definite step backwards input-wise). Since then it's been all downhill, at increasing speed. My various successive laptops have had shallower and shallower chiclet-style keys (for added slimness), with less and less feel and travel. My latest Lenovo Yoga compounds the offence by making the function keys require a Fn shift. And on every laptop I've had since that first ThinkPad, the key labels for Right Arrow and A, being merely painted on, have quickly worn off.

What to do? On-screen tablet keyboards, however large they may become, have little appeal, even though I've gotten pretty quick nowadays at Google's gesture/swipe typing. And I most definitely *won't* be going back to writing in longhand. I may have been one of the earliest Palm Pilot adopters, and I may indeed run Graffiti Pro on both my Android phone and tablet, but writing with a finger is tiring and those pens with squashy sponge tips are pretty horrible. But another, possibly eccentric, solution just occurred to me. It was while ambling through the seething online casbah that is Amazon's Cabling and Adapters section that I discovered, for £1.99, an AT-to-PS2 adapter, followed by a small black box that's a PS2-to-Bluetooth converter (for another £19). It struck me that these two gizmos put together should enable me to use either my Keytronic or Marquardt keyboards with all my current devices: phone, tablet and Yoga PC. How amusing it would look to deploy the Yoga in its "tent" configuration as a monitor. Best of all, this arrangement might provide me with plenty to do on cold, dark winter evenings, trying to get a bloody £ sign in place of the #, just like the good old days...






Tuesday, 17 February 2015

NOTEWORTHY

Dick Pountain/Idealog 242/14 September 2014 13:35

One of the inescapable facts of life is that memory worsens as you get older. Forty years ago I could wake in the night with an idea and still remember it next morning. Thirty years ago I put a notepad and pencil by my bedside to record ideas. Twenty years ago I started trying to use mobile computers to take notes. I last wrote here exactly two years ago about how tablet computing was helping my perpetual quest (I confessed that text files stored on Dropbox worked better for me than Evernote and its many rivals). So why revisit the topic just now? Well, because I just dictated this paragraph into my Nexus 7 using Google Keep and that still feels like fun.

I've played with dictation software for years, right from the earliest days of Dragon Dictate, but always found it more trouble than it was worth and practically unusable. So why is Google Keep any different? Mainly because I was already using it as my principal note-taker, as it syncs between my Nexus and my Yoga laptop with great success. I actually prefer Keep's simplistic "pinboard" visual metaphor to complex products like OneNote and Evernote that offer hierarchical folder structures, and its crude colour-coding scheme is remarkably useful. So when Google announced one day that I could now talk into Keep, I tried it and it just worked, transcribing my mumblings with remarkable accuracy and speed. Voice only works on Android, not on Windows, and it doesn't support any fancy editing commands (but who needs them for note-taking?). Does that mean my 30-year quest is over and I've settled on one product? Er, actually no - I now have *three* rival note storage systems working at the same time, can't make a final choice between them, and find myself struggling to remember whether I saved that tamale recipe into Keep, OneNote or Pocket. Doh...

The thing is, there are notes and notes. When I get an idea for, say, a future Idealog column, that's only a few dozen words that fit neatly onto a Google Keep "card". I colour these green to spot them more easily, though like everything Google, Keep is search-based so just typing "Ide" brings up all Idealog notes. On the other hand long chunks, or whole articles, clipped from magazines and newspapers stretch Keep's card metaphor beyond its usefulness (and its integration of pictures is rudimentary). For such items OneNote's hierarchical navigation becomes useful to separate different categories of content. Then there are whole web pages I want to save for instant reference, like recipes, maps or confirmation pages from travel sites. In theory I *could* send these straight to OneNote, or even paste them into Keep, but Pocket is way more convenient than either and works equally well from inside Chrome on Nexus, Yoga and phone (Chrome's Send To OneNote app doesn't work properly on my Yoga).

The fundamental problem is perhaps insoluble. Capturing fleeting ideas requires the fastest possible access: no more than one click is tolerable. But to find that idea again later you need structure, and structure means navigation, and navigation means clicks... My current combination is far, far better than any I've tried before - popping up a new Keep note or saving a Pocket page at a click is pretty good - but once I accumulate sufficient data the question of where I stored a particular item *will* become critical. This wouldn't be such a big deal if either Android or Windows 8 searches could see inside these applications, but they can't. Neither tablet search nor the otherwise impressive Win8 search will find stuff inside either Keep or OneNote, which isn't that surprising given that both apps store their data in the cloud rather than locally, and in different clouds owned by different firms who hate each other's guts.

On top of that there's the problem of different abilities within the same app on the different platforms. I've said Keep voice only works on Android, not on Windows (voice input also works in Google Search on Android, and tries to on Yoga but says it can't get permission to use the mike). OneNote on Android can record sound files but can't transcribe them, and though it syncs them with its Windows 8 app, the latter can't record and plays files clunkily in Windows Media Player. In short, it's a mess. Perhaps the paid-for, full version of OneNote is slicker, though I'm not greatly tempted to find out. Perhaps Google will soon enhance the abilities of Keep *without* rendering it slow and bloated. Perhaps there is a Big Rock Candy Mountain and somewhere among its lemonade fountains there's a software start-up that gets it about note taking...

OOP TO SCRATCH

Dick Pountain/Idealog 241/05 August 2014 10:34

So, 20 years of this column now, and so far I haven't run out of ideas. Of course one good way to achieve that is to keep banging on about the *same* ideas, and I plead guilty to that with a broad grin. My very first Idealog column in 1994 was entitled "OOPS Upside Your Head" (a prehistoric disco reference that will be lost on the youth of today), and it expressed my faith in the long-term potential of object-oriented programming, along with my disgust at the misuse of the term by marketeers and the maladroit/cynical implementations by leading vendors like Microsoft. This 241st column will be about object-oriented programming too, largely because I recently encountered a curious little OOP product that has renewed that faith.

The product is called Scratch and is intended to teach programming to very young children. At first sight it looks like Lego, thus neatly linking the topics of two of my other recent columns. You build programs by dragging different coloured "blocks" into a window, where they will only fit together in certain ways decided by their shapes, thus removing at a stroke the possibility of making syntax errors (probably the biggest source of frustration for beginners). Some of these blocks are control structures, some are data objects and some are actions, and what powerful actions they are: a complete multimedia toolkit that lets you create animations with sound (warning: requires Flash) remarkably simply.

You'll not be too surprised to learn that Scratch was developed at MIT's Media Lab - started in 2003 by a team under Mitchel Resnick - and was originally implemented in a modern dialect of Smalltalk. The latest version, Scratch 2, is written in Adobe's ActionScript. It's free at scratch.mit.edu and you use it online via your browser (though you can store the source for your projects locally too). Scratch is most certainly not suitable for professional development, as it can only handle small, visually-oriented projects, but what grabbed me forcefully are the principles it demonstrates.

Scratch is based around a little-used style of object-orientation that employs cloning instead of class-based inheritance. Everything you build is an object called a "sprite" which contains all its own code. Every sprite is executable (it doesn't have to be visual) and one of the things it can do is spawn clones of itself at run-time. Clones behave like their parent, but have their own identity and all run concurrently: they're automatically destroyed and garbage-collected once execution ends. Scratch is also event-driven and supports events like keypresses, mouse clicks, sound-level thresholds, and message-passing between different sprites and their clones.
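
For anyone curious what that cloning style looks like outside Scratch's coloured blocks, here's a minimal sketch in Ruby (my own illustration, nothing to do with how Scratch is actually implemented): a prototype object carries its own state and behaviour, and run-time clones inherit both while keeping their own identity.

    # A prototype-style "sprite": one object holding its own data and code,
    # cloned at run-time rather than instantiated from a class.
    sprite = Object.new
    sprite.instance_variable_set(:@x, 0)

    def sprite.move(dx)   # behaviour defined on the object itself
      @x += dx
    end

    def sprite.x
      @x
    end

    # clone copies both the instance variables and the singleton methods,
    # so each clone behaves like its parent but moves independently.
    clones = Array.new(3) { sprite.clone }
    clones.each_with_index { |c, i| c.move(i * 10) }
    p clones.map(&:x)   # => [0, 10, 20]
    p sprite.x          # => 0, the parent is untouched

Scratch adds the concurrency for free - every sprite and clone runs its own scripts at once - which is the one part a toy listing like this can't show.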

My first impression of Scratch was so Toy Town that I almost walked away, but then the old Byte training kicked in and nagged me to write the Sieve of Eratosthenes benchmark. It took me half an hour to grasp the principles, the program stretched to all of 10 "lines", looked like a colourful Lego picture, and required a leisurely 40 seconds to find the first 10,000 primes. I rapidly knocked out some other old chestnuts like Fibonacci and Factorial to convince myself Scratch could do maths, then had the brainwave of reviving an abandoned Ruby project I called Critters, an animated ecosystem in which various bacteria-like species swim around eating each other, recognising their preferred prey by colour. I'd scrapped my Ruby version when the graphics became too tedious, but Scratch got an impressive version working inside an evening, thanks to predefined blocks that detect whether one sprite or colour is touching another, and a built-in sprite editor to draw the critters. That done, my other old favourite - an animated real-time bank queue simulation - submitted equally gracefully.
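
For comparison, this is roughly what that benchmark looks like in Ruby rather than Scratch blocks - the standard boolean-array sieve, not a transcription of my ten-block program - and it finds the 10,000th prime in a fraction of a second, which puts Scratch's leisurely 40 seconds in perspective.

    # Sieve of Eratosthenes: mark multiples as composite, collect what's left.
    def sieve(limit)
      composite = Array.new(limit + 1, false)
      primes = []
      (2..limit).each do |n|
        next if composite[n]
        primes << n
        (n * n).step(limit, n) { |m| composite[m] = true }
      end
      primes
    end

    # Keep doubling the limit until 10,000 primes have turned up.
    limit = 100_000
    limit *= 2 until (primes = sieve(limit)).size >= 10_000
    puts primes[9_999]   # the 10,000th prime: 104,729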

Scratch has several, deliberate, limitations. It supports lists and parameterised procedures, but neither is a first-class object that you can pass as a parameter, which limits the level of abstraction you can achieve. Everything is tangible, thus overcoming another steep obstacle faced by novices (at the cost of failing to teach them abstraction). The only I/O is export and import of text files into lists (surprisingly usable) and the ability to talk to Arduino and Lego WeDo controller boards. While reading up about Scratch I discovered that a group at Berkeley has created a derivative called Snap! which extends Scratch by introducing first-class lists and local procedure variables. I duly tried it and it works well, but to my own amazement I actually prefer the challenges that Scratch poses to an experienced programmer! In our programming world of 2014, every development tool from C++ through JavaScript to Python employs OOP, and I no longer need to defend the technique, but Scratch looks to me like by far the most fun way to teach it.
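
To make the first-class point concrete, here's the sort of thing Scratch's custom blocks can't express but Snap! can - shown in Ruby purely for illustration - namely a procedure treated as an ordinary value and handed to another procedure as a parameter.

    # Procedures stored in variables and passed around like any other value.
    twice = ->(f, x) { f.call(f.call(x)) }   # takes another procedure as a parameter
    add3  = ->(n) { n + 3 }

    p twice.call(add3, 10)     # => 16
    p [1, 2, 3].map(&add3)     # => [4, 5, 6]

Once procedures and lists are values like any other, you can build map, filter and friends for yourself, which is exactly the extra rung of abstraction Snap! adds.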

[Dick Pountain had rather hoped that his second childhood might no longer involve computers - dammit!]

Sunday, 11 January 2015

THE NUMBER OF EVERYTHING

Dick Pountain/Idealog 240/09 July 2014 11:45

My interest in computing has always been closely connected to my love of maths. I excelled in maths at school and could have studied it instead of chemistry (where would I be now?) My first experience of computing was in a 1960 school project to build an analog machine that could solve sixth-order differential equations. I used to look for patterns in the distribution of primes rather than collect football cards - you're probably getting the picture. I still occasionally get the urge to mess with maths, as for example when I recently discovered Mathlab's marvellous Graphing Calculator for Android, and I'll sometimes scribble some Ruby code to solve a problem that's popped into my head.

Of course I've been enormously pleased recently to witness the British establishment finally recognising the genius of Alan Turing, after a disgracefully long delay. It was Turing, in his 1936 paper on computable numbers, who more than anyone forged the link between mathematics and computing, though it's for his crucial wartime cryptography that he's remembered by a wider public. While Turing was working on computable numbers at King's College, Cambridge, a college friend of his, David Champernowne, another mathematical prodigy, was working on something rather different that's recently come to fascinate me. Champernowne soon quit maths for economics; studied under John Maynard Keynes; helped organise aircraft production during WWII; in 1948 helped Turing write one of the first chess-playing programs; and then wrote the definitive book on income distribution and inequality (which happens to be another interest of mine, and is how I found him). But what Champernowne did back in 1933 at college was to build a new number.

That number, called the Champernowne Constant, has some pretty remarkable properties, which I'll try to explain here fairly gently. The number is very easy to construct: you could write a few million decimal places of it this weekend if you're at a loose end. In base 10 it's just a zero and a decimal point, followed by the decimal representations of the successive integers concatenated, hence:

0.12345678910111213141516171819202122232425262728293031....

It's an irrational real number whose decimal representation goes on for ever, and it's also transcendental (like pi), which means it's not the root of any polynomial equation with integer coefficients. What most interested Champernowne is that it's "normal", which means that each digit 0-9, and each pair, triple and so on of digits, appears in it equally often in the long run. That ensures that any number you can think of, of whatever length, will appear somewhere in its expansion (an infinite number of times, actually). It's the number of everything, and it turns out to be far smaller (if somewhat longer) than Douglas Adams' famous 42.
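
If you fancy seeing this for yourself, a few lines of Ruby (my own throwaway sketch, and "8086" is just an example search string) will build as many digits as your patience allows and go hunting in them:

    # Concatenate the decimal representations of 1, 2, 3, ... after "0."
    digits = (1..1_000_000).map(&:to_s).join
    champernowne = "0." + digits

    puts champernowne[0, 33]   # => 0.1234567891011121314151617181920
    # Normality in action: any short digit string turns up sooner or later.
    puts digits.index("8086")  # position of its first occurrence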

Your phone number and bankcard PIN, and mine, are in there somewhere, so it's sort of like the NSA's database in that respect. Fortunately though, unlike the NSA, they're very, very hard to locate. The Unicode-encoded text of every book, play and poem ever written, in every language (plus an infinite number of versions with an infinite number of spelling mistakes) is in there somewhere too, as are the MPEG4 encodings of every film and TV programme ever made (don't bother looking). The names and addresses of everyone on earth, again in Unicode, are in there, along with those same names with the wrong addresses. Perhaps most disturbingly of all, every possible truncated approximation to Champernowne's constant itself should be in there, an infinite number of times, though I'll confess I haven't checked.

Aficionados of Latin-American fiction will immediately see that Champernowne's constant is the numeric equivalent of Jorge Luis Borges' famous short story "The Library of Babel", in which an infinite number of librarians traipse up and down an infinite spiral staircase connecting shelves of random texts, searching for a single sentence that makes sense. However, Champernowne's is a rather more humane construct, since not only does it consume far less energy and shoe-leather, but it also avoids the frequent suicides - by leaping down the stairwell - that Borges imagined.

A quite different legend concerns an Indian temple at Kashi Vishwanath, where Brahmin priests were supposed to continually swap 64 golden disks of graded sizes between three pillars (following the rules of that puzzle better known to computer scientists as the "Tower of Hanoi"). When they complete the last move of this puzzle, it's said, the world will end. It can be shown that for priests of average agility this will take around 585 billion years, but we could remove even that small risk by persuading them to substitute instead a short Ruby program that builds Champernowne's constant (we'll need the BigDecimal module!), to be left running on a succession of PCs. Then we could be absolutely certain that while nothing gets missed out, the end will never arrive...
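
For the sceptical, the arithmetic behind those 585 billion years - and the never-ending substitute program - fits in a handful of lines of Ruby. I've assumed one disk move per second, which is my assumption rather than temple doctrine, and stood a plain string in for the BigDecimal version:

    # The priests' schedule: a 64-disk Tower of Hanoi needs 2**64 - 1 moves.
    moves            = 2**64 - 1
    seconds_per_year = 365.25 * 24 * 60 * 60
    puts moves / seconds_per_year   # ~5.8e11, i.e. roughly 585 billion years

    # The proposed substitute: an endless Champernowne builder.
    digits = "0."
    1.step { |n| digits << n.to_s } # runs for ever (or until memory gives out), world saved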

I, ROBOT?

Dick Pountain/ Idealog 239/ 06 June 2014 09:56

Like many males of my generation I grew up fairly well-disposed toward the robot. Robby the Robot, filmstar, was all the rage when I was 11, and Asimov's Laws of Robotics engaged my attention as a teenaged sci-fi reader. By the time I became involved in publishing underground comics in the early 1970s the cuteness was wearing off robots, but even so the threat was moderated by humour. The late Vaughn Bodé - nowadays beloved by all the world's graffiti artists - drew a strip called "Junkwaffel" that depicted a world cleansed of humans but gripped in permanent war between foul-mouthed, wise-cracking robot soldiers. In some ways these were the (far rougher) prototypes of R2-D2 and C-3PO.

Out in the real world robots started to appear on factory production lines, but they were doing those horrible jobs that humans shouldn't do, like spraying cellulose paint, and humans were still being employed to do the other stuff. When I got involved in computer programming myself I was drawn toward robotics thanks to an interest in Forth, a language originally invented to control observatory telescopes and ideally suited to robot programming. The problems of robots back then were all about *training* them to perform desired motions (as opposed to spelling them out in X, Y, Z coordinates) and building in enough intelligence to give them more and more autonomy. I still vividly remember my delight when a roboticist friend at Bristol Uni showed me robot ducklings they'd built that followed each other just like the real thing, using vision alone.

Given this background, it comes rather hard to have to change my disposition toward the robot, but events in today's world are conspiring to force me to do just that. While reading a recent issue of New Scientist (26 April 2014), I was struck by two wholly unrelated articles that provide a powerful incentive for such a change of attitude. The first of these involved the Russian Strategic Missile Force, which has for the first time deliberately violated Asimov's First Law by building a fully-autonomous lethal robot that requires no permission from a human to kill.

The robot in question is a bit aesthetically disappointing in that it's not even vaguely humanoid-looking: it looks like, indeed *is*, a small armoured car on caterpillar tracks that wields a 12.7mm heavy machine gun under radar, camera and laser control. It's being deployed to guard missile sites, and will open fire if it sees someone it doesn't like the look of. I do hope it isn't using a Windows 8 app for a brain. Whatever your views on the morality of the US drone fleet, it's important to realise that this is something quite different. Drones are remotely controlled by humans, and can only fire their weapons on command from a human, who must make all the necessary tactical and moral decisions. The Russian robot employs an algorithm to make those decisions. Imagine being held up at gunpoint by Siri and you'll get the difference.

However it was the other article that profoundly upset my digestive system, an interview with Andrew McAfee, research scientist at MIT's Center for Digital Business. Asked by interviewer Niall Firth "Are robots really taking our jobs?", McAfee replied with three possible scenarios: first, that robots will in the short term, but a new equilibrium will be reached as it was after the first Industrial Revolution; second, they'll replace more and more professions and massive retraining will be essential to keep up; third, the sci-fi-horror scenario where robots can perform almost all jobs and "you just don't need a lot of labour". He thinks we'll see scenario three in his lifetime (which I hope and trust will be longer than mine).

It was when he was then asked about any possible upside that my mind boggled and my gorge rose: the "bounty" he saw arising was a greater variety of stuff of higher quality at lower prices, and most importantly "you don't need money to buy access to Instagram, Facebook or Wikipedia". That's just as well really, since no-one except the 0.1% who own the robots will have any money. On that far-off day I foresee, when a guillotine (of 3D-printed stainless steel) has been erected outside Camden Town tube station, McAfee may still be remembered as a 21st-century Marie Antoinette for that line.

The bottom line is that robots are still really those engaging toys-for-boys that I fell for back in the 1950s, but economics and politics require the presence of grown-ups. Regrettably the supply of grown-ups has been dwindling alarmingly since John Maynard Keynes saved us from such imbecilities the last time around. If you're going to make stuff, you have to pay people enough to buy that stuff, simples.



THE JOY OF CODING?

Dick Pountain/ Idealog 238/ 08 May 2014 19:30

I've admitted many times in this column that I actually enjoy programming, and mostly do it for fun. In fact I far prefer programming to playing games. Given my other interests, people are often surprised that I don't enjoy chess, but the truth is that the sort of problems it creates don't interest me: I simply can't be bothered to hurt my brain thinking seven moves ahead when all that's at stake is beating the other guy. I did enjoy playing with Meccano as a kid, and did make that travelling gantry crane. I can even imagine the sort of satisfaction that might arise from building Chartres Cathedral out of Lego, though having children myself rendered me phobic about the sound of spilling Lego bricks (and the pain of stepping on one in bare feet). But programming is the ultimate construction game, where your opponent is neither person nor computer but the complexity of reality itself.

Object-oriented programming is especially rewarding that way. You can simulate anything you can imagine, describe its properties and its behaviours, then - by typing a single line of code - create a thousand (or a million) copies of it and set them all working. Then call that whole system an object and create a hundred copies of that. It's all stuff you can't do in the heavy, inertial, expensive world of matter: making plastic bits and pieces by 3D printing may be practical, even useful, but it lacks this Creator of Worlds buzz.
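
In Ruby, my own recreational weapon of choice, that Creator-of-Worlds buzz really is only a few lines (a deliberately silly sketch, of course, not a serious simulation):

    # Describe a thing once...
    class Ant
      def initialize(id)
        @id = id
      end

      def work
        "ant #{@id} carrying a crumb"
      end
    end

    # ...then conjure a thousand of them with one line and set them working,
    colony = Array.new(1_000) { |i| Ant.new(i) }
    puts colony.sample.work

    # ...then treat the whole colony as a unit and copy *that* a hundred times.
    world = Array.new(100) { Array.new(1_000) { |i| Ant.new(i) } }
    puts world.size * world.first.size   # => 100000 simulated ants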

Since I'm so besotted by programming as recreation, I must surely be very excited by our government's "Year of Code" initiative, which aims to teach all our children how to write programs - or about "coding", as the current irritating locution would have it? Actually, no I'm not. I'm perfectly well aware that my taste for programming as recreation is pretty unusual, very far from universal, perhaps even eccentric, a bit like base jumping or worm farming. The idea that every child in the country is going to develop such a taste is ludicrous, and that rules out coding for pleasure as a rationale. It will most likely prove as unpleasant as maths to a lot of kids, and put them off for life.

But what about "coding" as job skill, as vital life equipment for gaining employment in our new digital era? Well there's certainly a need for a lot of programmers, and the job does pay well above average. However you can say much the same about plumbers, electricians and motor mechanics, and no-one is suggesting that all children should be taught those skills. The aim is to train school teachers to teach coding, but it makes no more sense for every child to learn programming than it does to wire up a ring-main or install a cistern. Someone who decides to pursue programming as a profession needs solid tuition in maths and perhaps physics, plus the most basic principles of programming like iteration and conditionality which ought to be part of the maths curriculum anyway. Actual programming in real languages is for tertiary education, not for the age of five as the Year of Code seeks.    

The whole affair reeks of the kind of gimmicky policy a bunch of arts and humanities graduates, clueless about technology, might think up after getting an iPad for Christmas and being bowled over by the wonderful new world of digital communications. Their kids probably already know more about "coding" than they do, via self-tuition. However there are those who detect a more sinister odour in the air. For example Andrew Orlowski, curmudgeon-in-chief at The Register, has pointed out a network of training companies and consultants who stand to make big bucks out of the Year of Code, in much the same way firms did during the Y2K panic: they include venture capital company Index Ventures, which has Year of Code's chairman Rohan Silva as its "Entrepreneur in Residence", and US training company Codecademy. Organisations that are already reaching children who are actually interested in programming, like the Raspberry Pi Foundation, appear to be sidelined and cold-shouldered by the hype merchants: the Foundation's development director Clive Beale has claimed that "The word 'coding' has been hijacked and abused by politicians and media who don't understand stuff".

Personally I'd love to believe all children could be taught to gain as much pleasure from programming as I do, but it's unlikely. Like singing, dancing, drawing or playing football, some can do it and like it, others can't and won't, and the notion that everyone has to be able to do it for the sake of the national economy has unpleasantly Maoist undertones, with backyard code foundries instead of steelworks.

 
