Thursday 15 November 2012

UNDER THE OLD WHIFFLETREE

Dick Pountain/PC Pro/Idealog 214 11/05/2012

It all started when I was asked to write a preface for a new book on the history of Dennis Publishing, which required reminiscing about our start in the early 1970s. That triggered memories of the way we put magazines together back then: type the copy on an IBM Selectric "golfball" composer, cut it up into strips with scalpels and stick it down on the page with hot wax. The smell of that hot wax and the machine-gun rattle of the IBM came flooding back.

That prompted me to look up IBM Selectric on this newfangled Web thingy, where I soon stumbled across a neat little video clip by engineer Bill Hammack (http://www.up-video.com/v/57042,ibm-selectric-typewriter-.html) which shows how that unforgettable sound arose, but more importantly explains that the IBM golfball mechanism contained a fiendishly cunning example of a mechanical digital-to-analog converter. The problem that needed solving was to rotate an almost spherical print-head around two different axes, to position the correct character over the paper - unlike older typewriters, this print-head moved while the paper stood still (as in all modern computer printers, which it foreshadowed). Rotation control involved adding together two digital "signals", one selecting among four tilt positions and the other among 22 rotational positions around the vertical axis; these originated as depressions of keys on the keyboard and were transmitted via cables like those used to change gears on a bicycle. The mechanism that performed this addition went by the glorious name of a "whiffletree" (or whippletree). Now I was hooked.

Googling for whiffletree produced a total surprise. This mechanism has been known since at least the Middle Ages, perhaps even in the ancient world, as a method for harnessing horses to a plough! It solves the problem of various horses pulling with different strengths, by adding together and averaging their pulls onto the plough. It's a "tree" in exactly the same way a directory tree is: each *pair* of horses is harnessed to a horizontal wooden bar, then all these bars get connected to a larger bar and so on (a big team might require three levels). The pivot links between bars can be put into one of several holes to "program" the whiffletree's addition sum: if the lead horse is pulling twice as hard as the others, put its pivot at the two-thirds mark. Without a diagram it's hard to convey just how damned elegant this mechanism is.
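
In lieu of a diagram, here is a minimal sketch of my own (in Python, with invented names, nothing period-appropriate): a whiffletree modelled as a tree of bars, where each bar passes the sum of the pulls at its two ends through its pivot, and sliding the pivot "programs" the ratio of pulls at which the bar sits in balance - much the same weighted-adding trick the Selectric's linkage exploits in miniature.

    class Horse:
        def __init__(self, pull):
            self.pull = pull                 # pull in arbitrary units

        def total(self):
            return self.pull

    class Bar:
        def __init__(self, left, right, pivot):
            self.left, self.right = left, right
            self.pivot = pivot               # pivot position as a fraction of the bar, from the left end

        def total(self):
            # The plough sees the sum of both pulls, whatever the pivot position.
            return self.left.total() + self.right.total()

        def balanced(self, tol=1e-6):
            # Torque balance about the pivot: left pull x left arm == right pull x right arm.
            return abs(self.left.total() * self.pivot -
                       self.right.total() * (1 - self.pivot)) < tol

    # One ordinary horse and a lead horse pulling twice as hard: the bar sits level
    # only when the pivot is two-thirds of the way along, towards the lead horse.
    bar = Bar(Horse(1.0), Horse(2.0), pivot=2/3)
    print(bar.total(), bar.balanced())       # 3.0 True

    # And the bars nest like directories: a pair of ordinary horses on one bar,
    # that bar then hitched against the lead horse on a larger bar.
    pair = Bar(Horse(1.0), Horse(1.0), pivot=0.5)
    team = Bar(pair, Horse(2.0), pivot=0.5)  # pair pulls 2.0, lead horse pulls 2.0: centre pivot balances
    print(team.total(), team.balanced())     # 4.0 True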

As an aside, at this point I ought to tell you that my first ever brush with computing happened in the sixth-form at school in 1961, as part of a team that built an electronic analog computer from RAF surplus radar components to enter a county prize competition. It could solve sixth-order differential equations in real-time (for instance to emulate the swing of a pendulum that travels partially through oil) and we programmed it by plugging cables into a patch-panel, like an old-fashioned synthesiser or telephone switchboard.

In thrall to the whiffletree, I wondered where else such ingenious devices have been used, and that led me straight to naval gunnery controllers. Throughout WWII and right up into the 1970s, American warships were fitted with electro-mechanical fire control systems that worked on a principle not unlike the IBM golfball. An enemy plane is approaching, your radar/sonar system is telling you from which direction; keep the anti-aircraft gun pointed in such directions that its stream of shells intercepts the moving plane's path. This problem was solved continuously in real-time, by gears, levers, cables and a few valves.

Ever since Alan Turing's seminal 1936 paper we've known that digital computers can imitate anything such mechanical or electrical analog devices can do, but sometimes there's little advantage in doing so. We used to be surrounded by simple analog computers, especially in our cars, and still are to a lesser extent. One that's long gone was the carburettor, which slid needles of varying taper through nozzles to compute a ferociously complex function relating petrol/air ratio to engine load. One that remains is the camshaft, whose varying cam profiles compute a similar function to control valve timing. A less obvious one is the humble windscreen wiper, whose blade is actually attached via a whiffletree to spread the torque from the motor evenly along its length.

Just as my analog nostalgia was starting to wane, I turned on BBC 4 last night and watched a documentary about the Antikythera mechanism, an enigmatic bronze device of ancient Greek origin that was found on the sea-bed by sponge divers in 1900. Over fifty years of scientific investigation have revealed that this was a mechanical analog computer, almost certainly designed by Archimedes himself, whose rear face accurately calculated the dates of future solar and lunar eclipses, and whose front face was an animated display of the then-known planets' orbits around the sun. It worked using around 70 hand-cut bronze gears with up to 253 teeth each. We're constantly tempted toward hubris concerning our extraordinary recent advances in digital technology, but once you've allowed for some four hundred years of cumulative advances in chemistry and solid-state physics, it ought to be quite clear that those ancient Greeks possessed every bit as much sheer human ingenuity as we do. And look what happened to them...

PUBLISH AND BE DROWNED

Dick Pountain/PC Pro/Idealog 213: 12/04/2012

A couple of years ago I became quite keen on the idea of publishing my own book on the Web. I got as far as opening PayPal and Google Checkout accounts and setting up a dummy download page on my website to see whether their payment widgets worked. In the end I didn't proceed because I came to realise that though publishing myself minimised costs (no trees need die, no publisher's share taken), the chance of my rather arcane volume becoming visible amid the Babel of the internet also hovered around zero, even if I devoted much of my time to tweaking and twiddling and AdSensing. What's more the internet is so price-resistant that charging even something reasonable like £2 was likely to deter all-comers. But perhaps the real cause of my retreat was that not having a tangible book just felt plain wrong. It's possible I'll try again via the Kindle Store, but I feel no great urgency.

I'm not alone in this lack of enthusiasm: the fact that mainstream book publishers still vastly overcharge for their e-books suggests their commitment is equally tepid (I recently bought Pat Churchland's "Braintrust" in print for £1 less than the Kindle edition). I'm well versed in Information Theory and fully understand that virtual and paper editions have identical information content but, as George Soros reminded us again recently, economics isn't a science and economic actors are not wholly rational. The paper version of a book just is worth more to me than the e-version, both as a reader and as an author. I really don't want to pay more than £1 for an e-book, but I also don't want to write a book that sells for only £1, and that's all there is to it.

As Tom Arah ruefully explains in his Web Applications column this month, the ideal of a Web where everyone becomes their own author is moving further away rather than nearer, as Adobe dumps mobile Flash after Microsoft fails to support it in Windows 8 Metro. It's precisely the sort of contradictory thinking that afflicts me that helped firms like Apple monopolise Web content by corralling everything through its walled-garden gate. The Web certainly does enable people to post their own works, in much the same way as the Sahara Desert enables people to erect their own statues: what's the use of dragging them across the dunes if no-one can find them?

There's always a chance your work will go viral of course but only if it's the right sort of work, preferably with a cat in it (in this sense nothing much has changed since Alan Coren's merciless 1976 parody of the paperback market "Golfing for Cats" - with a swastika on the cover). The truth is that the internet turns out to be a phenomenally efficient way to organise meaningless data, but if you're bothered about meaning or critical judgement it's not nearly so hot (whatever happened to the Semantic Web?). This has nothing to do with taste or intelligence but is a purely structural property of the way links work. All the political blogs I follow display long lists of links to other recommended blogs, but the overlap between these is almost zero and the result is total overload. I regularly contribute to the Guardian's "Comment Is Free" forums but hate that they offer no route for horizontal communication between different articles on related topics. Electronic media invariably create trunk/branch/twig hierarchies where everyone ends up stuck on their own twig.

If the Web has a structural tendency to individualise and atomise, this can be counteracted by institutions like forums and groups that pull humans back together again to share critical judgments. Writing a novel or a poem may best be done alone, but publishing a magazine requires the coordinated efforts and opinions of a whole group of people. A musician *can* now create professional results on their own in the back-bedroom, but they might have more fun and play better on a stage, or in a studio, with other people. The success of a site like Stumblr shows that people are desperate for anything that can filter and concentrate the stuff they like out from the great flux of nonsense that the Web is becoming. The great virtue of sites like Flickr and SoundCloud is that they offer a platform on which to display your efforts before a selected audience of people with similar interests, who are willing and able to judge your work. Merely connecting people together is not enough. 

The billion dollars Facebook just paid for Instagram perhaps doesn't look so outrageous once you understand that it wasn't really technology but savvy and enthusiastic users - the sort Facebook wishes it was creating but isn't because it's too big, too baggy and too unorganised - that it was purchasing. It will be interesting to see how their enthusiasm survives the takeover. The Web is a potent force for democratising and levelling, but it's far from clear yet how far that's compatible with discovering and nurturing unevenly-distributed talent. If publishers have a future at all, it lies in learning to apply such skills as they have in that department to the raging torrent of content.

MUSIC MAESTRO PLEASE

Dick Pountain/PC Pro/Idealog 212 14/03/2012

I love music. By that I don't merely mean that I *like* music, and I don't mean that I write to a constant background of pop music from Spotify or the radio (on the contrary I can't write to music because I can't not listen so it distracts me). The kind of music I like is *good* music, by which I mean that ~1% of every genre that delivers the goods, that takes you away. It started in my teens with American rock and R&B (Little Richard, Chuck Berry, Bo Diddley), progressed through blues to jazz, then to classical (the links ran Charlie Parker to Bartok, back to Bach, then forward via Mozart, Beethoven and Schubert to Wagner, Chopin, Ravel, Debussy). Bluegrass, country, reggae, dubstep, Irish, Indian, just about anything as long as it's excellent. I play music too - guitar, bass guitar and dobro tolerably well, saxophone crudely - and I'm renowned among my friends for being able to extract a tune from any bizarre instrument I encounter, from conch shell to bamboo nose flute.

This being so, it's not surprising that over the years I've used computers to listen to music, to store music, to play music and even to compose music. As soon as I got a PC with a sound card I was writing programs, first in Basic, then in Forth, to play tunes on it, but perhaps because I play real non-keyboard instruments that aspect never really grabbed me - I've never owned a MIDI keyboard or other MIDI instrument. What did grab me was the challenge of trying to program the computer itself to generate sounds that might be accepted as music. Doing that from scratch is no mean feat because computers are completely without musical feeling: they have no sense of melody, harmony or rhythm, so you, the human, have to supply all that, one way or another.

The most obvious way is by creating an authoring platform that has the rules of some musical genre built into it. There are dozens of sequencer-like apps available now that achieve this for various strains of dance music, since computers are really good at calculating complicated beat patterns. There's also Koan Pro, well-known to fans of Brian Eno (I'm not one), which provides an enormously complex grid on which you can compose abstract kinds of music by tweaking hundreds of parameters. I wrestled with it for some months many years ago, but something in its structure still lent everything I tried a regular, dance-like beat. My formative years were spent immersed in bebop, '60s free jazz and country blues, where beat is vital but flexible, springy, variable - Parker's lightning scales, Robert Johnson's frantic strums, a Danny Richmond drum flurry - and I wanted that sort of sprung-step feeling in my computer-generated music rather than a strict BPM metronome. 

There was nothing for it but to build my own, so in the early '90s I wrote myself a MIDI generator. In those days Turbo Pascal was my language of choice and so I read the MIDI spec, deciphered the file format and wrote myself an API that let me output streams of valid MIDI events from a Pascal program. Then I wrote a library of functions that generate phrases, loops, rhythm patterns and other music elements. One crucial decision was not to represent whole notes but to separate pitch, duration and volume so programs could manipulate them separately. There were mathematical transforms to reverse or invert a tune, in the manner of Bach or John Adams. I composed "tunes" by expressing an algorithm in a short Pascal program, then compiling and running it to output a playable MIDI file. One early effort was based on the first 2000 prime numbers (my excuse: I'd just read "Gödel, Escher, Bach"). These early tunes are feeble examples of then-fashionable minimalism, multi-part fugues that no human could play, sub-Adams experiments in phase shifting, piano pieces like Conlon Nancarrow on a very bad day.
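
To give a flavour of that design decision, here is a fresh sketch in Python (invented names, nothing like the original Pascal library): keep pitch, duration and volume as separate streams and transforms such as reversal and inversion become one-liners, while a dash of random jitter on the durations gives you something springier than a strict metronome.

    import random
    from dataclasses import dataclass

    @dataclass
    class Phrase:
        pitches:   list          # MIDI note numbers
        durations: list          # in beats
        volumes:   list          # MIDI velocities, 0..127

        def reversed(self):
            return Phrase(self.pitches[::-1], self.durations[::-1], self.volumes[::-1])

        def inverted(self):
            axis = self.pitches[0]           # mirror every pitch about the first note
            return Phrase([2 * axis - p for p in self.pitches], self.durations, self.volumes)

        def sprung(self, jitter=0.08):
            # Nudge each duration by a small random amount: sprung-step rather than metronomic.
            return Phrase(self.pitches,
                          [d * random.uniform(1 - jitter, 1 + jitter) for d in self.durations],
                          self.volumes)

        def notes(self):
            # Zip the three streams back into (pitch, duration, volume) events for output.
            return list(zip(self.pitches, self.durations, self.volumes))

    theme = Phrase([60, 62, 64, 67], [1, 0.5, 0.5, 2], [80, 70, 70, 100])
    print(theme.inverted().notes())
    # [(60, 1, 80), (58, 0.5, 70), (56, 0.5, 70), (53, 2, 100)]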

Windows killed off Turbo Pascal and though I always meant to re-write an interactive version (that is, cutting out the intermediate MIDI file) in Delphi, I never got around to it. Later I fell for the charms of Ruby and planned to write an interactive composing platform in that: I still have the Gem containing the necessary MIDI interface, but that never happened either. What finally revived my interest was meeting a young American muso who turned me onto SoundCloud.com, which does for music what Flickr does for photos, and reading Philip Ball's superb book "The Music Instinct" which transformed my understanding of harmony by explaining it at both physical and physiological levels.

I can still write Turbo Pascal, better than ever in fact, by installing its command-line compiler as a Tool in the excellent TextPad editor. MIDI remains MIDI and tools for mangling it are ten-a-penny. My latest efforts are closer to free-form jazz, Gil Evans on horse tranquilisers rather than Philip Glass. If you liked Frank Zappa, or the theme music from South Park, you might be able to tolerate them: if you lean more towards Adele or Michael Buble then (Health Warning) they might make you ill. And anyone who mentions Tubular Bells is asking for a punch up the 'froat.

[Dick Pountain hopes that the way to Carnegie Hall is via a FOR..NEXT loop, but isn't holding his breath. You can sample the tunes at http://snd.sc/Aumzot]

Wednesday 12 September 2012

YOUR ATTENTION PLEASE

Dick Pountain/PC Pro/Idealog 211 07/02/2012

My partner, along with thousands of others, received an iPad for Christmas this year. She's fairly computerphobic (though copes with her Windows XP netbook) and so she's pretty pleased with the device, and I'd have to say that I am too. The minimal user interface works very well, and the clean graphics put current Android-based competitors to shame. I'd like a hardware Back button like Android's, and a USB port would be nice too, but as a portable email client and e-book reader it's hard to beat.

What I've learnt from my iPad experience so far is the significance of its large form factor. I've used plenty of touch screens before - Palm Pilots right from the 1996 launch and an Android phone for more than a year - and I've even written simple apps for both, but I've never until now used an A4-sized touch-only device for any length of time. It's a game changer. Size really does matter. The reason is simple physiology: a mobile phone's screen is small enough to fall wholly within your visual attention zone, but an A4-sized screen is more like a magazine or newspaper page where attention can only take in one section at a time. That makes design into a battle to direct attention. Now there's quite a lot known about this class of problem, and if you're planning to write apps for full-sized tablet computers you'd do well to acquaint yourself with this knowledge.

Obviously cognitive scientists have been studying the subject for years, driven by the needs of the aviation and automobile industries to accommodate more and more instruments into cockpit displays. They need to strike the right balance between grabbing immediate attention for high-priority alerts while retaining appropriate visibility for stuff you need to monitor continually. Equally obviously there's a huge reservoir of expertise in the graphic design profession, especially in the magazine and news trade. We get a head start in iPad app design thanks to our in-house art people, but all the tools of the trade from "White Out Box" to "News in Brief" will need tweaking to become suitable for the tablet screen.

You can stumble across further lessons in attention-seeking in the oddest of places. Last week's Guardian Food Blog featured an article entitled "The hidden messages in menus: Some restaurant menus can tell the diner as much about themselves as what's for dinner". The gist was that researchers at San Francisco State University recently overturned conventional wisdom about restaurant menu design, which was that diners start to read at the middle of the right-hand column, then jump to top left and read downwards. This belief has governed where restaurants position their special offers for years, and hence directly affects their turnover, but eye-movement sensors showed the SFSU team that it's not true - in fact diners read menus just like any book, down from top left and up again. This may yet cause a boom for local US print shops, as billions of menus get reprinted. The rest of the article describes how professional menu designers can manipulate diners emotionally, making certain options feel like bargains, while others provoke cheapskate guilt or flatter generosity.

I'm currently reading a fascinating book called "Thinking Fast and Slow" by a great guru of cognitive psychology, Daniel Kahneman, Princeton professor and Nobel laureate, in which he summarises a lifetime of study into the role of intuition in psychology. He's identified two independent thought mechanisms within the human brain. What he calls System 1 is intuitive, jumps to conclusions, is fast, amoral and fairly inaccurate: it's what saves us from hidden tigers and falling rocks. The other, System 2, is slower, deliberative, moral and responsible for self-restraint and future planning: it's who we think we are, though in fact System 1 is in charge most of the time. I already knew this because of my ability (which I'm sure I share with many others) to instantly re-find my place in a long text after taking a break - it only works if I *completely* trust where my eyes first fall, and if I try to remember or think about it at all, it's lost. System 1 is also what makes me, temporarily, improve at darts or pool between my third and fourth pints. Kahneman's book is a goldmine of information about cognitive capabilities, delusions, illusions and misconceptions that ought to be a great help in UI design, given that System 1 is almost always in charge of such interactions.   

A Windows PC is pretty much a System 2 device, its plethora of menus, tabs and control panels forcing you to think about what you're doing for much of the time. I don't mind that but it appears that millions of people do, hence the success of the iPad. In effect Apple's device is a portable laboratory for cognitive science experiments, controlled entirely by gesture and subconscious perception. I expect the revolution in UI design that it provoked to develop in quite unexpected directions over the next few years, which should put a lot of the fun back into computing. Will I be buying my own iPad? Perhaps not, but I can now afford to wait till Android tab vendors get it right (which may take some while, to judge by present form). 


THE WALLS HAVE EARS

Dick Pountain/PC Pro/Idealog 210 - 18/01/2012

I'm very far from being an online privacy nut. I don't pay for fancy password storage services, I often click buttons to give feedback or share things like playlists, and none of my time is spent ranting against Google and Facebook on online forums. However events over the last couple of weeks have possibly pushed me over some threshold of tolerance. But perhaps it's best if I tell the story from the beginning.

I went away for two weeks over the New Year, and made the stupid mistake of trying to be a responsible citizen by turning off all my electrical appliances before leaving. That included my BT broadband router, but when I returned last week I discovered that my internet service had not actually gone away entirely but was being throttled down to an unusable 200kbps, up and down. This has actually happened to me once before, last autumn after a far longer period of disuse, but that time it came back of its own accord overnight. This time it didn't and so on the second day I rang BT Broadband support to report a fault. I'm happy to report that both the Indian gentlemen who handled my problem were models of politeness and efficiency, displayed a full technical grasp of the problem, rang back when they promised and escalated the fault to its proper level. (Why BT permits it to happen in the first place is for a future column and isn't my topic here).

Once the engineers have reset your ADSL line it takes several days to re-train the local DSLAM before full speed is achieved again, and so I emailed our own RWC net guru Cassidy to gripe about things in general. He advised me to use the line heavily during the retraining process because "the more data it moves the more retrain info it has to go on". I thought for a while about the best way to achieve unattended line loading, and decided that Spotify, set to repeat the same playlist, is the easiest way for downloads while syncing a huge directory of photographs up to DropBox is a reasonable way to occupy the uplink overnight. That's when things started to get weird.

Next day I received an email from my sister in the far north of Scotland inquiring whether everything was all right. It read "Maggie Pountain commented on a playlist you listened to: András Schiff - Bach: Goldberg Variations. Maggie wrote: 'Haven't you turned this off or can't you sleep.'" She was clearly able to see what was playing on my Spotify account in real time, and worried because it hadn't changed for 12 hours or more. Next day I had more mails from various friends commenting on my listening habits, worried that I was becoming obsessed by Ravel and Debussy. Now much as I love Bach, Ravel and Debussy's piano music, I had of course chosen these particular playlists for their length rather than quality on this occasion, and was only actually listening in short bursts whenever I happened to be at the keyboard. The main point though was that, all of a sudden, everyone in the world seemed to know what I was listening to.

In my devil-may-care, I'm-not-a-privacy-nut mode I had indeed voluntarily agreed to link my Spotify and Facebook accounts so that friends could see and share my playlists (and vice versa), but that's not at all the same as everyone knowing what you're listening to right now in real time, which is decidedly creepy. Then I realised some of the friends who'd mailed weren't even on Facebook. I logged into my Spotify preferences, which I hadn't touched for several years, and discovered two tick boxes called "Share my activity on Spotify Social" and "Show what I listen to on Facebook" which I don't recall seeing before and which were both ticked. I unticked them both. A rootle around among the preferences also explained that "Private Session" option which had started to appear on the pull-down menu for my account: if you don't want everyone to see what you're listening to you can choose to make this session private, but the default is public and your private session will terminate each time you restart the Spotify client. This is pretty much the sort of behaviour that makes real privacy nuts rant against Facebook, and even if Spotify did catch the disease from Facebook that's really not any sort of an excuse.

I can't really say why people listening in to what I'm playing right now is more disturbing to me than any of the similar stunts Facebook pulls, it just is. It's not as though I spend a lot of time furtively listening to Hitler's speeches, Lloyd-Webber musicals or porno-music (is there such a thing?), but music is important to me and my current choice feels far more intimate than, say, my political opinions, which I'm happy to share with anyone who'll stand still for long enough. A nagging feeling persists that this episode has tipped me over some threshold, into becoming an antisocial networker: I find myself ever more irritated with Facebook and have been poised on the verge of closing my account several times recently. Spotify though I can't do without.

Tuesday 7 August 2012

THE SLOPES OF MOUNT NIKON

Dick Pountain/PC Pro/Idealog 209 - 12/12/2011

It shouldn't come as any huge surprise to regular readers if I admit being agnostic with respect to the newly-founded cult of Jobsianism. Agnostic rather than atheist, because I did greatly admire some of Steve Jobs' qualities, most particularly his taste for great design and his uncanny skill at divining what people would want if they saw it, rather than what they want right now. On the other hand I found Jobs' penchant for proprietary standards, monopoly pricing, patent trolling and "walled-garden" paternalism deeply repugnant - and to judge from Walter Isaacson's new authorised biography I wouldn't have much cared to drink beer or drop acid with him. In my own secular theology Jobs will now be occupying a plateau somewhere on the lower slopes of Mount Nikon (which used to be called Mount Olympus before the Gods dumped the franchise in protest at that accounting scandal) alongside other purveyors of beautiful implements like Leo Fender and Enzo Ferrari.

So which figures from the history of computing would I place higher up the slopes of Mt Nikon? Certainly Dennis Ritchie (father of the C language) and John McCarthy (father of Lisp) both of whom died within a week or so of Jobs and whose work helped lead programming out of its early primitivism - there they could resume old arguments with John Backus (father of Fortran). But on a far higher ledge, pretty close to the summit, would be the extraordinarily talented physicist Richard Feynman, whom not very many people would associate with computing at all. I've just finished reading his 1985 book, which I had somehow overlooked, called "QED: The Strange Theory of Light and Matter", a collection of four public lectures he gave in New Zealand and California explaining quantum electrodynamics for a popular audience. The book amply demonstrates Feynman's brilliance as a teacher who applied sly humour and inspired metaphors to explain the most difficult of subject, er, matter. He cleverly imagines a tiny stopwatch whose second hand represents the phase of a travelling photon, and through this simple device explains all the phenomena of optics, from reflection and refraction to the two-slit quantum experiment, more clearly than I've ever seen before. But what does this have to do with computers?

Well, having finished his book in a warm glow of new-found understanding, I was prompted to take down a book I've mentioned here before, by Carver Mead, the father of VLSI (very large scale integrated circuit) technology and an ex-student of Feynman's at Caltech. Mead's "Collective Electrodynamics" defends the wave view of sub-atomic physics preferred by Einstein but rejected by Niels Bohr (and the majority of contemporary physicists), using ideas about photon absorption taken directly from Feynman. But, once again, what does this have to do with computers? Quite a lot actually. In his introduction to Collective Electrodynamics Mead offers an anecdote from the early 1960s which includes these remarks:

"My work on [electron] tunnelling was being recognized, and Gordon Moore (then at Fairchild), asked me whether tunnelling would be a major limitation on how small we could make transistors in an integrated circuit. That question took me on a detour that was to last nearly 30 years, but it also lead me into another collaboration with Feynman, this time on the subject of computation." Mead presented their preliminary results in a 1968 lecture and "As I prepared for this event, I began to have serious doubts about my sanity. My calculations were telling me that, contrary to all the current lore in the field, we could scale down the technology such that *everything got better*" In fact by 1971 Mead and Feynman were predicting Moore's Law, from considerations of basic quantum physics.

Now utopian predictions about the potential power of quantum computers are the flavour of this decade, but it's less widely appreciated that our humble PCs already depend upon quantum physics: QED, and its sister discipline QCD (quantum chromodynamics), underlie all of physics, all of chemistry, and actually all of everything. The band gap of silicon that makes it a semiconductor and enables chips to work is already a quantum phenomenon. The first three of Feynman's lectures in "QED" are mostly about photons, but his last chapter touches upon "the rest of physics", including Pauli's Exclusion Principle. Electrons are such touchy creatures that at most two of opposite spins can ever occupy the same state, a seemingly abstract principle which determines the ways that atoms can combine, that's to say, all of chemistry, cosmology, even biology. It's why stone is hard and water is wet. Stars, planets, slugs, snails, puppy dogs' tails, all here thanks to the Exclusion Principle, which is therefore as good a candidate as any for the bounteous creative principle in my little secular theology. Its dark sworn enemy is of course the 2nd Law of Thermodynamics: in the end entropy or chaos must always prevail.

It seems I've reinvented a polytheistic, materialistic version of Zoroastrianism, a Persian religion from around 600BC. At the centre of my theology stands cloud-capped Mount Nikon, its slopes teeming with great minds who advanced human understanding like Aristotle, Spinoza and Nietzsche, with ingenious scientists like Einstein and Feynman, and lower down with talented crazies who gave us beautiful toys, like Steve Jobs.

Tuesday 3 July 2012

VOICE OF MULTITUDES

Dick Pountain/PC Pro/Idealog 208: 15/11/2011

It can't have escaped regular readers of this column that I'm deeply sceptical about several much-hyped areas of progress in IT. To pick just a couple of random examples, I've never really been very impressed by voice input technologies, and I'm extremely doubtful about the claims of so-called "strong" Artificial Intelligence, which insists that if we keep on making computers run faster and store more, then one day they'll become as smart as we are. As if that doesn't make me sound grouchy enough, I've been a solid Windows and PC user for 25 years and have never owned an Apple product. So surely I'm not remotely interested in Apple's new Siri voice system for the iPhone 4S? Wrong. On the contrary I think Siri has an extraordinary potential that goes way beyond the purpose Apple purchased it for, which was to impress people's friends in wine bars the world over and thus sell more iPhones. It's not easy to justify such faith at the moment, because it depends upon a single factor - the size of the iPhone's user base - but I'll have a go.

I've been messing around off and on with speech recognition systems since well before the first version of Dragon Dictate, and for many years I tried to keep up with the research papers. I could ramble on about "Hidden Markov Models" and "power cepstrums" ad nauseam, and was quite excited, for a while, by the stuff that the ill-fated Lernout & Hauspie was working on in the late 1990s. But I never developed any real enthusiasm for the end results: I'm somewhat taciturn by nature, so having to talk to a totally dumb computer screen was something akin to torture for me ("up, up, up, left, no left you *!£*ing moron...")

This highlights a crucial problem for all such systems, namely the *content* of speech. It's hard enough to get a computer to recognise exactly which words you're saying, but even once it has they won't mean anything to it. Voice recognition is of very little use to ordinary citizens unless it's coupled to natural language understanding, and that's an even harder problem. I've seen plenty of pure voice recognition systems that are extremely effective when given a highly restricted vocabulary of commands - such systems are almost universally employed by the military in warplane and tank cockpits nowadays, and even in some factory machinery. But asking a computer to interpret ordinary human conversations with an unrestricted vocabulary remains a very hard problem indeed.

I've also messed around with natural language systems myself for many years, working first in Turbo Pascal and later in Ruby. I built a framework that embodies Chomskian grammar rules, into which I can plug different vocabularies so that it spews out sentences that are grammatical but totally nonsensical, like god-awful poetry:

    Your son digs and smoothly extracts a gleaming head
          like a squid.
    The boy stinks like a dumb shuttle.
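
The idea is easy to caricature in a few lines of Python (a toy of my own for illustration, nothing like the Pascal or Ruby framework itself): fixed grammar rules, a pluggable vocabulary, and out come sentences that parse perfectly and mean nothing at all.

    import random

    # A tiny context-free grammar: upper-case symbols expand via rules,
    # anything not in the table is a terminal word.
    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
        "VP":  [["V", "NP"], ["V", "NP", "PP"]],
        "PP":  [["P", "NP"]],
        "Det": [["the"], ["your"], ["a"]],
        "Adj": [["gleaming"], ["dumb"], ["frantic"]],
        "N":   [["boy"], ["squid"], ["shuttle"], ["head"]],
        "V":   [["digs"], ["extracts"], ["stinks like"]],
        "P":   [["under"], ["beside"]],
    }

    def expand(symbol):
        if symbol not in GRAMMAR:                     # terminal word: emit it
            return [symbol]
        production = random.choice(GRAMMAR[symbol])   # pick one rule at random
        return [word for part in production for word in expand(part)]

    print(" ".join(expand("S")).capitalize() + ".")
    # e.g. "Your boy extracts a gleaming squid beside the shuttle."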

So to recap, in addition to first recognising which words you just said, and then parsing the grammar of your sentence, the computer comes up against a third brick wall, meaning, and meaning is the hardest problem of them all.

However there has been a significant breakthrough on the meaning front during the last year. I'm talking of course about IBM's huge PR coup in having its Watson supercomputer system win the US TV quiz show "Jeopardy" against human competitors, which I discussed here back in Idealog 200. Watson demonstrated how the availability of cheap multi-core CPUs, when combined with software like Hadoop and UIMA capable of interrogating huge distributed databases in real time, can change the rules of the game when it comes to meaning analysis. In the case of the Jeopardy project, that database consisted of all the back issues of the show plus a vast collection of general knowledge stored in the form of web pages. I've said that I'm sceptical of claims for strong AI, that we can program computers to think the way we think - we don't even understand that ourselves and computers lack our bodies and emotions which are vitally involved in the process - but I'm very impressed by a different approach to AI, namely "case based reasoning" or CBR.

This basically says don't try to think like a human, instead look at what a lot of actual humans have said and done, in the form of case studies of solved problems, and then try to extract patterns and rules that will let you solve new instances of the problem. Now to apply a CBR-style approach to understanding human every-day speech would involve collecting a vast database of such speech acts, together with some measure of what they were intended to achieve. But surely collecting such a database would be terribly expensive and time consuming? What you'd need is some sort of pocketable data terminal that zillions of people carry around with them during their daily rounds, and into which they would frequently speak in order to obtain some specific information. Since millions upon millions of these would be needed, somehow you'd have to persuade the studied population to pay for this terminal themselves, but how on earth could *that* happen? Duh.
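
Purely to fix ideas, here is the CBR pattern at its absolute crudest (a sketch of mine with invented cases, nothing to do with Siri's real machinery): keep a base of already-interpreted utterances, retrieve the closest match, and reuse its interpretation.

    # Case base: utterances that have already been "solved", paired with
    # the interpretation a human (or an earlier system) assigned to them.
    CASES = [
        ("what is the weather like today", "weather_query"),
        ("set an alarm for seven tomorrow", "set_alarm"),
        ("how do I get to the station",     "directions_query"),
    ]

    def similarity(a, b):
        # Crude bag-of-words overlap; real systems use far richer features.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb)

    def interpret(utterance):
        # Retrieve the nearest solved case and reuse its interpretation.
        case_text, intent = max(CASES, key=lambda c: similarity(utterance, c[0]))
        return intent

    print(interpret("what's the weather today"))      # weather_query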

Collecting and analysing huge amounts of speech data is a function of the Siri system, rendered possible by cloud computing and the enormous commercial success of the iPhone, and such analysis is clearly in Apple's own interest because it incrementally improves the accuracy of Siri's recognition and thus gives it a hard-to-match advantage over any rival system. The big question is, could Apple be persuaded or paid to share this goldmine of data with other researchers in order to build a corpus for a more generally available natural language processing service? Perhaps once its current bout of manic patent-trolling subsides a little we might dare to ask...

[Dick Pountain doesn't feel quite so stupid talking to a smartphone as he does to a desktop PC]

OOPS THEY'VE DONE IT AGAIN

Dick Pountain/PC Pro/Idealog 207 16/10/2011

Should there be anyone out there who's been reading me since my very first PC Pro column, I'd like to apologise in advance for revisiting a topic I've covered here no less than four times before (in 1994, 1995, 1997 and 2000). That topic is how Microsoft's OS designers just don't get what an object-oriented user interface (OOUI) ought to look like, and the reason I'm covering it again here is the announcement of the Metro interface for Windows 8, which you'll find described in some detail in Simon Jones' Advanced Office column on pg xxx of this issue. It would appear that, after 17 years, they still don't quite get it, though they're getting pretty close.

A proper OOUI privileges data over applications, so that your computer ceases to be a rat's nest of programs written by people like Microsoft, Adobe and so on and becomes a store of content produced by you: documents, spreadsheets, pictures, videos, tunes, favourites, playlists, whatever. Whenever you locate and select one of these objects, it already knows the sorts of things you might want to do with it, like view it, edit it, play it, and so it invisibly launches the necessary application for you to do that. Metro brings that ability to Windows 8 in the shape of "Tiles" which you select from a tablet's touch screen with your finger, and which cause an app to be launched. The emphasis is still on the app itself (as it has to be since Microsoft intends to sell these to you from its app store), but it is possible to create "secondary" Tiles that are pinned to the desktop and launch some particular data file, site, feed or stream.

It's always been possible to do something like this with Windows, in a half-arsed kind of way, and I've been doing so for 15 years now. It's very, very crude because it's wholly dependent upon fragile and ambiguous filename associations, which assign a particular application to open a particular type of file identified by its name extension. Ever since Windows 95 my desktop has contained little but icons that open particular folders, and clicking on any file within one of these just opens it in Word, Textpad, Excel or whatever. I need no icons for Office applications, Adobe Reader or whatever, because I never launch them directly.
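
Stripped of the registry plumbing, a filename association really is no more than this sort of lookup (a sketch with made-up paths, not how Windows actually stores it): the extension is the only evidence available, which is exactly why the scheme is so fragile and ambiguous.

    import subprocess
    from pathlib import Path

    # One application per extension, nothing smarter than that.
    ASSOCIATIONS = {
        ".doc": r"C:\Program Files\Office\winword.exe",
        ".txt": r"C:\Program Files\TextPad\textpad.exe",
        ".xls": r"C:\Program Files\Office\excel.exe",
    }

    def open_with_associated_app(path):
        app = ASSOCIATIONS.get(Path(path).suffix.lower())
        if app is None:
            # Rename or mistype the extension and the "object" forgets what it is.
            raise ValueError(f"No association for {path}")
        subprocess.run([app, str(path)])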

This was actually a horrid step backwards, because under Windows 3.1 I'd grown used to an add-on OOUI called WinTools that was years ahead of the game. Under WinTools desktop icons represented user data objects, which understood a load of built-in behaviours in addition to the app that opened and edited them. You could schedule them, add scripts to them, and have them talk to each other using DDE messages. It featured a huge scrolling virtual desktop, which on looking back bore an uncanny resemblance to the home screens on my Android phone. Regrettably Tool Technology Publishing, the small outfit that developed WinTools, was unable to afford to port it to Windows 95 and it disappeared, but it kept me using Windows 3.1 for two years after everyone else had moved on to 95.

That resemblance to Android is more than just coincidence because hand-held, touch-screen devices have blazed the trail toward today's object-oriented user interfaces. For most people this trend began with Apple's iPhone and iPod Touch, but to give credit where it's due PalmOS pioneered some of the more important aspects of OOUI. For example on the Palm Pilot you never needed to know about or see actual files: whenever you closed an app it automatically saved its content and resumed where you left off next time, a feature now taken for granted as absolutely natural by users of iPads and other tablets.

Actually though we've barely started to tap the real potential of OOUIs, and that's why Metro/Windows 8 is quite exciting, given Microsoft's expertise in development tools. Processing your data via intelligent objects implies that they should know how to talk to each other, and how to convert their own formats from one app to another without manual intervention. As Simon Jones reports, the hooks to support this are already in place in Metro through its system of "contracts": objects of different kinds that implement Search, Share, Settings and Picker interfaces can contract to find or exchange data with one another seamlessly, which opens up a friendlier route to create automated workflows.

In his Advanced Windows column last month Jon Honeyball sang the praises of Apple's OSX Automator, which enables data files to detect events like being dropped into a particular folder, and perform some action of your choice when they do so. This ability is built right into the file system itself, a step beyond Windows where that would require VBA scripts embedded within Office documents (for 10 years I've had to use a utility called Macro Express to implement such inter-file automation). Now tablet-style OSes like Metro ought to make possible graphical automation interfaces: simply draw on-screen "wires" from one tile into an app, then into another, and so on to construct a whole workflow to process, for example, all the photographs loaded from a camera. Whoever cracks that in a sufficiently elegant way will make a lot of money.
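
Underneath, that graphical wiring amounts to nothing more exotic than function composition, something like this sketch (the step names are invented for illustration, and nothing here is Metro's contract API): each tile is a step, and a workflow is just the steps run in order.

    def load_photos(folder):
        print(f"loading photos from {folder}")
        return [f"{folder}/img_{i}.jpg" for i in range(3)]    # stand-in for real files

    def resize(photos):
        return [p.replace(".jpg", "_small.jpg") for p in photos]

    def upload(photos):
        for p in photos:
            print("uploading", p)
        return photos

    def workflow(*steps):
        # Compose the steps: the output of each becomes the input of the next.
        def run(value):
            for step in steps:
                value = step(value)
            return value
        return run

    process_camera_card = workflow(load_photos, resize, upload)
    process_camera_card("/media/camera")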

FRAGILE WORLD

Dick Pountain/PC Pro/Idealog 206/ 19/09/2011

I'm writing this column in the middle of a huge thunderstorm that possibly marks the end of our smashing Indian Summer in the Morra Valley (I know, sorry). Big storms in these mountains sometimes strike a substation and knock out our mains supply for half a day, but thankfully not this time - without electric power we have no water, which comes from a well via a fully-immersed demand pump. Lightning surges don't fry all the electrical goods in the house thanks to an efficient Siemens super-fast trip-switch, but years ago, before I had broadband, I lost a Thinkpad by leaving its modem plugged in. Lightning hit phone line, big mess, black scorchmarks all the way from the junction box...

Nowadays I have an OK mobile broadband deal from TIM (Telecom Italia Mobile), €24 per month for unlimited internet (plus a bunch of voice and texts I never use), which I don't have to pay when we're not here. It's fast enough to watch BBC live video streams and listen to Spotify, but sometimes it goes "No Service" for a few hours after a bad thunderstorm, as it has tonight. That gives me a sinking feeling in my stomach. I used to get that feeling at the start of every month, until I realised the €24 must be paid on the nail to keep service (and there's no error message that tells you so, just "No Service"). Now I know and I've set up with my Italian bank to top-up via their website - but if I leave it too late I have to try and do that via dial-up. Sinking feeling again.

Of course I use websites to deal with my UK bank, transfer funds to my Italian bank, pay my income tax and my VAT, book airline tickets and on-line check in. A very significant chunk of my life now depends upon having not merely an internet connection, but a broadband internet connection. And in London just as much as in the Umbro-Cortonese. I suspect I'm not alone in being in this condition of dependency. The massive popularity of tablets has seen lots of people using them in place of PCs, but of course an iPad is not much more than a fancy table mat without a 3G or Wi-Fi connection. But then, the internet isn't going to go away is it? Well, er, hopefully not.

After the torrid year of riots, market crashes, uprisings, earthquakes and tsunamis, and near-debt-defaults we've just had, who can say with the same certainty they had 10 years ago that every service we have now will always be available? The only time I ever detected fear in ex-premier Tony Blair's eyes was on the evening news during the 2000 petrol tanker drivers' strike, when it became clear we were just a day or so from finding empty shelves at the supermarket. In Alistair Darling's recent memoirs he comes clean that in 2008 - when he and Gordon were wrestling the financial crisis precipitated by the collapse of Lehman Brothers - it was at one point possible that the world's ATM machines could all go offline the next morning. Try to imagine it. It's not that all your money has gone away (yet), just that you can't get at it. How long would the queues be outside high-street branches, and how long would you be stuck in one? My bank repeatedly offers me a far better interest rate if I switch to an account that's only accessible via the internet, but much as it pains me I always refuse.

Now let's get really scary and start to talk about the Stuxnet Worm and nuclear power stations, Chinese state-sponsored hackers, WikiLeaks and Anonymous and phishing and phone hacking. Is it possible that we haven't thought through the wisdom of permitting our whole lives to become dependent upon networks that no-one can police, and none but a handful of engineers understand or repair? When a landslide blocked the pass between our house and Castiglion Fiorentino a few years back, some men with a bulldozer from the Commune came to clear it, but at a pinch everyone in the upper valley could have pitched in (they all have tractors). Not so with fixing the internet. I might know quite a lot about TCP/IP, but I know bugger-all about cable splicing or the signalling layers of ATM and Frame Relay, or DSLAMs.

What contributes most to the fragility of our brave new connected world though is lack of buffering. Just-in-time manufacturing and distribution mean that no large stocks are kept of most commodities, everyone assuming that there will always be a constant stream from the continuous production line, always a delivery tomorrow. It's efficient, but if it stops you end up with nothing very fast. Our water is like that: shut off mains power and two seconds later, dry. I could fix that by building a water-tank and have the pump keep it full, via a ballcock valve like a big lavatory cistern. Then I could buy a lot of solar panels and a windmill to keep the pump running (plus my laptop). I could even buy a little diesel generator and run it on sunflower oil. I'm not saying I will, but I won't rule it out quite yet...

GRAND THEFT TECHNO

Dick Pountain/PC Pro/Idealog 205/  14/08/2011

Watching CCTV footage of the London riots shot from a high perspective, it was hard not to be reminded of video games like Grand Theft Auto. I don't want to open up that rusting can of worms about whether or not violent games cause imitation - the most I'll say is that these games perhaps provide an aesthetic for violence that happens anyway. The way participants wear their hoods, the way they leap to kick in windows, even the way they run now appears a little choreographed because we've seen the games. But this rather superficial observation underestimates the influence of digital technologies in the riots. The role of Blackberry messaging and Twitter in mustering rioters at their selected targets has been chewed over by the mainstream press ad nauseam, and David Cameron is now rumbling about suspending such services during troubles. This fits in well with my prediction, back in Idealog 197, that governments are now so nervous about the subversive potential of social media that the temptation to control them is becoming irresistible.

The influence of technology goes deeper still. The two categories of goods most looted during the riots were, unsurprisingly, clothes and electronic devices and the reason isn't too hard to deduce - brands have become a crucial expression of identity to several generations of kids. Danny Kruger, a youth worker and former adviser to David Cameron, put it like this in the Financial Times: "We have a generation of young people reared on cheap luxuries, especially clothes and technology, but further than ever from the sort of wealth that makes them adults. A career, a home of your own - the things that can be ruined by riots - are out of sight. Reared on a diet of Haribo, who is surprised when they ransack the sweetshop?"

The latest phase of the hi-tech revolution makes this gap feel wider still. Neither PCs nor laptops were ever very widely desired: only nerds could set them up and they were barely usable for the exciting stuff like Facebook or 3D games. Steve Jobs and his trusty designer Jonathan Ive, together with Sony and Nintendo, changed that for ever. Electronic toy fetishism really took off with the iPod (which just about every kid in the UK now possesses) but it reached a new peak over the last year with the iPad, ownership of which has quickly become the badge of middle-class status. These riots weren't about relative poverty, nor unemployment, nor police brutality, nor were they just about grabbing some electronic toys for free. They were a rage (tinged with disgust) against exclusion from full membership of a world where helping yourself to public goods - as MPs and bankers are seen to do - is rife, and where you are judged by the number and quality of your toys. They demonstrated a complete collapse of respect for others' property.

I've been arguing for years that the digital economy is a threat to the very concept of property. Property is not a relationship between persons and things but rather a relationship between persons *about* things. This thing is mine, you can't take it, but I might give it, sell it or rent it to you. This relationship only persists so long as most people respect it and those who don't are punished by law. The property relationship originally derives from two sources: from the labour you put into getting or making a thing, and from that thing's *exclusivity* (either I have or you have it but not both of us). Things like air and seawater that lack such exclusivity have never so far been made into property, and digital goods, for an entirely different reason, fall into this category. Digital goods lack exclusivity because the cost of reproducing them tends toward zero, so both you and I can indeed possess the same game or MP3 tune, and I can give you a copy without losing my own. The artist who originally created that game or tune must call upon the labour aspect of property to protest that you are depriving them of revenue, but to end users copying feels like a victimless crime and what's more one for which punishment has proved very difficult indeed.

I find it quite easy to distinguish between digital and real (that is, exclusive) goods, since most digital goods are merely representations of real things. A computer game represents an adventure which in real life might involve you being shot dead. But I wonder whether recent generations of kids brought up with ubiquitous access to the digital world aren't losing this value distinction. I don't believe that violent games automatically inspire violence, but perhaps the whole experience of ripping, torrents and warez, of permanent instant communication with virtual friends, is as much responsible for destroying respect for property as weak parenting is. Those utopians who believe that the net could be the basis of a "gift economy" need to explain precisely how, if all software is going to be free, its authors are going to put real food on real tables in real houses that are really mortgaged. And politicians of all parties are likely to give the police ever more powers to demonstrate that life is not a computer game.

[Dick Pountain is writing a game about a little moustachioed Italian who steals zucchini from his neighbour's garden, called "Grand Theft Orto"]

UNTANGLED WEB?

Dick Pountain/PC Pro/Idealog 204/14/2011

In my early teens I avidly consumed science-fiction short stories (particularly anthologies edited by Groff Conklin), and one that stuck in my mind was "A Subway Named Moebius", written in 1950 by US astronomer A.J. Deutsch. It concerned the New York subway system, which in some imagined future had been extended and extended until its connectivity exceeded some critical threshold, so that when a new line was opened a train full of passengers disappeared into the fourth dimension where it could be heard circulating but never arrived. The title is an allusion to the Moebius Strip, an object beloved of topologists which is twisted in such a way that it only has a single side.

I've been reminded of this story often in the last few years, as I joined more and more social networks and attempted to optimise the links between them all. My first, and still favourite, is Flickr to which I've been posting photos for five years now. When I created my own website I put a simple link to my Flickr pix on it, but that didn't feel enough. I soon discovered that Google Sites support photogalleries and so placed a feed from my Flickr photostream on a page of my site. Click on one of these pictures and you're whisked across to Flickr.

Then I joined Facebook, and obviously I had to put links to Flickr and to my own site in my profile there. I started my own blog and of course wanted to attract visitors to it, so I started putting its address, along with that of my website, in the signature at the bottom of all my emails. Again that didn't feel like enough, so I scoured the lists of widgets available on Blogger and discovered one that would enable me to put a miniature feed from my blog onto the home page of my website. Visitors could now click on a post and be whisked over to the blog, while blog readers could click a link to go to my website.   

Next along came LibraryThing, a bibliographic site that permits you to list your book collection, share and compare it with other users. You might think this would take months of data entry, but the site is cleverly designed and connected to the librarians' ISBN database, so merely entering "CONRAD DARKNESS" will find all the various editions of Heart of Darkness, and a single click on the right one enters all its details. I put 800+ books up in a couple of free afternoons. It's an interesting site for bookworms, particularly to find out who else owns some little-read tome. I suppose it was inevitable that LibraryThing would do a deal with Facebook, so I could import part of my library list onto my Facebook page too. 

I've written a lot here recently about my addiction to Spotify, where I appear to have accumulated 76 public playlists containing over 1000 tracks: several friends are also users and we swap playlists occasionally. But then, you guessed it, Spotify did a deal with Facebook, which made it so easy (just press a button) that I couldn't resist. Now down the right-hand edge of my Spotify window appears a list of all my Facebook friends who are also on Spotify - including esteemed editor Barry Collins - and I can just click one to hear their playlists.

There are now so many different routes to get from each area of online presence to the others that I've completely lost count, and the chains of links often leave me wondering quite where I've ended up. I haven't even mentioned LinkedIn, because it has so far refrained from integrating with Facebook (though of course my profile there has links to my Flickr, blog and websites). And this is just the connectivity between my own various sites: there's a whole extra level of complexity concerning content, because just about every web page I visit offers buttons to share it with friends or to Facebook or wherever.

It's all starting to feel like "A Social Network Named Moebius" and I half expect that one day I'll click a link and be flipped into the fourth dimension, where everything becomes dimly visible as through frosted glass and no-one can hear me shouting. That's why my interest was piqued by Kevin Partner's Online Business column in this issue, where he mentions a service called about.me. This site simply offers you a single splash page (free of charge at the moment) onto which you can place buttons and links to all your various forms of web content, so a visitor to that single page can click through to any of them. Now I need only add "about.me/dick.pountain" to each email instead of a long list of blogs and sites. Easy-to-use tools let you design a reasonably attractive page and offer help with submitting it to the search engines - ideally it should become the first hit for your name in Google. Built-in analytical tools keep track of visits, though whether it's increased my traffic I couldn't say - I use the page myself, in preference to a dozen Firefox shortcuts.

[Dick Pountain regrets the name "about.me" has a slightly embarrassing Californian ring to it - but that's enough about him]

PADDED SELL

Dick Pountain/PC Pro/12/06/2011/Idealog 203 

I'm a child of the personal computer revolution, one who got started in this business back in 1980 without any formal qualifications in computing as such. In fact I'd "used" London University's lone Atlas computer back in the mid 1960s, if by "used" you understand handing a pile of raw scintillation counter tapes to a man in a brown lab coat and receiving the processed results as a wodge of fanfold paper a week later. Everyone was starting out from a position of equal ignorance about these new toys, so it was all a bit like a Wild West land rush.

When Dennis Publishing (or H.Bunch Associates as it was then called) first acquired Personal Computer World magazine, I staked out my claim by writing a column on programmable calculators, which in those days were as personal as you could get, because like today's smartphones they fitted into your shirt-pocket. They were somewhat less powerful though: the Casio FX502 had a stonking 256 *bytes* of memory but I still managed to publish a noughts-and-crosses program for it that played a perfect game.

The Apple II and numerous hobbyist machines from Atari, Dragon, Exidy, Sinclair and others came and went, but eventually the CP/M operating system, soon followed by the IBM PC, enabled personal computers to penetrate the business market. There ensued a couple of decades of grim warfare during which the fleet-footed PC guerrilla army gradually drove back the medieval knights-in-armour of the mainframe and minicomputer market, to create today's world of networked business PC computing. And throughout this struggle the basic ideology of the personal computing revolution could be simply expressed as "at least one CPU per user". The days of sharing one hugely-expensive CPU were over and nowadays many of us are running two or more cores each, even on some of the latest phones.

Focussing on the processor was perhaps inevitable because the CPU is a PC's "brain", and we're all besotted by the brainpower at our personal disposal. Nevertheless storage is equally important, perhaps even more so, for the conduct of personal information processing. Throughout the 30 years I've been in this business I've always kept my own data, stored locally on a disk drive that I own. It's a mixed blessing to say the least, and I've lost count of how many of these columns I've devoted to backup strategies, how many hours I've spent messing with backup configurations, how many CDs and DVDs of valuable data are still scattered among my bookshelves and friends' homes. As a result I've never lost any serious amount of data, but the effort has coloured my computing experience a grisly shade of paranoid puce. In fact the whole fraught business of running Windows - image backups, restore disks, reinstalling applications - could be painted in a similar dismal hue. 

In a recent column I confessed that nowadays I entrust my contacts and diary to Google's cloud, and that I'm impressed by the ease of installation and maintenance of Android apps. Messrs Honeyball and Cassidy regularly cover developments in virtualisation, cloud computing and centralised deployment and management that all conspire to reduce the neurotic burden of personal computing. But even with such technological progress it remains both possible and desirable to maintain your own local copy of your own data, and I still practise this by ticking the offline option wherever it's available. It may feel as though Google will be here forever, but you know that *nothing* is forever.

Sharing data between local and cloud storage forces you sharply up against licensing and IP (intellectual property) issues. Do you actually own applications, music and other data you download, even when you've paid for them? Most software EULAs say "no, you don't, you're just renting". The logic of 21st-century capitalism decrees IP to be the most valuable kind of asset (hence all that patent trolling) and the way to maximise profits is to rent your IP rather than sell it - charge by "pay-per-view" for both content and executables. But, despite Microsoft's byzantine licensing experiments, that isn't enforceable so long as people have real local storage because it's hard to grab stuff back from people's hard drives.

Enter Steve Jobs stage left, bearing iPad and wearing forked tail and horns. Following the recent launch of iCloud, iPad owners no longer need to own either a Mac or a Windows PC to sync their music and apps with iTunes. Microsoft is already under great pressure from Apple's phenomenal tablet success, and might just decide to go the same way by allowing Windows phones and tablets to sync directly to the cloud. In that case sales of consumer PCs and laptops are destined to fall, and with them volume hard disk manufacture. The big three disk makers have sidestepped every prediction of their demise for 20 years, but this time it might really be the beginning of the end. Maybe it will take five or ten years, but a world equipped only with flash-memory tablets syncing straight to cloud servers is a world that's ripe for a pay-per-view counter-revolution. Don't say you haven't been warned.

[Dick Pountain can still remember when all his data would fit onto three 5.25" floppy disks]

GO NERDS, GO

Dick Pountain/PC Pro/Idealog 202/11/05/2011

I've been a nerd, and proud of it, more or less since I could speak. I decided I wanted to be a scientist at around 9 years old, and loathed sport at school with a deep passion. (During the 1960s I naively believed the counterculture was an alliance of everyone who hated sport, until all my friends came out as closet football fans in 1966). However my true nerdly status was only properly recognised a week ago when, totally frazzled by wrestling with Windows 7 drivers, I clicked on a link to www.nerdtests.com for a diversion and filled in the questionnaire. It granted me the rank of Uber Cool Nerd King, and no award has ever pleased me more (a Nobel might, but I fear I've left that a bit too late).

So what exactly constitutes nerdhood? To Hollywood a nerd is anyone in thick-rimmed spectacles with a vocabulary of more than 1000 words, some of which have more than three syllables - the opposite of a jock or frat-boy. What a feeble stereotype. In the IT world a nerd is anyone who knows what a .INF file is for and can use a command prompt without their hands shaking, but that's still a bit populist for an Uber Cool Nerd King. "Developers" who know at least four programming languages might be admitted to the lower echelons, but true nerd aristocracy belongs only to those with a deep knowledge and love of programming language *design*. If you've ever arm-wrestled someone to solve a dispute over late versus early binding, you might just be a candidate.

In the late 1980s and early '90s I steeped myself in programming language design. I could write in 14 different languages, some of them leading edge like Oberon, Occam, Prolog and POP-11. I wrote books on object-orientation and coded up my own recursive-descent parsers. I truly believed we were on the verge of making programming as easy and fun as building Lego models, if only we could combine Windows' graphical abilities with grown-up languages that supported lists, closures, concurrency, garbage collection and so on. That never happened because along came the Web and the Dot Com boom, and between them they dumbed everything down. HTML was a step backward into the dark ages so far as program modularity and security were concerned, but it was simple and democratic and opened up the closed, esoteric world of the programmer to everybody else. If you'd had to code all websites in C++ then the Web would be about one millionth the size it is today. I could only applaud this democratic revolution and renounce my nerdish elitism, perhaps for ever. Progress did continue in an anaemic sort of way with Java, C#, plus interpreted scripting languages like Python and Ruby that modernised the expressive power of Lisp (though neither ever acquired a tolerable graphics interface).

But over the last few years something wonderful has happened, a rebirth of true nerdhood sparked by genuine practical need. Web programming was originally concerned with page layout (HTML, Javascript) and serving (CGI, Perl, ASP, .NET, ColdFusion), then evolved toward interactivity (Flash, Ajax) and dynamic content from databases (MySQL, PostgreSQL, Drupal, Joomla). This evolution spanned some fifteen-odd years, the same years that new giant corporations like Google, eBay and Amazon began to face data processing problems never encountered before because of their sheer scale. Where once the supercomputer symbolised the bleeding edge of computing - thousands of parallel processors on a super-fast bus racing for the Teraflop trophy - nowadays the problem has become to deploy hundreds of thousands of processors, not close-coupled but scattered around the planet, to retrieve terabytes of distributed data fast enough to satisfy millions of interactive customers. A new class of programming languages is needed, designed to control mind-bogglingly huge networks of distributed processors in efficient and fault-tolerant fashion, and that need has spawned some profoundly nerdish research.

Languages like Erlang and Scala have resurrected the declarative programming style pioneered by Lisp and Haskell, which sidesteps many programming errors by deprecating variable assignment. Google has been working on the Go language to control its own huge processor farms. Designed by father-of-Unix Ken Thompson, Rob Pike and Java-machine specialist Robert Griesemer, Go strips away the bloat that's accumulated around object-oriented languages: it employs strong static typing and compiles to native code, so is extremely efficient for system programming. Go has garbage collection for security against memory leaks but its star feature is concurrency via self-synchronising "channels", derived (like Occam's) from Tony Hoare's seminal work on CSP (Communicating Sequential Processes). Channels are pipes that pass data between lightweight concurrent processes called "goroutines", and because they're first-class objects you can send channels through channels, enabling huge network programs to reconfigure their topology on the fly - for example to route around a failed processor or to balance a sudden load spike. Google has declared Go open-source and recently released a new version of its App Engine that offers a Go runtime in addition to supporting Java and Python. I had sworn that Ruby would be my final conquest, but an itching under my crown signals the approach of a nerd attack...
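
To make that concrete, here is a minimal sketch of my own devising (nothing to do with Google's production code): three goroutines serve requests arriving on a shared channel, and each request carries its own reply channel, so the sender decides where the answer goes - the germ of the "send channels through channels" trick described above.

    // Toy illustration of goroutines and channels; not production code.
    package main

    import "fmt"

    // request carries a query plus the channel on which the reply should be sent.
    type request struct {
        query string
        reply chan string
    }

    // worker serves requests until its input channel is closed.
    func worker(id int, in chan request) {
        for req := range in {
            req.reply <- fmt.Sprintf("worker %d handled %q", id, req.query)
        }
    }

    func main() {
        in := make(chan request)
        for i := 1; i <= 3; i++ {
            go worker(i, in) // three lightweight concurrent processes
        }
        reply := make(chan string)
        for _, q := range []string{"alpha", "beta", "gamma"} {
            in <- request{query: q, reply: reply} // a channel travelling through a channel
            fmt.Println(<-reply)
        }
        close(in)
    }

Hand a different reply channel to each request and you have rerouted the conversation without touching the workers, which is the reconfigure-on-the-fly property the paragraph above describes.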

[Dick Pountain decrees that all readers must wear white gloves and shades while handling his column, and must depart walking backwards upon finishing it.]

FREE TO BROWSE

Dick Pountain/PC Pro/Idealog 201     15/04/2011

Last month I finally succumbed and took out a subscription to Spotify Premium, and the steps that led me there are quite illuminating about this increasingly popular online music business model. In his RWC Web Applications column this issue, Kevin Partner experiments with free versus paid-for web services, and I believe my experience with Spotify, when contrasted with Apple's iTunes model, complements Kevin's insights rather well.

I've been using Spotify for two and a half years now, almost from its 2008 launch, but for the first two I used only the free, ad-supported, version and stoutly resisted their paid-for service. I soon became hooked on the ability to sample different kinds of unfamiliar music that I would never have thought of buying, and so when I discovered that the free service wasn't available in Italy I struggled for a while using UK proxy servers but eventually caved in and signed up for the £5 per month Unlimited service which *is* available in Italy and has no adverts. Then a couple of months ago I discovered that only the £10 per month Premium service is available for mobile phones, and I was sufficiently curious to try Spotify on my Android mobile that I coughed up (I could always cancel if I didn't like it). But the increase in utility was so enormous that suddenly it seemed cheap at the price.

It may just be that I'm the ideal Spotify customer and that this process of luring by degrees won't work on younger users, users who have the iPod habit, or users with particularly fixed and narrow tastes in music. But I'm not so sure. Kevin's conclusion is that any payment at all is an enormous obstacle to new custom, and that offering free trials is more cost effective than even the smallest compulsory subscription. You then need to devise a premium offering that adds sufficient utility to persuade people to pay up, which means that it has to be truly excellent, not just a bit of added fluff. I first encountered, and was captured by, a similar model with the New York Review of Books, whose print edition I've subscribed to for many years. As I spent more and more time working online it became more and more useful to me to be able to search that magazine's archive and download previous articles, especially when abroad where my paper copy wasn't available. The NYRB wisely charges a modest $20 annual premium for such full archive access, which I gladly pay and feel I get value from.

The contrast with Apple's philosophy of tying closed hardware firmly to tightly-regulated marketplaces is stark. The price of individual tracks from iTunes is sufficiently modest not to deter buyers, but buy them you must - no free roaming around the store. This model is a phenomenal commercial success, to the point where almost the whole print publishing (and soon film) industry is grovelling to Steve Jobs to save them from digital apocalypse. The problem for me with this model is that it's brand dependent: you need to know what you want, whereupon Apple will supply it in unbeatably slick and transparent fashion. My problem is that I don't consume music (or books) that way.

I spend a lot of time reading and walking and in neither case do I want a laptop or tablet in my hand. I've never really learned to love the iPod though I have used MP3 players. I possess a large, eclectic record collection on vinyl and CD which ranges from opera and classical through jazz, blues, soul, old R&B to country and bluegrass. I've ripped some of my CDs but could never be bothered to digitise all that vinyl, nor pay thousands of pounds to those services that do it for you. My tastes are highly volatile and unpredictable so even a few thousand tracks on an MP3 player aren't enough, and access is too clumsy.

Whole evenings on Spotify are devoted to hearing every conceivable version of "These Foolish Things", or comparing half a dozen performances of the Goldbergs, or exploring some wholly new genre. Last week I went to a modern guitar recital at the Wigmore Hall and afterwards spent days exploring composers like Brouwer, Bogdanovic and Duarte. I'd never do that if I had to explicitly pay even 50p to download each track (even ones I might reject after 10 seconds). The freedom to browse is worth £120 per year to me because my Android phone, plugged into the hi-fi or earphones, has become my sole music source, replacing both CDs and downloads.

Not all record companies and managements have yet signed up with Spotify - notable exceptions being Bob Dylan and The Beatles - but I have what I want of them in my collection (and if the service succeeds in the USA I believe they'll come knocking). Such free, ad-supported services that lead you toward value-added premium services will eventually prove more effective at extracting payment than Apple's walled-garden approach, for books, film and TV as well as music. Encouraging people to explore and broaden their tastes rather than reinforcing brand loyalties also spreads the proceeds to artists outside of the Top 10, which could be why some larger companies are resisting...

JUST HOW SMART?

Dick Pountain/15 March 2011 14:05/Idealog 200

The PR boost that IBM gleaned from winning the US quiz show Jeopardy, against two expert human opponents, couldn't have come at a better time. We in the PC business barely register the company's existence since it stopped making PCs and flogged off its laptop business to Lenovo a few years ago, while the public knows it only from those completely incomprehensible black-and-blue TV adverts. But just how smart is Watson, the massively parallel Power 7-based supercomputer that won this famous victory?

It's probably smart enough to pass a restricted version of the Turing Test. Slightly reconfigure the Jeopardy scenario so a human proxy delivers Watson's answers and it's unlikely that anyone would tell the difference. Certainly Jeopardy is a very constrained linguistic environment compared to free-form conversation, but the most impressive aspect of Watson's performance was its natural language skill. Jeopardy involves guessing what the question was when supplied with the answer, which may contain puns, jokes and other forms of word play that tax average human beings (otherwise Jeopardy would be easy). For example a question "what clothing a young girl might wear on an operatic ship" has the answer "a pinafore", the connection - which Watson found - being Gilbert and Sullivan's opera H.M.S. Pinafore.

Now I'm a hardened sceptic about "strong" AI claims concerning the reasoning power of computers. It's quite clear that Watson doesn't "understand" either that question or that answer the way that we do, but its performance impressed in several respects. Firstly its natural language processing (NLP) powers go way beyond any previous demonstration: it wasn't merely parsing questions into nouns, verbs and their grammatical relationships, but also extracting some *semantic* relationships, which it used as entry points into vast trees of related objects (called ontologies in NLP jargon) to create numerous diverging paths to explore in looking for answers. Secondly it determined its confidence in the various combinations generated by such exploration using probabilistic algorithms. And thirdly it did all this within the three seconds allowed in Jeopardy.

Watson retrieves data from a vast unstructured text database that contains huge quantities of general-knowledge info as well as details of all previous Jeopardy games, to provide clues to the sort of word-games it employs. 90 rack-mounted IBM servers, running 2,880 Power 7 cores, hold over 15 terabytes of text, equivalent to 200,000,000 pages (all stored locally for fairness, since the human contestants weren't allowed Google), all accessible at 500GB/sec. This retrieval process is managed by two open-source software frameworks. The first, developed by IBM and donated to the Apache project, is UIMA (Unstructured Information Management Architecture), which sets up multiple tasks called annotators to analyse pieces of text, create assertions about them and assign probabilities to these assertions.

The second is Hadoop, which Ian Wrigley has covered recently in our Real World Open Source column. This massively parallel distributed processing framework - inspired by Google's MapReduce and employed by the likes of Yahoo! and Amazon - is used to place the annotators onto Watson's 2,880 cores in an optimal way, so that work on each line of inquiry happens close to its relevant data. In effect the UIMA/Hadoop combination tags, on the fly, just those parts of this vast knowledge base that might be relevant to a particular query. This may not be the way our brains work (we know rather little about low-level memory mechanisms) but it's quite like the way that we work on our computers via Google: search for a couple of keywords, then click further links in the top-listed documents to build an ad hoc path through the vast ocean of data.
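
By way of illustration only - this is a toy of my own devising, not IBM's actual UIMA interfaces - the annotator idea boils down to several independent analysers each asserting candidate answers with a confidence, whose scores are then merged and ranked.

    // Purely hypothetical sketch of "annotators" assigning confidences to assertions.
    package main

    import (
        "fmt"
        "sort"
    )

    // assertion is a candidate answer plus the probability one annotator gives it.
    type assertion struct {
        candidate  string
        confidence float64
    }

    // An annotator analyses a passage and returns zero or more assertions.
    type annotator func(passage string) []assertion

    func main() {
        passage := "clothing a young girl might wear on an operatic ship"

        // Two invented annotators: one keyed on opera titles, one on clothing terms.
        annotators := []annotator{
            func(p string) []assertion { return []assertion{{"a pinafore", 0.6}, {"a sailor suit", 0.3}} },
            func(p string) []assertion { return []assertion{{"a pinafore", 0.7}} },
        }

        // Merge the confidences per candidate (a naive sum, just for illustration).
        scores := map[string]float64{}
        for _, annotate := range annotators {
            for _, a := range annotate(passage) {
                scores[a.candidate] += a.confidence
            }
        }

        // Rank candidates by merged score and print them.
        type ranked struct {
            candidate string
            score     float64
        }
        var results []ranked
        for c, s := range scores {
            results = append(results, ranked{c, s})
        }
        sort.Slice(results, func(i, j int) bool { return results[i].score > results[j].score })
        for _, r := range results {
            fmt.Printf("%-15s %.2f\n", r.candidate, r.score)
        }
    }

The real system runs thousands of such analysers in parallel and places them near their data; the toy above only shows why independent, probability-weighted assertions are such a natural fit for that kind of distribution.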

Web optimists describe this as using Google as an extension of our brains, while web pessimists like Nicholas Carr see it as an insidious process of intellectual decay. In his book "The Shallows: How the Internet is Changing the Way We Think, Read and Remember", Carr suggests that the phenomenon called neuroplasticity allows excessive net surfing to remodel our brain structure, reducing our attention span and destroying our capacity for deep reading. On the other hand some people think this is a good thing. David Brooks in the New York Times said that "I had thought that the magic of the information age was that it allows us to know more, but then I realised that the magic is... that it allows us to know less". You don't need to remember it because you can always Google it.

It's unlikely such disputes can be resolved by experimental evidence because everything we do may remodel our brain: cabbies who've taken "the knowledge" have enlarged hippocampuses while pianists have enlarged areas of cortex devoted to the fingers. Is a paddle in the pool of Google more or less useful/pleasurable to you than a deep-dive into Heidegger's "Being and Time"? Jim Holt, reviewing Carr's book in the London Review of Books, came to a more nuanced conclusion. The web isn't making us less intelligent, nor is it making us less happy, but it might make us less creative. That would be because creativity arises through the sorts of illogical and accidental connection (short circuits if you like) that just don't happen in the stepwise semantic chains of either a Google search or a Watson lookup. In fact, the sort of imaginative leaps we expect from a Holmes rather than a Watson...

WRITE ON

Dick Pountain/17 February 2011 14:53/Idealog 199

Around a year ago in issue 186 I pronounced that the iPad's touch interface was the future of personal computing, which was greeted with a certain scepticism by colleagues who thought perhaps I'd become an Apple fanboi or else had a mild stroke, the symptoms being very similar. Events since have made me surer than ever: the touch interface feels right in exactly the same way the windows/icons/mouse/pull-down interface felt right the first time I used it. The number of people who got iPads this Christmas was further evidence, and most convincing for me was a good friend who hates computers (but likes iPods and email) who bought himself one for Christmas. When I visited a few weeks ago he had it on a perspex stand with one of those slim aluminium keyboards underneath, and told me that he'd junked his hated laptop completely: the iPad now did everything he needed (apart from a raging addiction to Angry Birds). Evidence is piling up that 2011 will be the year of the tablet, and even Microsoft will be only one year late to the party this time around, with a Windows 8 tablet by 2012 according to PC Pro's news desk. HP's recent announcement of its WebOS tablet perhaps offers real competition for Apple, in a way that cheaply thrown-together Android tablets can't do until some future version of that OS arrives.

I don't have an iPad myself, partly because I won't pay the sort of money Apple is asking, partly because I've never been an Apple person, partly because I hate iTunes. Also I *don't* hate laptops, I have a gorgeous Sony Vaio that weighs no more than an iPad, and I'm in no hurry. I do now use an Android phone and am using that to familiarise myself with the quirks of touch-based computer interaction, and so far I love it except for one aspect, and that is entering text.

I often just find myself staring paralysed inside some application when I need to enter a word but nothing is visible except a truly tiny slot and the keyboard hasn't popped up yet (yes Wikipedia, that means you). My phone's screen is smaller than an iPhone's and typing at any speed on its on-screen keyboard is hard, even with vibrating "haptic feedback" on. My Palm Treo had BlackBerry-style hard keys, which makes the contrast greater still. I've eventually settled on CooTek's TouchPal soft keyboard which puts two letters on each enlarged keytop (sideswipe your thumb to get the second) and has a powerful predictive text capability.

I appreciate that the larger screen of a tablet makes using an on-screen keyboard less finicky, but there's another mental factor at work because on generations of Palms I was an enthusiastic and expert user of Graffiti handwriting recognition. I'm totally used to being able to roughly scribble a letter with my fingertip onto the screen in the Contacts application and have it go straight to someone's name, invaluable on dark nights, in the rain, when you've lost your specs and so on. The reason I got so good at Graffiti was no thanks to Palm - which completely screwed up with Graffiti 2 - but thanks to TealPoint Software whose wonderful TealScript app enabled me to use the whole screen area inside any application, to get caps and numbers by shifting over to the right, and to customise the strokes for particular glyphs I found difficult.

Now handwriting recognition just isn't feasible on Android or iPhones as their screens are too small and the capacitive technology they employ lacks positional precision. However it ought to be possible, using a fingertip rather than a stylus, on the larger screen of a tablet, and indeed there's already a handwriting application for the iPad called WritePad, mostly aimed at children, which lets you scribble with a finger over a wide screen area. It seems to me that by combining such a recognition engine with two other existing software techniques, handwriting could become a major input method for tablets. The first technique is predictive text, as employed in TouchPal, which gradually analyses your personal vocabulary into a user dictionary and so improves its guesses. The second is the "tag cloud", a graphical UI trick familiar from social networking sites like Flickr, del.icio.us and Technorati, in which a collection of user tags gets displayed in a random clump in font sizes proportional to their frequency of use.

I can imagine a tablet interface in which you scribble with your finger onto the screen, leaving a semi-transparent trace for feedback, and the predictive text engine generates a tag cloud - also semi-transparent - of the most likely candidates, in which both the size of each candidate word and its proximity to your current finger position are proportional to its probability. This cloud would need to be animated and swirl gently as your finger moved, which should be possible with next generation mobile CPUs. When you see the right word in the cloud, a quick tap inserts it into the growing text stream. From what I've seen of the development tools for iPad and Android, I'm well past writing such a beast myself, so I bequeath the idea to some bright spark out there.
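
The ranking half of that daydream is easy enough to sketch. What follows is purely hypothetical - the user dictionary, its frequencies and the prefix are all invented - but it shows the core sum: each candidate's probability, and hence its size in the cloud, is just its share of the matching words' usage counts.

    // Toy candidate ranking for a predictive-text tag cloud; all data invented.
    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // userDict maps words to how often this (hypothetical) user has entered them.
    var userDict = map[string]int{
        "handwriting": 12, "hand": 30, "handle": 7, "haptic": 3, "happy": 18,
    }

    type candidate struct {
        word string
        size float64 // relative font size for the tag cloud
    }

    // candidates returns prefix matches sized in proportion to their probability.
    func candidates(prefix string) []candidate {
        var total int
        var cands []candidate
        for w, n := range userDict {
            if strings.HasPrefix(w, prefix) {
                cands = append(cands, candidate{word: w})
                total += n
            }
        }
        for i := range cands {
            cands[i].size = float64(userDict[cands[i].word]) / float64(total)
        }
        sort.Slice(cands, func(i, j int) bool { return cands[i].size > cands[j].size })
        return cands
    }

    func main() {
        for _, c := range candidates("han") { // pretend the trace so far looks like "han"
            fmt.Printf("%-12s %.2f\n", c.word, c.size)
        }
    }

The hard part, of course, is the recognition engine and the animation, not the arithmetic.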

[Dick Pountain is impressed that a Palm Pilot could decipher his handwriting, given that most humans can't.]

TEXT TWEAKER

Dick Pountain/16 January 2011 12:54/Idealog 198

A couple of years back you'd still hear arguments about whether or not electronic readers could ever take over from print-on-paper. That already feels like a long time ago. I found myself in a mid-market hotel just before Christmas, and when I came down for breakfast in the morning at *every* table was someone (or a whole family) reading news from an iPad, except for the one that had a Kindle. I had to make do with my Android phone and felt a bit out of place.

I've written here before about my Sony Reader, but it hasn't made the grade and is now gathering dust. Its page turning is just too slow and it's too much of a fag to download content to it, but the final straw was the way it handles different ebook formats, unpredictably and far from gracefully. The problem is simply that I'm not in the market for commercial ebooks and never buy novels from Amazon or publishers' websites. What novels I read, I still read on paper (possibly decades or centuries old) and the rest of the time I read non-fiction that's rarely if ever available as an ebook. Commercial books properly formatted in ePub look fine on the Sony - cover, contents and navigation - but I rarely read them. Perhaps a third of my reading is nowadays done on screen, but almost always laptop or phone and off a web page: the Guardian website, Open Democracy, Arts & Letters, various blogs, and white papers from numerous tech sites.

I have however collected an extensive library of classic texts and reference works that I use a lot, stored on my laptop to be always available off-line, and it's there the Sony really fell down. I get most of these books from the Internet Archive where they're typically available in several formats: PDF and PDF facsimile (scanned page images), ePub, Kindle, Daisy, plain text and DjVu (an online reading format). However the Internet Archive is a non-profit organisation that relies on voluntary, mostly student, labour to scan works in, so inevitably most documents are raw OCRed output that hasn't been cleaned up manually. Really old books set in lovely letterpress typefaces like Garamond and Bodoni are the saddest, because OCR sees certain characters as numerals so the texts are peppered with errors like "ne7er" and "a8solute". Many such books also contain a lot of page furniture - repeating book and chapter titles in headers or footers for example - that scanning leaves embedded throughout the text, extremely irritating if you consult them often. 

One solution is to download a facsimile version, but that's glacially slow to read on the Sony Reader, taking ten seconds to turn each page and looking crap in black-and-white: on laptop or iPad in colour it's a fine way to read (it even preserves pencilled margin notes) but it isn't searchable which defeats half the purpose, so I always have to download a text-based PDF or plain text version too. Unfortunately the Sony Reader displays PDFs unpredictably: it only has three text sizes and if you're unlucky none of them will look right, either being too huge or too tiny.

I started cleaning up certain books myself, downloading a plain text version and using Microsoft Word (of all things), which actually has powerful wildcard search-and-replace facilities (a rough equivalent of regular expressions), though well hidden and with lamentably poor Help. I soon learned how to quickly bulk-remove all page numbers and titles, auto-locate and reformat subheads, and even cull improbable digits-in-the-middle words like "ne7er". However outputting the cleaned-up result as PDFs proved a lottery on the Sony as regards text sizing, contents page and preserving embedded bookmarks (you need one per chapter for navigation purposes). For many books I found that an RTF version actually looks and works better than a PDF.
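
For the curious, the same sort of bulk clean-up can be sketched outside Word - shown here in Go purely for concreteness; the running-header string is a made-up example and real books would each need their own rules.

    // Toy OCR clean-up: drop page numbers and a repeated running header,
    // and flag words with digits embedded in them ("ne7er", "a8solute").
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    var (
        pageNum   = regexp.MustCompile(`^\s*\d{1,4}\s*$`)          // a line that is only a page number
        header    = "SOME REPEATED RUNNING HEADER"                  // hypothetical page furniture to remove
        digitWord = regexp.MustCompile(`\b[A-Za-z]+\d+[A-Za-z]+\b`) // OCR confusions like "ne7er"
    )

    func main() {
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            line := scanner.Text()
            if pageNum.MatchString(line) || strings.Contains(line, header) {
                continue // drop page furniture entirely
            }
            for _, w := range digitWord.FindAllString(line, -1) {
                fmt.Fprintln(os.Stderr, "suspect OCR word:", w) // report for manual fixing
            }
            fmt.Println(line)
        }
    }

The point is only that the job is mechanical: page furniture matches simple patterns, and OCR confusions betray themselves as digits embedded in words.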

Someone tipped me off to try Calibre (http://calibre-ebook.com/), a free, open-source ebook library manager that converts between different ebook formats, and in particular can output in Sony's own LRF file format, which proved more reliable than PDF. It was already too late for me though. Calibre works well but is quite techie to use and, like Sony's proprietary Reader software, it maintains its own book database, so that's yet another file system to deal with. Eventually I just couldn't be bothered. I've checked Google Books' offer of a million free-to-download public domain titles, only to discover that they are of course the same hastily-scanned copies I already have from archive.org (which stands to reason: once some public-spirited volunteer has scanned an obscurity like Santayana's "Egotism in German Philosophy", no-one else is ever going to do it).

My own gut feeling is that, Kindle notwithstanding, none of the current ebook formats will be the eventual winner and that plain old HTML, in its HTML5 incarnation, will become the way we all read stuff on our tablets in a couple of years' time. Perhaps PDF too, if Adobe puts its house in order in time. And we'll need to recruit a whole second generation of volunteer labour to clean up all those documents scanned by the first generation once the Google book project gets into its full stride: the bookworms' equivalent of toiling in the cane fields...

PENTACLE OF CONFLICT

Dick Pountain/16 December 2010 13:39/Idealog 197b

Last month I confessed that I've abandoned the Palm platform, after 14 years of devotion, for an Android phone and Google online storage of my personal information. One important side-effect of this move that's almost invisible to me, is a huge leap in my consumption of IP bandwidth. I use the phone at home, on Wi-Fi, as a constant reference source while I'm away from my PC reading books, and my time connected to BT Broadband must have at least doubled, though that doesn't cost me a penny more under a flat-rate tariff. And I'm far from alone in this altered consumption pattern: a report by network specialist Arieso recently analysed data consumption of latest generation smartphones and found their users staying connected for longer, and downloading twice as much data as earlier models.

Android users were hungriest, with iPhone4 and others close behind (and though they didn't even include iPad users, you just know those stay connected pretty well all the time). It's partly the nature of the content - faster CPUs make movie and TV viewing practical - and also that smarter devices soon lead you not even to know whether you're looking at local or networked content. The long-term implications for the net, both wired and 3G, are starting to become apparent, and they're rather alarming. It's not that we'll actually run out of bandwidth so much as the powerful political and industrial forces being stirred up to grouch about its unfair distribution.

The same week as that Arieso report, the Web '10 conference in Paris heard European telecom companies demanding a levy on vendors of bandwidth-guzzling hardware and services like Google, Yahoo!, Facebook and Apple. These firms currently make mega-profits without contributing anything to the massive infrastructure upgrades needed to support the demand they create. Content providers at the conference responded "sure, as soon as you telcos start sharing your subscription revenues with us". It's shaping up to be an historic conflict of interest between giant industries, on a par with cattle versus sheep farmers or the pro and anti-Corn Law lobbies.

But of course there are more parties involved than just telcos versus web vendors. Us users, for a start. Then there are the world's governments, and the content-providing media industries. In today's earnest debates about Whither The Webbed-Up Society, no two journalists seem to agree how many parties need to be considered, so I'll put in my own bid, which is five. My five players are Users, Web Vendors, Governments, ISPs and Telcos, each of whose interests conflict with every other, which connects them in a "pentacle of conflict" so complex it defies easy prediction. The distinction is basically this: users own or create content and consume bandwidth; web vendors own storage (think Amazon servers and warehouses, Google datacenters) and consume bandwidth; telcos own wired and wireless fabrics and bandwidth; and poor old ISPs are the middle-men, brokering deals between the other four. Note that I lump in content providers, even huge ones like Murdoch's News International, among users because they own no infrastructure and merely consume bandwidth. And they're already girded for war, for example in the various trademark law-suits against Google's AdWords.   

What will actually happen, as always in politics, depends on how these players team up against each other, and that's where it starts to look ominous. At exactly the same time as these arguments are surfacing, the Wikileaks affair has horrified all the world's governments and almost certainly tipped them over into seriously considering regulating the internet. Now it's one of the great cliches of net journalism that the 'net can't be regulated - it's self-organising, it re-routes around obstacles etcetera, etcetera - but the fact is that governments can do more or less anything, up to and including dropping a hydrogen bomb on you (except where the Rule of Law has failed, where they can do nothing). For example they can impose taxes that completely alter the viability of business models, or stringent licensing conditions, especially on vulnerable middle-men like ISPs.

Before Wikileaks the US government saw the free Web as one more choice fruit in its basket of "goodies of democracy", to be flaunted in the face of authoritarian regimes like China. After Wikileaks, my bet is that there are plenty of folk in the US government who'd like to find out more about how China keeps the lid on. The EU is more concerned about monopolistic business practices and has a track record of wielding swingeing fines and taxes to adjust business models to its own moral perspective. All these factors point towards rapidly increasing pressure for effective regulation of the net over the next few years, and an end to the favourable conditions we presently enjoy where you can get most content for free if you know where to look, and can get free or non-volume-related net access too. The coming trade war could very well see telcos side with governments (they were best buddies for almost a century) against users and web vendors, extracting more money from both through some sort of two-tier Web that offers lots of bandwidth to good payers but a mere trickle to free riders. And ISPs are likely to get it in the neck from both sides, God help 'em. 

ELECTRIC SHEEP

Dick Pountain/15 December 2010 11:45/Idealog 197

Regular readers will know that it's normal policy for successive columns to skip from subject to subject like a meth-head cricket with ADHD (that is, without any visible continuity) but last month's column, about abandoning the Palm platform, represents such a major life change that I feel obliged to follow it up immediately. It's 14 years to the day since I first mentioned Palm (actually then the US Robotics Pilot) in this column, and my address book, notes and appointments have persisted inside Palm products ever since. My leaving Palm for an Android phone plus Google cloud storage certainly struck a chord with other be-Palmed readers.

Richard, who is following the same path, emailed to tell me he needs to transfer all his old Palm archived appointments to Google, but Palm Desktop won't export them in any useful format. Within 24 hours another reader, Mike, had pointed us to a solution at http://hepunx.rl.ac.uk/~adye/software/palm/palm2ical/, an app that exports them in iCal format. Mike also told me several other interesting things, including:

1) A jail-broken iPod Touch can run Palm apps via a third-party emulator called Style Tap. I'm glad I didn't know this as it might have kept me stuck in my groove.
2) While the Orange San Francisco phone I bought has a lovely AMOLED screen, ones on sale now are rumoured to have reverted to TFT which makes them somewhat less of a bargain.
3) Rooting the San Francisco to a non-Orange ROM is not something I want to get involved in just yet awhiles.

So how am I coping with Android? Actually I like it much better than expected. I use my phone mostly at home via Wi-Fi, which makes web browsing and downloading apps from the Market fast, easy and cheap. I'm pretty impressed by the stability and multi-tasking of Android 2.1. Most dying apps do so gracefully via a "bye-bye" dialogue and you can always get into Task Manager to kill them, unlike the Palm Treo which was forcing me to pull its battery and reboot about once a week towards the end. I haven't really explored third-party task managers that automatically shut down unused background apps yet, and still do a manual Kill All from time to time.

The quality of Android apps is extremely variable, since they're not vetted the way Apple's AppStore is, but since they take only seconds to download and install I just try 'em and chuck 'em till I find one I like. And there usually is one, eventually. My main requirements, beyond phone calls and web browsing, are for contacts, calendar, note-taking, document reading, photo viewing and music playing. Google's own apps for mail, contacts and calendar sync perfectly with their online counterparts without any fuss (I just love clicking the location for an appointment and being whisked straight into a Google map).

It took me quite a few rejections to find a plain text editor I can live with, Txtpad Lite, and the same goes for PDF viewers. I've actually ended up with three of the latter because no single one does everything I want. Adobe's own is feeble, lacking both a search function and bookmarks: I keep it only for reference. I paid for Documents To Go, part of which is PDF To Go, which has both, but I also use a free one called ezPDF Reader whose UI is easier to use one-thumbed, and which remembers your place in a document when you switch away (unlike PDFTG, which maddeningly returns to the cover page). That's essential for reading a manual while programming, but it's slower on pictures.

Did I just say programming? That's right, there's an excellent Ruby interpreter for Android called Ruboto which ran all my text-based Ruby apps unchanged, to my ecstatic amazement. It has a rudimentary integrated editor but I can write longer scripts in Txtpad and load them at the IRB prompt. Now I just need an Android-based graphical Ruby API, equivalent to Shoes under Windows. 

The built-in music player is plenty good enough for me, not merely sucking up all the MP3s from my laptop but automatically organising them far better, complete with track names I didn't even realise were in there (you can tell I'm not of the iPod generation). Ditto for photos and videos where the built-in apps suffice: I mostly use Flickr and a real camera anyway.

That just leaves reference data. I couldn't transfer dictionaries from my Palm and so had to pay for some new ones. The Oxford English cost me £12 and is excellent, with a better UI than the Palm version. I plumped for SlovoEd's language dictionaries since TrueTerm, which I've used for years, doesn't appear to have made the move to Android. I paid $10 for SlovoEd's full Italian, and live with its free versions for French and German. Thanks to Orange's five-page home screen layout I can have these as icons all on one screen, for rapid one-thumb access. I'm training myself to like the optional CooTek T+ keyboard layout with two characters per key, which you select via a sideways thumb slide - it's fast once you get the hang of it. All in all my Android experience so far has been deeply pleasurable, and I do indeed dream of Electric Sheep, being chased by back-flipping Toucans...

SOCIAL UNEASE

Dick Pountain /Idealog 350/ 07 Sep 2023 10:58 Ten years ago this column might have listed a handful of online apps that assist my everyday...