Monday 4 December 2017

TALKS ABOUT TORX

Dick Pountain/Idealog 276/06 July 2017 14:44

I've decided that I would like you all to refer to me as an Essayist rather than a Columnist from now on. After attending a stimulating talk by Brian Dillon at the London Review Bookshop the other day, I was led to realise that, in a certain light, the stuff with which I populate this column looks quite a bit like an essay, and that essays are way cooler than columns. So I would very much like to be added somewhere near the bottom of a long list that starts with Montaigne, Hazlitt, Orwell, Sontag and Woolf.

Dillon's definition of an essay was a kind of text that starts by chronicling some everyday triumph or defeat, then draws from this some conclusion of universal validity. So, here goes. We were giving a dinner party for some old friends when, sometime between the espressos and the petits fours, all our lights went out. Flipping the circuit breaker restored the power, so there was no continuing short anywhere, but these outages recurred several times over the next week. Our ancient electrics box contains five separately fused circuits, sequential removal of which enabled me to isolate the fault to one quadrant of the house, namely the living room. Soon afterwards we identified and replaced the offending wall socket, which had been arcing sporadically thanks to drops of moisture from a blocked gutter (now unblocked).

But before that happy day, one of these outages had caused a secondary problem of far more relevance to this column/essay. The morning after, my Lenovo Yoga laptop wouldn't switch on. Stone dead: no amount of button pressing would make it boot or even flicker a light. I wondered at first whether the power spikes had fried its battery, but after some Googling I discovered that this is a well-known problem not only for the Yoga but also for many other modern laptops, including some iMacs. It's due to some dumb-ass design decisions in their internal charging circuitry and BIOS firmware, and the cure is to disconnect and then reconnect the battery. Except that on my Yoga (and various iMacs, etc) the battery is no longer user-accessible via a trapdoor, as it would have been not all that long ago.

Instead the bottom of my Yoga presented a seamless expanse of smooth black alloy, held down by a dozen tiny screws bearing fiendishly oriental star symbols instead of slots. These, I learned, are called Torx T5 fasteners, so off I went to the local Maplins and bought a Torx 5 screwdriver. Brian Dillon had remarked that the structure of an essay often starts from a very narrow topic and then widens out toward its conclusion. Few household utensils are narrower than a Torx 5 screwdriver.

I discovered that a Torx screwdriver does work rather beautifully - far more positive and grippy than Phillips or hex - and soon I had a dish full of the little buggers and an open laptop whose PCB is a thing of greater beauty still: Mediterranean blue in colour, thickly covered in the tiniest imaginable components, connected by shimmering silver freeways like a relief map of some futuristic city. The battery was completely embedded in this stunning landscape, and its plug was one of those tiny components.

By using two pairs of medical tweezers and a magnifying glass I was able to unplug the battery, breaking off only one of the two plastic locking lugs in the process. Then I pressed the reset button and the BIOS messages did indeed appear briefly. Then I replugged the battery (adding a judicious patch of gaffer tape) and rebooted, and there was my working world again, wholly intact. And since I'm already on blood pressure medication, the whole episode passed with no major ill-effects on my health.

So, there's a small quotidian triumph, and now to widen it out into conclusions of universal validity. Firstly, designers of electronic equipment who believe their design so bullet-proof that they don't need to provide user access need to be disabused of this belief, perhaps using some kind of medieval persuasion technology. Secondly, do buy a whole set of Torx screwdrivers, only a few quid in Maplins. Thirdly, don't be afraid to fiddle inside your laptop: it's only a human invention after all, and not in any way magic or supernatural (though it is terribly fragile). Fourthly, perhaps you *should* be afraid to fiddle with your mains supply - having once survived a 9000 volt shock, I'm unusually free of this extremely useful and protective fear. And finally, of that illustrious list of essayists to which I aspire to belong - Montaigne, Hazlitt, Orwell, Sontag and Woolf - not one of them could tell a Torx screwdriver from a bare bodkin...

Wednesday 1 November 2017

FANTASY FOOT-PEDAL

Dick Pountain/Idealog 275/01 June 2017 15:44

I'm sitting at my desk typing the sentence "I've never really been a gadget person" with a waist-high pile of old gadget boxes glowering in my peripheral vision. Nevertheless I stand by my assertion: many of them are years old and relate to photographic rather than computer equipment. I don't see a therapist so can't answer why I keep them: initially rational (might be faulty and need returning), eventually quasi-rational (may become collectors' item so keep the box). By and large though I'm really not. I upgrade my mobile and laptop around once a decade. I have all the camera gear I'll ever need. I don't go in for smart watches, fitbits, digital recorders or the like. Except. I just bought two gadgets to do with music making. Sorry.

In my last column I explored the way modern digital tools affect various creative endeavours, and proposed that a human being always needs to be in the iterative enhancement loop to make aesthetic judgements about when to stop. Well, since then I've taken this argument a few steps further, assisted by my two new gadgets, namely a guitar looper pedal and an Akai MPKmini keyboard. To take the latter first, this keyboard connects to my Lenovo laptop which runs Ableton Live.

Anyone who creates EDM (electronic dance music) will have heard of Ableton, a production-and-performance system in which, as well as composing tracks, you can also arrange and perform them like a DJ. I'm not producing EDM myself, but as I've described here before, my interest lies in experimental algorithmic music. Playing the MIDI files my system generates through the Microsoft GS Wavetable synthesiser bundled with Windows works well enough during composition, but its instrument voices aren't really fit for public consumption, so I post-compose my more successful efforts in Ableton using its superior instruments, and also some AWE32 SoundFonts.

Ableton is a very clever program, developed partly at the University of Berlin using advanced "granular synthesis" theory: it's enormously flexible, mixing sampled audio seamlessly with MIDI, changing tempos while maintaining pitch, its "warp" engine syncing different clips automatically, and much, much more. However it's also one of those mega-programs, like Photoshop or the 3D animation program Maya, for which a mere human lifespan is too short to find all its (literal) bells and whistles, let alone use them all. I originally chose it because its user interface does *not* imitate a rack full of brushed aluminium knobs and dials: I'm not a sound engineer so that stuff isn't home to me.

My algorithmic music is getting steadily more capable, now displaying far greater rhythmic and dynamic complexity. Shortish passages can sound alarmingly like some six-armed Indian deity playing the piano, but it still has difficulty with long-range structure: pieces longer than three or four minutes tend to betray their mechanical origin. Dividing the code into shorter movements, generated as separate MIDI files, is one way around this, but another is to intervene myself. So thank you Facebook for showing me that Akai mini ad. For £60 this little beauty, with its 25 keys, 8 pads and 8 knobs, is just perfect. I can play a MIDI file in Ableton and overdub parts of it using the keyboard, but better still, map the whole piece onto the Akai so that each key I press sounds a whole multi-instrument phrase in perfect time. When combined with the Akai's built-in arpeggiator this makes for terrifying displays of orchestral pseudo-virtuosity that I may one day inflict on the unsuspecting public if they don't behave themselves.

And what about my other gadget? That belongs in the curious domain between the analog and the digital: a tiny Donner looper pedal that records two separate tracks of guitar, of 10 minutes' duration, via a foot button. The guitar rig I've built up over the years incorporates a 10W Marshall practice amp with a variety of effects, a vintage Zoom digital drum machine and a micro-mixer, all feeding into this new looper. I can now generate some unfeasibly complicated algorithmic/orchestral backing track in Ableton, output it to a WAV file on my tablet and plug that into the mixer, then play live guitar over it. Or I can snitch a few seconds of bass from my favourite Dave Holland album on Spotify into the looper, and put some drums onto it, played with my fingers on the Zoom's pads. The looper has built-in reverse and speed-up effects so I can pretend to be in one of Bill Frisell's great trios, if only for a few minutes. Virtual Reality isn't the only way of using digital technology to live in a make-believe world, and you don't bump into things so often in a sound world...

WHEN TO STOP

Dick Pountain/Idealog 274/05 May 2017 11:06

You're probably familiar with the expression "Beauty is in the eye of the beholder", but what exactly do you think it means? The obvious interpretation is relativist: what I find beautiful need not be what you find beautiful. But we could also take it literally, namely that beauty is a property of, or a process within, the human eye (or visual cortex, or brain in general) rather than a property of external objects.

My two main "hobbies", regularly chronicled in this column, are fiddling with photographs in PhotoShop Elements 5, and composing algorithmic music using my own software recently rewritten in Python. What these activities have in common is firstly that they are both pursued on a computer, and secondly that they are both iterative in nature: I make a change, look at it or listen to it, decide whether or not it's for the better or the worse, accept or reject it, then make another, and so on until I'm sufficiently satisfied to stop and call it a finished "work".

You may have already spotted that this is the very same process practiced by artists of all eras, on all materials and in all styles - with one exception made possible by the computer and the digital nature of its material. That exception is that in most real-world materials, the step changes are irreversible. That's most clearly true for a sculptor who chisels off chips of marble that can't be replaced, but partially so for an oil painter who applies daubs that can't be removed, only overpainted, or a watercolourist who lacks even that option. Writers, before the advent of word-processing, had recourse to crossing-out, the India rubber, Tippex or the waste-paper basket. None of that matters so much as the end point of the process, knowing when to stop, and in this the computer hinders rather than helps.

That tempting Undo icon means I could go on fiddling with the same piece until death, or the heat-death of the universe (whichever comes first). There is a defence against such runaway fiddling though, and that's the number of parameters involved. When processing a photo I have available 120+ filters, 23 blend modes and 20 other adjustments which, when applied even to just two layers, make for around 3 billion possible combinations per iteration, and I make dozens of iterations. Documenting all the parameters for each step, to make the whole process reproducible, would be impossibly tedious, but in any case I don't want to. I'm happy to play on the same level field as Michelangelo and Beethoven, even if my game is far less impressive.
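For the arithmetically curious, here's the back-of-an-envelope sum behind that 3 billion, sketched in Python (the assumption, mine rather than Adobe's, is one filter, one blend mode and one adjustment chosen independently for each of two layers):

per_layer = 120 * 23 * 20     # one filter x one blend mode x one adjustment
two_layers = per_layer ** 2   # independent choices on each of two layers
print(f"{per_layer:,} per layer, {two_layers:,} for two layers")
# 55,200 per layer, 3,047,040,000 for two layers: call it 3 billion per iteration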

Playing on the same field is another way of saying that it's our eyes, ears, arms, legs, brains that decide when to stop, the decision that creates what we call beauty. (Actually it's noses and tongues too, if you're prepared, as I am, to include cooking and perfumery as arts). This ability to know when to stop may have been hard-wired into our brains by evolution. Animal brains contain many pattern analysing circuits, some imparted by evolution, some further trained by early experience. On the evolutionary side it's been suggested that colour vision offered a dietary advantage in distinguishing ripe from unripe fruit (many carnivores lack it), and it's also deeply involved in sexual display, and hence selection, particularly among birds and insects.

When Pablo Picasso drew a freehand squiggle it always looked better than my own squiggles, and I propose that's a matter of radii of curvature: some curves are right and some are wrong (too acute, jerky). In other words some curves are just sexier than others, and I'll leave the evolutionary "explanation" to you. We could analyse this mathematically using line integrals, div, grad and curl, but that would be pointless. Our pattern circuits are connected to our limbic system, source of emotions, that is, of dopamine reward. No amount of maths can substitute for looking, then feeling good at what you see.

Another possible evolutionary advantage of curve sensing is for landscape recognition. I used to spend time in a small croft near the sea in northern Scotland, where the seaward skyscape was dominated by seven distinctive humps of hills that we, of course, called the Seven Sisters. I can still imagine that shape if I close my eyes, leading me to hypothesise that this ability to recognise your home landscape may have had great survival advantage in the epochs before roads, church steeples or GPS. To zealots of strong AI, I'll just say this: we might analyse all such sensual recognitions and turn them into algorithms, but so far there's no way that a computer can *feel* good, and without that it can't have any real understanding of what beauty is. 

A GLINT OF CHROME

Dick Pountain/Idealog 273/07 April 2017 11:19

Taking on a new computing platform was furthest from my mind, but it just happened anyway. I'm still basically wedded to Windows (albeit in v8.1) which I've wrestled to a stalemate that works well enough, only needing a reboot around once a quarter. For mobile I'm equally wedded to Android, via my 7" Asus tablet and HTC smartphone, with Google providing the interface between the three devices via Gmail, Contacts, Calendar and the beloved Keep, plus a little help from Dropbox. But now they are four...

PC Pro's publisher, Dennis Publishing Ltd, is on the last lap of preparing to move office, after 20+ years, from one 1930s block in London's Fitzrovia to a smaller one nearby. The new offices are extensively refurbished, very much in Silicon Valley style with lots of wide-open meeting and relaxation spaces and almost no horrid "open-plan" partitions. The CEO of Dennis Publishing is James Tye, who, regular readers may remember, was once Editor of this very magazine and is hence very IT-savvy. When deciding the IT infrastructure for the new building, James chose one as radical as the interior design: as many staff as possible would be issued with Chromebooks in place of PCs, and the company LAN mostly replaced by The Cloud. There are necessary exceptions, like the accounts department which still needs its servers and software suite, and the designers who still need Macs to run InDesign, but the LAN will cease to be the main data store for everyone else, and most internal communication will be via Slack, Gmail and Google Drive.

They're buying upwards of 200 HP Chromebooks, whose 12+ hour battery life makes entirely wireless working feasible (the building has designed-in Wi-Fi with three incoming fibre lines for redundant backup) so people can work in places other than their desks. During a tour I expressed admiration for this brave leap into The Cloud, but though curious about Chromebooks, I stressed my Android/Windows devotion. James immediately and generously offered to loan me a Chromebook to see for myself whether they do everything I need.

What arrived wasn't an HP but a Dell 13, purchased during their evaluation phase, with a smart illuminated keyboard and magnesium chassis. Spec isn't everything in a Chromebook, since much of the oomph is supplied at Google's end, but this one does look and feel good, with a Gorilla Glass screen and trackpad that are noticeably superior to my own Lenovo Yoga's. More disconcerting was turning it on to a practically blank screen. On closer inspection there was an unlabelled round icon in the lower left-hand corner that brought up the Google app launcher, familiar from my other devices, containing the Chrome browser, Gmail, Maps, Calendar, Translate, Keep and all. More surprisingly there were also several of my mobile apps - Guitarists Reference, GIF Maker, The Guardian and Pocket - which, being Android apps, I'd assumed I'd have to replace. Of course all my calendar, contacts and Google documents were there already, which is the whole point.

After a couple of weeks I'm writing this on the Chromebook, in a Google Docs editor that's just as responsive as the LibreOffice Writer or TextPad I normally use. What still drives me mad though is the absence of a Delete key: you have to use Alt-Backspace, which my brain knows but my fingers don't (and perhaps never will). My brain, on the other hand, has difficulties with the Chrome OS file system, whose use of "download" and "upload" is the opposite of what I mean and deeply counterintuitive. I've put a lot of my data up on Google Drive, but still find it harder to navigate than Windows. I know I should just create and store all new stuff in the Cloud, but as a crusty product of the PC revolution I need to know stuff is on my local hard drive too.

What I do most, apart from writing, is process my photographs and program in Python, so how do these fare on a Chromebook? After trying dozens of photo editor extensions, Sumo Paint has about 75% of the Photoshop Elements features that I need, though it's unfashionably Flash-based. As for Python, I can run small modules on the Skulpt interpreter Chrome extension, but it won't import all the modules of my large music project. For that I'd need a full Python 3.4 implementation, which means either installing a Linux distro or running Android apps (QPython), which the Dell won't do by default, so I'd need to switch to the unstable beta-developers' mode. Both are tech hassles I can do without. Google is really missing a trick with its failure to properly integrate Chrome OS with Android, because that combination could really give Apple and Microsoft sleepless nights.

Sunday 6 August 2017

TRUST ME, I'M A NERD

Dick Pountain/Idealog 272/15 March 2017 13:54

A few months ago (issue 269 to be exact) I wrote a column about the way the so-called 'alt-right' in the USA had built an alarmingly effective alternative web of sites that pumped a continual stream of 'fake news' stories into the mainstream social and news media during the 2016 presidential election campaign. A US professor of communications, Jonathan Albright, mapped the topography of this dark web, which he dubbed a 'micro-propaganda machine', and the Guardian printed his map. This network employs advanced SEO and link tweaking tricks to stay hidden.

Turns out though that this was barely half the story, and after I found out some of the other half I rather wished I hadn't, because I've been feeling slightly queasy ever since. I found out through a highly entertaining blog post by Dale Beran, a writer and comic artist who had the distinction of being an early user of the 4chan.org website (where his comic work was admired). Now I've personally only been on 4chan.org once - by accident when following some obscure search - and I began to feel uncomfortable after about three minutes, fled after five and spent the next half hour scrubbing against malware. Bit like a jungle, bit like a locked-ward, bit like a circus, good test of your anti-virus solutions...

According to Beran's extraordinary piece on Medium, 4chan was the breeding ground for the Anonymous hackers network, the 'Gamergate' scandal, Bitcoin, and more recently for the alt-right micro-propaganda network. Its denizens tend to be nihilistic, misogynistic, game-playing nerds with an extremely dark (and often very funny) sense of humour. Beran explains why they swung their considerable online skills behind Donald Trump: not because he was any good, not because they agreed with his politics (they don't really do politics) but precisely because he's so awful - the ultimate prank, elect a nutjob as president of the world's sole superpower. They also supplied the Trump team with its mascot, that unpleasant cartoon frog Pepe.

I recommend you read Beran's piece for yourself, with a couple of big Solpadeine and a glass of water to hand. And I have to admit to being jealous of the deft way he wove a Charles Bukowski novel about horse racing into his narrative. But what I intend to pursue here is the likely effect of this apotheosis of fakery on the future of our affairs, which might go way beyond mere prankery. The overall effect of all this fake news and contempt for rational argument - which has started popping up not only in White House press briefings but everywhere from Sweden to Turkey to China - is to erode trust, and trust is perhaps the most valuable commodity in the whole world.

Surely a slight exaggeration? Everyone tells a white lie now and again, don't they? No need to get so worked up about it. Actually I'm not worked up at all, because I don't see this from a moral perspective. No, what terrifies me is that debt, credit, markets and even money itself are all just forms of materialised trust. If I hand you a fiver, you not only trust that I didn't forge it but, more importantly, you also trust everyone else to recognise its value and give you the equivalent goods in exchange for it. The very word 'credit' is defined by Collins as 'the quality of being believable or trustworthy'. Undermining trust is like putting sand into the engine-oil of the world economy.

To be sure the relationship between money, credit and trust is rather more complex than I've just suggested, as extensively analysed by everyone from Adam Smith to Maynard Keynes. My favourite recent study is 'Debt: The First 5,000 Years', a lively and groundbreaking work by David Graeber (one of the masterminds behind Occupy) in which he says:

'A debt is, by definition, a record, as well as a relation of trust. Someone accepting gold or silver in exchange for merchandise, on the other hand, need trust nothing more than the accuracy of the scales, the quality of the metal, and the likelihood that someone else will be willing to accept it.'

Which brings me to another dangerous online lifeform, the gold-bug. You'll have seen those websites where folk argue that all our bank-created 'fiat money' is actually worthless, just bits on a hard disk, and that gold is the only real source of value. A few years ago this seemed either mildly eccentric or mere gold-salesman hucksterism, but keep undermining trust in everything and these arguments start to be believed. That way eventually leads to Northern Rock. Once you believe nothing, in practice you'll believe anything: say, that insulting China will resurrect America's rusting factories, or that leaving the EU will save the NHS from collapse.

ANALOG REVOLT?

Dick Pountain/Idealog 271/16 February 2017 14:50

Our industry is notorious for its love of buzzwords (think "agile", or "responsive", or "passionate") but currently the most egregious one, the one that makes me reach for my imaginary Luger, is "creative". It's one of those words, like "love" or "cool", so radically unabusable, so self-evidently good and desirable, that it's guaranteed to be continually abused. I of course consider myself creative, like everyone else in the friggin' world, but I've recently been prompted to examine more closely exactly what that means. What prompted me to this reflection was a Bill McKibben review, in the New York Review of Books, of "The Revenge of Analog: Real Things and Why They Matter" by David Sax.

Sax’s thesis is that we're witnessing more and more islands of analog refuge among the foaming waves of digital media and communications: places where we can relax and think as opposed to clicking, and where we touch actual physical objects instead of bitmaps. One obvious example is the re-discovery of vinyl records by young people, in sufficient numbers to warrant the reopening of several pressing plants. Another is the recent fashion for carrying a Moleskine notebook, even among folk who clutch the latest iPhone in their *other* hand. (This actually sent me off down a timewasting diversion about pronunciation: Brits call it Mole-skin, Europeans and ex-Europeans like me say Mol-ess-kinay). The company recently went public valued at a positively digital €490 million, for a quintessentially analog product. Sax talked to architectural firms and software companies who hand out these notebooks and forbid their designers to turn on their computer until they've brainstormed an initial design on paper. The electronic whiteboard utterly failed to displace paper pads and marker pens, as used by this very magazine in our own brainstorming sessions...

Sax continues, though, to push this argument a step too far for me. Moleskine's ads claim their notebooks helped Picasso and Hemingway to success:

"Creativity and innovation are driven by imagination, and imagination withers when it is standardized, which is exactly what digital technology requires—codifying everything into 1s and 0s, within the accepted limits of software."

Well, perhaps, but remember paper notebooks were all they had. We can't know whether Picasso would have taken to Procreate or Zen Brush on an iPad, but my guess is he'd have loved it. The 1s and 0s objection is a 10ad 0f 01d b0110x in the era of GUIs.

I've written many columns about my quest for usability in pocketable devices, and this book review played right into my latest discovery, namely that Google Keep has just added a sketching facility. I can now knock out line and tone drawings with my finger on phone or tablet as fast as I could with a Moleskine and a pen, and they automatically appear on all my devices without having to scan them in. If I need more features, Autodesk's brilliant Sketchbook is installed on my tablet too, and Android's ubiquitous Share menu shuttles pix between them in seconds.

I've also written here about the thousand-odd processed photographs that I keep on Flickr, and about my Python-based music composition system. In both these media I do actually respect Sax's aversion to "standardization": for example when I apply dozens of successive filters and blend-modes to a photo, I deliberately refrain from writing down the sequence so that image is unique and unrepeatable - though of course it is easily *copyable*, which is a principal joy of digital versus analog. I do the same when composing tunes: while I keep the Python source code for each family of tunes, I don't record every parameter (for example random ones) for each particular instance, so these tunes have the same uniqueness as my photos.

A deep attachment to matter is both desirable and laudable, given that we're all made of it, but it can also stray into the sentimental or romantic. It also cuts two ways: while matter has a certain permanence and leaves a historical trace - we still have pottery and statues made thousands of years ago - most people believe that digital data is impermanent, volatile, easily lost (which can be very true if you have a sloppy backup regime). But that very volatility is also a strength of the digital realm. Art is indeed all about essences, representations and images, and handling these digitally produces far less waste of both materials and time. Check out the price of oil paint and canvas nowadays: it's an expensive way to learn. Best overcome any superstitious aversion to 1s and 0s, design and edit your work in the digital domain and only finally turn it into matter when it's good enough. After all that's what 3D printers and record pressing plants are there for.

Thursday 8 June 2017

PYTHON RHYTHM

Dick Pountain/ Idealog 270/10 January 2017 10:54

I may have mentioned here a few dozen times how deeply I'm into music. I play guitar, bass and dobro (for amusement rather than reward); I regularly attend chamber and orchestral concerts at the South Bank and Wigmore Hall; I listen to classical, jazz, blues, rock, bluegrass, country, dubstep, reggae, EDM and much, much more on Spotify, both at home and on Bluetooth earbuds while walking on the heath or in the park. But on top of all that, for the last 20-odd years I've been working on my own system of computerised composition.

It started in the mid-1990s, using Turbo Pascal to write my own music library that let me generate MIDI files from Pascal programs, and play them on any General MIDI synth. Before too long the memory management limitations of PC-DOS became an obstacle to writing big programs though. I've picked up and dropped this project many times over the intervening years, planned and failed to rewrite it in Ruby, planned and succeeded in rewriting it in the blessed Python. And I recently cracked a couple of remaining knotty problems (both related to the grim unfriendliness of the MIDI protocol) to produce a system that automatically generates tunes which sometimes sound convincingly like human music. A sort of musical Turing Test - not intended as yet another tool to help you write pop or dance music, because there are plenty of good ones already like Ableton and Sibelius.

Instead my system lets me dissect music into its atomic parts, then reassemble them to generate tunes few humans could play. One of the few real regrets of my life is that as a child I turned down the offer of piano lessons, and therefore didn't learn to sight-read musical notation at the best age. Instead I took up guitar in my teens and taught myself blues and ragtime by listening to records. This has had a profound effect on my understanding of music because the guitar is a fundamentally chromatic instrument: each fret is one semitone so the difference between white and black notes means nothing. I did painfully teach myself to read notation eventually but I'm far from fluent, and more to the point it actively irritates me. I now think chromatic.

Hence when designing my system I decided not to make the 'note' its fundamental data structure, but rather - like MIDI itself - to treat pitch, time, duration and volume as separate musical 'atoms'. Of course it's trivial in Python to strap these atoms together as (pitch, time, duration, volume) tuples, then manipulate those as units, and it enables me to translate ASCII strings into sequences of these atoms. The process of 'composing' a tune then becomes a matter of writing strings that define melody, rhythm and dynamics, each of which I can alter separately. More significantly, the *program* can manipulate them by splicing and chopping, applying functions, adding randomness - say to change the rhythm or the timing of the same melodic fragment.
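To make that concrete, here's a minimal sketch of the idea in Python - the names and the letter-to-semitone mapping are invented for illustration, not lifted from my actual library:

from collections import namedtuple

# One musical 'atom': pitch, start time, duration and volume kept separate, as in MIDI
Atom = namedtuple('Atom', 'pitch time duration volume')

def atoms_from_strings(melody, rhythm, base=60, beat=0.5):
    """Translate two ASCII strings into a sequence of Atoms: each letter is a
    chromatic offset from a base MIDI note, each digit a duration in beats."""
    seq, t = [], 0.0
    for ch, dur in zip(melody, rhythm):
        pitch = base + (ord(ch.upper()) - ord('A'))   # strictly chromatic: one letter, one semitone
        length = int(dur) * beat
        seq.append(Atom(pitch, t, length, 90))
        t += length
    return seq

print(atoms_from_strings("TORX", "2121"))

Because melody and rhythm arrive as separate strings, either one can be chopped, spliced or randomised without touching the other - which is the whole point of keeping the atoms apart.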

All the compositional tricks like fugues, canons, rondos and arpeggios are easy to achieve and automate. Play the same fragment in a different scale - major, minor, cycles of thirds, fourths, fifths, wholetone, Bartok's Golden Section pentatony - by choosing from a tuple of scale functions. I can even create new scales on the fly using lambda function parameters. It's strictly a system for programmers, interfaced via a single Python function called 'phrase' that takes five ASCII strings as parameters. Each invocation, like:

S.phrase(1, 1, scale[mx], Key[2], Acoustic_Bass, S.nSeq(inv(p),t,d,v,m))

writes a sequence of notes, of length defined by those strings, to one MIDI track. Programs tend to be short, forty or fifty lines with lots of loops. Using ASCII permits silly tricks like turning a name, let's say Donald Trump, into a tune (and yes, I have, as a sinister bass riff). MIDI is quite limiting of course - the available instruments aren't that great - so any tunes I like I whisk into Ableton and play them with proper samples into an MP3 file.
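To give a flavour of that tuple-of-scale-functions trick, here's a toy version (again, my own invented names rather than the real code): each scale is just a function from a degree number to a semitone offset, so a brand-new scale is only ever one lambda away.

# Each scale maps a degree number onto a chromatic (semitone) offset
major     = lambda d: [0, 2, 4, 5, 7, 9, 11][d % 7] + 12 * (d // 7)
wholetone = lambda d: 2 * d
fourths   = lambda d: (5 * d) % 12 + 12 * (d // 12)
scale = (major, wholetone, fourths)

fragment = [0, 1, 2, 3, 4]                    # the same melodic fragment...
for fn in scale:
    print([60 + fn(d) for d in fragment])     # ...rendered in three different scales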

Though I do like some electronic dance music (especially Brandt Brauer Frick), that's really not what this system is meant for. I'm more into the sort of jazz/classical fusions that mess with tricksy rhythms and harmonies that would be hard to play on real instruments. Everyone worries nowadays that robots are going to put them out of a job, but neither Adele nor Coldplay need have any fear on my account. In any case my competitors would more likely be software suites developed at France's IRCAM and INRIA, and they needn't panic either. And I don't intend to build a graphical user interface, because for me writing program code is just as much fun as playing with music.

[Dick Pountain would like to offer you a jaunty little tune based on the words BREXIT and TRUMP, by way of demonstration: file brexit bounce.mp3]

CTRL-ALT-DERIDE

Dick Pountain/Idealog 269/05 December 2016 10:47

I got my first inkling of the potential political power of social media back in 2007, on receiving a Facebook friend request from Barack Obama. I was momentarily surprised since I'd never met the man and at that time barely knew who he was. I quickly realised though that it wasn't from Barack himself but rather from a bunch of young IT smart-asses at his campaign headquarters who'd learned how to scrape the content of FB posts to compile a list of people who might be sympathetic to his cause (though clearly it couldn't tell them whether said people had a vote in the USA). It seemed to me then entirely predictable that it would be Democrats, young and "progressive", who first learned how to play the social media, rather than those old, rigid, luddite, perhaps religious Republicans. Oh boy, how things change...

I expended quite a few typing-finger joules during the autumn of 2016 on Facebook, pointing out to certain American friends that the news stories (almost all involving Hillary Clinton) they were commenting on were bogus - affiliated to a number of Far-Right websites that they could easily have checked on Google - and that commenting merely promoted this bogosity to the top of their friends' news feeds. I'd already noticed over the previous year or so, while loitering among the Guardian's Comment is Free forums (another bad habit), that whenever the subject of Putin's Russia was mentioned, a small army of trolls with slightly odd English syntax would appear within minutes, as if from a hollow tree trunk. It began to feel as though something organised was going on.

Turns out my gut feeling was right. Since Donald Trump's surprise election win (not that surprising to me in fact) the UK newspapers have been frothing about the Post-Truth era, Macedonian teenage lie-mongers and the rise of Fake News sites, but embedded among all this froth was an alarming nugget of real information. On December 4th 2016 an Observer story by Carole Cadwalladr alerted me to a research project by Jonathan Albright, assistant professor of communications at Elon University in North Carolina, into the interconnectivity of Far-Right websites, which he calls "The #Election2016 Micro-Propaganda Machine" (https://medium.com/@d1gi/the-election2016-micro-propaganda-machine-383449cc1fba#.gp86cg9ns).

Albright described this phenomenon as an "influence network that can tailor people's opinions, emotional reactions, and create 'viral' sharing (LOL/haha/RAGE) episodes around what should be serious or contemplative issues". Or, more succinctly, "data-driven 'psyops'". He crawled and indexed 117 sites known to be associated with the propagation of fake news, and produced a mind-boggling diagram of the outgoing links from them, which create a vast alternative web that feeds back into the mainstream web via YouTube, Facebook, Twitter, Google and others (diagram at https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook#img-4 if the whole article is TLDNR for you). The builders of this network appear to have a masterly grasp of SEO and of tweaking site linkages, which to judge from my own experience is now way in excess of anything Democrats or Independents are able to muster.

The size and effectiveness of this network raises some important questions for the future of our industry (and indeed whole society). Back in Idealog 267 I commented on the fact that the big five IT companies, through their sheer size and reluctance to pay taxes, may soon find themselves at odds with the federal government of the USA. That possibility is now magnified many times with the Republicans in command of both houses and well aware of the Democratic leanings of most Silicon Valley moguls. How much are you prepared to bet on whether the big five will resist all attempts to curb their independence, or will *reluctantly* bend the knee? And what happens if this new government with deep connections to the Alt Right gets a hold of that mighty untruth network?

Politicians have of course been telling lies ever since the Trojan War, but persistent and systematic lying is a relatively recent phenomenon which you might date from Karl Rove's famous quote about the Iraq War: "We're an empire now, and when we act, we create our own reality". In my own Naturalistic philosophy (read all about it at https://www.amazon.co.uk/dp/B00CORF62O) the world is made of atoms but we can't see them directly, just information about them sampled by our various sense organs. This information isn't always "correct": we can mistake things, imagine things, hallucinate, dream, play computer games. The internet is a perfect conduit for such information, for pictures of pizzas rather than pizzas you can actually eat. My definition of a Truth is information that actually corresponds to the action of some clumps of atoms in the real world at a certain time and place, and Albright's Micro-Propaganda Machine, once in malign hands, poses a real threat to the dissemination of such Truths.

[Dick Pountain is finding it very hard to imagine the lunchtime conversation between Donald Trump and Mark Zuckerberg]

BITS, ATOMS AND ELECTRONS

Dick Pountain/Idealog 268/08 November 2016 11:13

In a recent review of Amazon's Echo I read that "voice assistants have been on our phones for a long time, but they haven't really taken off" (until Alexa that is). The part of this claim that stopped me dead was that phrase "long time". Apple's Siri arrived with iPhone 4S in 2011 while Microsoft's Cortana was rolled out in 2015, so it would seem that five years is a "long time" nowadays. And it's true. Technology progresses, if perhaps not exponentially then according to some power law, so five years really can now be a whole generation. Leaps from CDs to downloads, downloads to streaming, keyboards to touch screens to voice: for millions of youngsters those "older" techs, if known at all, seem prehistoric.

Many commentators would proceed from there to discuss the effect of such rapid change on the human psyche, but I propose to skip all that (except for a throw-away aperçu that this crazy pace is partly to blame for that widespread disgruntlement displayed via Brexit and Trumpism). I'm actually more interested in how sustainable this pace is. There is of course one popular strand of opinion that answers this question with "for ever and ever", until we have robots smarter than us, we live in virtual worlds, and all our minds are connected together into some kind of singularity. I don't believe a word of that, perhaps because I had my fill of science fiction back in the '60s. I'd argue instead that although progress will continue, it's already diverting in a different direction, from bits back toward atoms.

I first put Bits v Atoms in a column title over 20 years ago (PC Pro issue 22), when observing that you can order a pizza via the web but you can't eat the picture of a pizza on the web. A platitude that remains true, though you can nowadays order a pizza and have it delivered to your door by Deliveroo, Just-Eat, Hungryhouse, GrubHub or a dozen similar sites. We're made of atoms so we need to eat atoms, and though we pay for those atoms nowadays in electrons (credit card transactions), those electrons only have value thanks to the atoms they can buy. Cars, houses, boats, yachts, private jets: these are the things the owners of our social media get out of the game, not pictures on screens (and most of us would like some too). This inexorable logic means that over the long term, information - bits, pictures on screens - can only decrease in value compared to atoms, and we're already beginning to see the effect of this on the internet giants. IT and the internet may have reduced the cost of distributing bits almost to zero, but they've barely started on reducing the cost of distributing atoms/things.

On the one hand Twitter - which is still losing *billions* rather than millions - just had to axe its video-sharing service Vine. On the other hand Netflix and Amazon have both started creating their own original digital content, which surely contradicts my thesis. But does it really? Already I'm reading in the business sections that both companies are getting nervous about the enormous cost of producing this content, and I'd guess that both are doing it only as a long-term strategic attack on Hollywood and the over-air and cable TV companies. Once (and if) they manage to slay those, pillage their audiences and archives, it will surely be more profitable to revert to recycling that vast archive, rather than pay for expensive new content. The makers of the great movies of the 20th century had no choice but to spend all that money: given a choice, a canny modern investor won't.

There is a way though, admittedly a wildly eccentric way, that I can see for the internet of the far future to remain economically viable. Some rather loud hiccups notwithstanding, BitCoin has demonstrated that it can fulfill the role of paying for stuff in the atom world. Also, the notion of a Universal Basic Income is starting to be taken seriously in some rather surprising political quarters. Now BitCoin imparts value to its electrons through scarcity, by the process of "mining" them using a horrendously intractable algorithm running on a very fast computer. Fine, let's invent a smallish computer that will be fitted into every household like a utility meter, which contains a whole stack of fast GPUs that are continuously mining something resembling BitCoin - the amount yours uncovers becomes your Universal Basic Income. What's more, these GPUs are water-cooled, so it's also your central heating boiler! Voilà, energy and money integrated into the same system that's distributed via existing infrastructure. OK, so the banks and electricity companies won't like it much. Boo hoo.
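For anyone wondering what "horrendously intractable" actually means, the mining is essentially a brute-force hunt for a rare hash; here's a toy sketch in Python (a cartoon of proof-of-work, nothing like real Bitcoin's difficulty targets or block format):

import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose SHA-256 digest starts with 'difficulty' hex zeros.
    The work required - and the waste heat for our notional boiler - grows
    exponentially with difficulty."""
    target = '0' * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

print(mine("household meter 42, block 1"))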


CLOUDS ON THE HORIZON?

Dick Pountain/Idealog 267/05 October 2016 11:05

Last month PC Pro ed Tim Danton asked us for masthead quotes on "Which technology brand have you recommended most over the years, and why?" My reply was "Google, since their cloudy goodness now serves most of my computing needs". At the time that felt like a rather dull answer, but ever since Google cluster-bombed the industry with new products on 5th October I feel pretty damned smug (and prepared to pretend I had inside knowledge). 

That answer was based on the fact that Google has been handling my email, calendar and contacts in its cloud for many years now, thus permitting me to access them from any device and platform. More recently I've found that the ever-increasing capabilities of Google Keep let me use it for much of my data storage too. Don't be fooled by its Tonka Toy appearance: its combination of full-text search, colour coding and labelling makes it the most flexible text database I've found, plus it's multiplatform and its voice recognition is effective enough to enable dictation of notes. When shopping in Sainsbury's my grocery list is a Keep widget on my Android phone's home screen, and all the outlines and notes for this column go into Keep via my Lenovo Yoga laptop. I don't use Google Docs much myself, preferring Dropbox, but I often receive long documents that way from others I work with (including PC Pro).

Google could in theory let me realise the perennial dream of a single box that does everything, given a large enough phone (I refuse to use the ph**let word) or top-end Chromebook, but actually it's achieved the opposite. Since I can access my data from anywhere I use five different boxes: Lenovo Windows laptop, Asus Android tablet, HTC Android phone, Amazon Kindle 4 and a Sony WX350 compact camera (for me phones make lousy cameras, regardless of resolution). As for those other four boxes, it's all about screens. I use the tablet most, to read and answer email, run Citymapper when I need to plan a trip across London, but mostly for searching Wikipedia and Google while I'm reading. And my reading is done either in paper books or on Kindle - I deliberately keep a vintage, non-backlit, non-touchscreen version 4 because I prefer to read by natural light and only want pages to turn when I say so. Nowadays I tend to request review books in Kindle format so I can make notes and highlight quotes as I read, then cut-and-paste those from Windows Kindle Reader on my Yoga straight into Libre Office. 

Since so much of my work and personal data now resides in Google's ecosystem, doesn't that dependency worry me? In some ways yes, but perhaps not ways you'd expect. I feel safer with Google than I would with Apple for all kinds of reasons: Apple has an even worse record than Google for high-handed treatment of its users, and I'm no devotee of its cult of shiny things. Both Amazon (via Kindle and Fire tablets) and Facebook would love to create ecosystems as rich as Google's, but they will be a long time coming and I don't trust either company that much. So the question for me really is, what's the future for *all* of these giant IT empires? 

A snippet from The Week's US business pages caught my eye recently, reporting that for the first time ever the five most valuable companies in the world by market capitalisation are precisely these US tech companies: Apple, Alphabet (ie. Google), Microsoft, Amazon and Facebook. The amount of US corporation tax they avoid every year, and the trillions of dollars they have stashed in overseas tax havens, would transform the Federal budget if repatriated and taxed properly, providing enough on an ongoing basis to reform both the US health and educational systems. They all have turnovers comparable to the GDPs of many small countries. They possess competences that would be invaluable if applied to government. They are, in many respects, like mini-states themselves, the most notable missing component being that they don't have armies. This being the case, my extensive reading (especially of Machiavelli and Hobbes) suggests that the real state, that is the US Federal Government, cannot forever tolerate their current behaviour, nor resist the rich pickings that they flaunt. Sooner or later they'll cross some invisible line - Apple nearly got there by refusing the FBI request to crack that terrorist iPhone - a serious confrontation will arise, and they will discover what all such aristocracies throughout history have eventually learned, namely that you really do need an army. I don't pretend to know when or how it will happen, or with what result. I'm 71, I'll just keep using Google anyway...

Friday 17 February 2017

A PROGRAM FOR LIFE

Dick Pountain/Idealog 266/06 September 2016 12:41

Long-term readers of this column, assuming that any remain, may have become aware of a small number of themes to which I return fairly regularly. I don't count computing itself as one of these as all my columns are supposed to be about that. The top three of those themes are biology-versus-AI, parallel processing and object-oriented programming but recently, for the first time in 20+ years, I've begun to see a way to combine them all into one column. This one...

Last month I described how I plucked up the courage to catch up on Evo Devo (Evolutionary Developmental Biology), the recent science of the way DNA actually gets turned into the multitude of forms of living creatures. To relate Evo Devo to computing I knocked together a rather clunky analogy with 3D Printing, but since then it has occurred to me there's a far stronger potential connection which has yet to be realised. The aspect of modern genetics that lay people are most familiar with is the Human Genome Project, and in particular the claims that were made for it as a route to curing genetic diseases. Those cures have so far been slow in arriving, and Evo Devo tells us why: because the relationship between individual genes and properties of organisms is nowhere near so simple as was believed even 20 years ago. DNA-to-RNA-to-protein-to-physical effect isn't even close to what happens. Instead there's a small group of genes shared by almost all multi-cellular creatures that get used over and again for many different purposes, in many different places, at many different times, controlled by a mind-boggling network of DNA switches contained in that "junk" DNA that doesn't code for proteins. In short, genes are the almost static data inputs to a complex biological computer, contained in the same DNA, which executes programs that can build a mouse or a tiger from mostly the same few genes.

The Human Genome Project of course relied heavily on actual silicon computing power, not merely to store the results for each organism it sequenced - around 200GB per creature - but also to operate the guts of those automated sequencing machines that made it possible at all. However the data structures it worked with were fairly simple, mainly lists of pairs of the bases A, C, G, T. But let's suppose that the next generation project ought to be to simulate Evo Devo, in other words to mimic the way those lists of bases actually get turned into critters. Then you'd need some very fancy data structures indeed, ones that aren't merely static data but include active processes, conditional execution, spatial coordinates and evolutionary hierarchies. And of course all of these components already exist and are well understood in the world of computer science.

The first step would involve object-oriented programming: decompose those long lists (around 3 billion bases in each strand of your DNA) into individual genes and switch sequences, then put each one into an object whose data is the ACGT sequence and whose methods set up links to other genes and switches. You'd have to incorporate embryological findings about where in relative space (measured in cells within the developing embryo) and time (relative to fertilisation) each method is to be executed, and since millions of genes are doing their thing at the same, meticulously choreographed time, the description language would need to support synchronised parallelism. Having built such a description for many creatures, you could then arrange all their object trees into an inheritance hierarchy that accorded with the latest findings of evolutionary biology. If you were feeling mischievous you could call the base class of this vast tree "God".
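A crude Python sketch of the kind of object I have in mind - every name here is hypothetical, and the biology is brutally simplified - just to show how sequence data, switches and place-and-time conditions could live in one structure:

class Gene:
    """A toolkit gene: static ACGT data plus the regulatory switches that
    decide where and when it fires during development."""
    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence      # the static data: a string of A/C/G/T bases
        self.switches = []            # (condition, downstream gene) pairs

    def add_switch(self, condition, target):
        # condition(cell, t) -> bool: is this switch on in this cell at this time?
        self.switches.append((condition, target))

    def express(self, cell, t):
        print(f"{self.name} expressed in segment {cell['segment']} at t={t}")
        for condition, target in self.switches:
            if condition(cell, t):
                target.express(cell, t)

# A regulator that switches on a 'wing' gene only in segment 2, early in development
wing = Gene("wing", "ACGTTGCA")
hox  = Gene("hox",  "GGCATCGA")
hox.add_switch(lambda cell, t: cell["segment"] == 2 and t < 5, wing)
hox.express({"segment": 2}, t=3)      # fires both genes; in any other segment only hox fires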

If someone were to make me Director of this project and promise me a few tens of billions of dollars, I'd propose that a new programming language be created as a hybrid of Python - which has excellent sequence handling, in addition to objects - and Occam, whose concept of self-syncing communication channels the IT business is only just catching up with after 30 years (witness Nvidia's Tesla P100 architecture). Naming it would be a problem, as neither Pytham nor Octhon appeals and Darwin (the obvious choice) is already taken. The rest of the dosh would go towards a truly colossal multiprocessor computer, bigger than those used for weather forecasting, simulating nuclear explosions or the EU's Human Brain Project. Distribute the object tree for some creature over its millions of cores, connect up to a state-of-the-art visualisation system and you'd be in the business of Virtual Creation. And once it could tell mice from tigers, you'd perhaps be in the business of curing human diseases too. Unfortunately the people who have this sort of cash to spend would rather spend it on one-way trips to Mars...
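By "self-syncing communication channels" I mean something in the spirit of Occam's blocking rendezvous. A rough imitation in plain Python, using threads and a capacity-one queue (a toy only - not a true zero-buffered Occam channel, and certainly not the proposed language):

import threading, queue

channel = queue.Queue(maxsize=1)    # capacity one, to approximate a synchronising channel

def signalling_cell():
    channel.put("morphogen gradient ready")   # blocks if the previous message is still unread

def receiving_cell():
    msg = channel.get()                       # blocks until something arrives
    print("received:", msg)

a = threading.Thread(target=signalling_cell)
b = threading.Thread(target=receiving_cell)
a.start(); b.start(); a.join(); b.join()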



CICÁDAMON GO?

Dick Pountain/Idealog 265/08 August 2016 09:54

Just back from a holiday in Southern France where I spent much of my time reclining in a cheap, aluminium-framed lounger under an olive tree in the 39° heat, reading a very good book. One day I went down to my tree and found a bizarre creature sitting on the back of my lounger: it was however not any member of the Pokémon family but "merely" a humble cicada. The contrast between a grid of white polythene strips woven on an aluminium frame and the fantastic detail of the creature's eyes and wings could have been an illustration from the book I was reading, "Endless Forms Most Beautiful" by Sean B. Carroll.

Endless Forms is about Evo Devo (Evolutionary Developmental Biology), written by one of its pioneers, making it the most important popular science book since Dawkins' Blind Watchmaker, and in effect a sequel. Dawkins explained the "Modern Synthesis" of evolutionary theory with molecular biology: the structure of DNA, genes and biological information. Carroll explains how embryology was added to this mix, finally revealing the precise mechanisms via which DNA gets transcribed into the structure of actual plants and animals. It's all quite recent - the Nobel (for Medicine) was awarded only in 1995, to Wieschaus, Lewis and Nüsslein-Volhard - and Carroll's book was published in 2006. That I waited 10 years to read it was sheer cowardice, now bitterly regretted, because Carroll makes mind-bendingly complex events marvellously comprehensible. And I'm writing about it here because Evo Devo is all about information, real-time computation and Nature's own 3D printer.

I've written in previous columns about how DNA stores information, ribosomes transcribe some of it into the proteins from which we're built, and how much of it (once thought "junk") is about control rather than substance. Evo Devo reveals just what all that junk really does, and Charles Darwin should have been alive to see it. DNA does indeed encode the information to create our bodies, but genes that encode structural proteins are only a small part of it: most is encoded in switches that get set at "runtime", that is, not at fertilisation of the egg but during the course of embryonic development. These switches get flipped and flopped in real time, by proteins that are expressed only for precise time periods, in precise locations, enabling 3D structures to be built up. Imagine some staggeringly-complex meta-3D printer in which the plastic wire is itself continuously generated by another print-head, whose wire is generated by another, down to ten or more levels: and all this choreographed in real time, so that what gets printed varies from second to second. My cowardice did have some basis.

However the even more stunning conclusion of Evo Devo is that, thanks to such deep indirection and parameterisation, the entire diversity of life on this planet can be generated by a few common genes, shared by everything from bacteria to us. Biologists worried for years that there just isn't enough DNA to encode all the different shapes, sizes, colours of life via different genes, and sure enough there isn't. Instead it's encoded in relatively few common genes that generate proteins that set switches in the DNA during embryonic development, like self-modifying computer code. And evolutionary change mostly comes from mutations in these switches rather than the protein-coding genes. Life is actually like a 3D printer controlled by FPGAs (Field Programmable Gate Arrays) that are being configured by self-modifying code in real time. Those of you with experience of software or hardware design should be boggling uncontrollably by now.

Carroll explains, with copious examples from fruit flies to chickens, frogs and humans, how mutations in a single switch can make the difference between legs and wings, mice and chickens, geese and centipedes. He christens the set of a few hundred common body-building genes, preserved for over 500 million years, the Tool Kit, and the mutable set of switches that make up so much of DNA are the operating system for it. I'll not lie to you: his book is more challenging than Blind Watchmaker (most copies of which remain unread) but if you're at all curious about where we come from it's essential.

But please forgive me if I can't raise much excitement for Pokémon Go. Superimposing two-dimensional cartoon figures onto the real world is a bit sad when you're confronted by real cicadas. Some of these species have evolved unusually long, staggered life cycles to prevent predators relying on them as a food source. To imagine that our technology even approaches the amazing complexity of biological life is purest hubris, an idolatry that mistakes pictures of things for the things themselves, and logic for embodied consciousness. If the urge to "collect 'em all" is irresistible, why not become an entomologist?

Thursday 12 January 2017

NONSENSE AND NONSENSIBILITY

Dick Pountain/Idealog 264/06 July 2016 10:38

When Joshua Brown's Tesla Model S smashed right through a trailer-truck at full speed, back in May in Florida, he paid a terrible price for a delusion that, while it may have arisen first in Silicon Valley, is now rapidly spreading among the world's more credulous techno-optimists. The delusion is that AI is now sufficiently advanced for it to be permitted to drive motor vehicles on public roads without human intervention. It's a delusion I don't suffer from, not because I'm smarter than everyone else, but simply because I live in Central London and ride a Vespa. Here, the thought that any combination of algorithms and sensors short of a full-scale human brain could possibly cope with the torrent of dangerous contingencies one faces is too ludicrous to entertain for even a second - but on long, straight American freeways it could be entertained for a while, to Joshua's awful misfortune.

The theory behind driverless vehicles is superficially plausible, and fits well with current economic orthodoxies: human beings are fallible, distractable creatures whereas computers are Spock-like, unflappable and wholly rational entities that will drive us more safely and hence save a lot of the colossal sums that road accidents cost each year. And perhaps more significant still, they offer to ordinary mortals one great privilege of the super-rich, namely to be driven about effortlessly by a chauffeur.

The theory is, however, deeply flawed, because it inherits the principal delusion of almost all current AI research, namely that human intelligence is based mostly on reason and that emotion is an error condition to be eradicated as far as possible. This kind of rationalism arises quite naturally in the computer business, because it tends to select mathematically-oriented, nerdish character types (like me) and because computers are so spectacularly good, and billions of times faster than us, at logical operations. It is, however, totally refuted by recent findings in both cognitive science and neuroscience. From the former, best expressed by Nobel laureate Daniel Kahneman, we learn that the human mind mostly operates via quick, lazy, often systematically-wrong assumptions, and has to be forced, kicking and screaming, to apply reason to any problem. Despite this we cope fairly well and the world goes on. When we do apply reason, it as often as not achieves the opposite of our intention, because of the sheer complexity of the environment and our lack of knowledge of all its boundary conditions.

Does that make me a crazy irrationalist who believes we're powerless to predict anything and despises scientific truth? On the contrary. Neuroscience offers explanations for Kahneman's findings (which were themselves the result of decades of rigorous experiment). Our mental processes are indeed split, not between logic and emotion as hippy gurus once had it, but between novelty and habit. Serious new problems can indeed invoke reason, perhaps even with recourse to written science, but when a problem recurs often enough we eventually store an heuristic approximation of its solution as "habit", which doesn't require fresh thought every time. It's like a stored database procedure, a powerful kind of time-saving compression without which civilisation could never have arisen. Throwing a javelin, riding a horse, driving a car, greeting a colleague: all habits, all fit for purpose most of the time.
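In programming terms that habit mechanism looks a lot like memoisation: reason a problem through slowly once, cache the answer, and thereafter reuse it without fresh thought. A minimal Python sketch of the analogy, in which deliberate_reasoning is just a stand-in for slow, effortful thinking:

```python
from functools import lru_cache
import time

def deliberate_reasoning(situation: str) -> str:
    """Stand-in for slow, effortful, novel problem-solving."""
    time.sleep(1)                       # pretend this costs real mental effort
    return f"worked-out response to {situation!r}"

@lru_cache(maxsize=None)                # the "habit" store
def respond(situation: str) -> str:
    return deliberate_reasoning(situation)

respond("junction ahead, lights red")   # slow: reasoned through from scratch
respond("junction ahead, lights red")   # instant: the cached habit answers
```

Like any cache, of course, the stored answer is only as good as the new situation's resemblance to the one that produced it, which is exactly the "fit for purpose most of the time" caveat.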

Affective neuroscience, by studying the limbic system, seat of the emotions, throws more light still. Properly understood, emotions are automatic brain subsystems which evolved to deal rapidly with external threats and opportunities by modifying our nervous system and body chemistry (think fight-or-flight, mating, bonding). What we call emotions are better called feelings: our perceptions of these bodily changes rather than the chemical processes that caused them. Evidence is emerging, from the work of Antonio Damasio and others, that our brains tag each memory they deposit with the emotional state prevailing at the time. Memories aren't neutral but have "good" or "bad" labels, which get weighed in the frontal cortex whenever memories are recalled to assist in solving a new problem. In other words, reason and emotion are completely, inextricably entangled at a physiological level. This mechanism is deeply involved in learning (reward and punishment, dopamine and adrenalin), and even in perception itself. We don't see the plain, unvarnished world but rather a continually-updated model in our brain that attaches values to every object and area we encounter.
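A crude way to picture that tagging, for the programmers in the audience, is a memory store in which every episode carries a signed valence laid down at the time, and appraisal of a new situation simply weighs the tags of whatever memories it calls up. The episodes and numbers below are invented purely for illustration; this is a caricature of the general idea, not of Damasio's actual models:

```python
# Toy valence-tagged memory: each stored episode carries a "good"/"bad"
# label (a signed number) recorded when it was laid down. Appraising a
# new situation averages the labels of the memories it recalls.
memories = [
    {"cue": "narrow gap between truck and bus", "valence": -0.9},
    {"cue": "narrow gap in slow traffic",       "valence": +0.2},
    {"cue": "overtaking on an open road",       "valence": +0.6},
]

def appraise(situation: str) -> float:
    """Positive result suggests 'go ahead', negative suggests 'balk'."""
    words = set(situation.split())
    recalled = [m for m in memories if words & set(m["cue"].split())]
    if not recalled:
        return 0.0                      # no prior feeling either way
    return sum(m["valence"] for m in recalled) / len(recalled)

print(appraise("narrow gap between truck and bus"))   # negative overall: balk
```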

This is what makes me balk before squeezing my Vespa between that particular dump-truck and that particular double-decker bus, and what would normally tell you not to watch Harry Potter while travelling at full speed on the freeway. But it's something no current AI system can duplicate and perhaps never will: that would involve driverless vehicles being trained for economically-unviable periods using value-aware memory systems that don't yet exist.

ASSAULT AND BATTERY

Dick Pountain/Idealog 263/07 June 2016 11:26

Batteries, doncha just hate them? For the ten thousandth time I forgot to plug in my phone last night so that when I grabbed it to go out it was dead as the proverbial and I had to leave it behind on charge. My HTC phone's battery will actually last over two days if I turn off various transceivers but life is too short to remember which ones. And phones are only the worst example, to the extent that I now find myself consciously trying to avoid buying any gadget that requires batteries. I do have self-winding wristwatches, but as a non-jogger I'm too sedentary to keep them wound and they sometimes stop at midnight on Sunday (tried to train myself to swing my arms more when out walking, to no effect). I don't care for smartwatches but I did recently go back to quartz with a Bauhaus-stylish Braun BN0024 (design by Dietrich Lubs) along with a whole card-full of those irritating button batteries bought off Amazon that may last out my remaining years.

It's not just personal gadgets that suffer from the inadequacy of present batteries: witness the nightmarish problems that airliner manufacturers have had in recent years with in-flight fires caused by the use of lithium-ion cells. It's all about energy density, as I wrote in another recent column (issue 260). We demand ever more power while away from home, and that means deploying batteries that rely on ever more energetic chemistries, which begin to approach the status of explosives. I'm sure it's not just me who feels a frisson of anxiety when I feel how hot my almost-discharged tablet sometimes becomes.

Wholly new battery technologies look likely in future, perhaps non-chemical ones that merely store power drawn from the mains into hyper-capacitors fabricated using graphenes. Energy is still energy, but such ideas raise the possibility of lowering energy *density* by spreading charge over larger volumes - for example by building the storage medium into the actual casing of a gadget using graphene/plastic composites. Or perhaps hyper-capacitors might constantly trickle-charge themselves on the move by combining kinetic, solar and induction sources.

As always Nature found its own solution to this problem, from which we may be able to learn something, and it turns out that distributing the load is indeed it. Nature had an unfair advantage in that its design and development department has employed every living creature that's ever existed, working on the task for around 4 billion years, but intriguingly that colossal effort came up with a single solution very early on that is still repeated almost everywhere: the mitochondrion.

Virtually all the cells of living things above the level of bacteria contain both a nucleus (the cell's database of DNA blueprints from which it reproduces and maintains itself) and a number of mitochondria, the cell's battery chargers, which power all its processes by burning glucose to create adenosine triphosphate (ATP), the cellular energy fuel. Mitochondria contain their own DNA, separate from that in the nucleus, leading evolutionary biologists to postulate that billions of years ago they were independent single-celled creatures who "came in from the cold" and became symbiotic components of all other cells. Some cells, like red blood cells, simple containers for haemoglobin, contain no mitochondria, while others, like liver cells, which are chemical factories, contain thousands. Every cell is in effect its own battery, constantly recharged by consuming oxygen from the air you breathe and glucose from the food you eat to drive these self-replicating chargers, the mitochondria.

So has Nature also solved the problems of limited battery lifespan and loss of efficiency (the "memory effect")? No it hasn't, which is why we all eventually die. However longevity research is quite as popular among the Silicon Valley billionaire digerati as are driverless cars and Mars colonies, and recent years have seen significant advances in our understanding of mitochondrial aging. Enzymes called sirtuins stimulate production of new mitochondria and maintain existing ones, while each cell's nucleus continually sends "watchdog" signals to its mitochondria to keep them switched on. The sirtuin SIRT1 is crucial to this signalling, and in turn requires NAD (nicotinamide adenine dinucleotide) for its effect, but NAD levels tend to fall with age. Many of the tricks shown to slow aging in lab animals - calorie-restricted diets, dietary components like resveratrol (red wine) and pterostilbene (blueberries) - may work by encouraging the production of more NAD.

Now imagine synthetic mitochondria, fabricated from silicon and graphene by nano-engineering, millions of them charging a hyper-capacitor shell by burning a hydrocarbon fuel with atmospheric oxygen. Yes, you'll simply use your phone to stir your tea, with at least one sugar. I await thanks from the sugar industry for this solution to its current travails...

ALGORITHMOPHOBIA

Dick Pountain/Idealog 262/05 May 2016 11:48

The ability to explain algorithms has always been a badge of nerdhood, the sort of trick people would occasionally ask you to perform when conversation flagged at a party. Nowadays however everyone thinks they know what an algorithm is, and many people don't like them much. Algorithms seem to have achieved this new familiarity/notoriety because of their use by social media, especially Google, Facebook and Instagram. To many people an algorithm implies the computer getting a bit too smart, knowing who you are and hence treating you differently from everyone else - which is fair enough as that's mostly what they are supposed to be for in this context. However what kind of distinction we're talking about does matter: is it showing you a different advert for trainers from your dad, or is it selecting you as a target for a Hellfire missile?

Some newspapers are having a ball with algorithm as synonym for the inhumane objectivity of computers, liable to crush our privacy or worse. Here are two sample headlines from the Guardian over the last few weeks: "Do we want our children taught by humans or algorithms?", and "Has a rampaging AI algorithm really killed thousands in Pakistan?" Even the sober New York Times deemed it newsworthy when Instagram adopted an algorithm-based personalized feed in place of its previous reverse-chronological feed (a move made last year by its parent Facebook).

I'm not algorithmophobic myself, for the obvious reason that I've spent years using, analysing, even writing a column for Byte about the darned things, but this experience grants me a more-than-average awareness of what algorithms can and can't do, where they are appropriate and what the alternatives are. What algorithms can and can't do is the subject of Algorithmic Complexity Theory, and it's only at the most devastatingly boring party that one is likely to be asked to explain that. ACT can tell you about whole classes of problem for which algorithms that run in manageable time aren't available. As for alternatives to algorithms, the most important is permitting raw data to train a neural network, which is the way the human brain appears to work: the distinction being that writing an algorithm requires you to understand a phenomenon sufficiently to model it with algebraic functions, whereas a neural net sifts structures from the data stream in an ever-changing fashion, producing no human-understandable theory of how that phenomenon works.

Some of the more important "algorithms" that are coming to affect our lives are actually more akin to the latter, applying learning networks to big data sets like bank transactions and supermarket purchases to determine your credit rating or your special offers. However, those algorithms that worry people most tend not to be of that sort, but are algebraically based, measuring multiple variables and applying multiple weightings to them to achieve an ever greater appearance of "intelligence". They might even contain a learning component that explicitly alters weightings on the fly, Google's famous PageRank algorithm being an example.
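To give a flavour of what "measuring multiple variables and applying multiple weightings" looks like in practice, here is a stripped-down, textbook-style PageRank in Python: each page's score is a damped, weighted sum of the scores of the pages that link to it, iterated until it stabilises. Google's production algorithm layers far more on top of this, and the four-page "web" below is made up:

```python
# Textbook-style PageRank over an invented four-page web: iterate a
# damped, weighted redistribution of scores along the link graph.
links = {            # page -> the pages it links out to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank(links))   # C ends up highest: every other page points at it
```

Tweak the damping factor or the link graph and the ordering shifts, which is exactly why every adjustment Google makes ripples out to the small online businesses mentioned below.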

The advantage of such algorithms is that they can be tweaked by human programmers to improve them, though this too can be a source of unpopularity: every time Google modifies PageRank a host of small online businesses catch it in the neck. Another disadvantage of such algorithms is that they can "rot" by decreasing rather than increasing in performance over time, prime examples being Amazon's you-might-also-like and Twitter's people-you-might-want-to-follow. A few years ago I was spooked by the accuracy of Amazon's recommendations, but that spooking ceased after it offered me a Jeffrey Archer novel: likewise when Twitter thought I might wish to follow Jimmy Carr, Fearne Cotton, Jeremy Clarkson and Chris Moyles.

Flickr too employs a secret algorithm to measure the "Interestingness" of my photographs: number of views is one small component, as is the status of the people who favourited it (not unlike PageRank's incoming links), but many more variables remain a source of speculation in the forums. I recently viewed my Top 200 pictures by Interestingness for the first time in ages and was pleasantly surprised to find the algorithm much improved. My Top 200 now contains more manipulated than straight-from-camera pictures; three of my top twenty are from recent months and most are from the last year; all 200 are pix I'd have chosen myself; and their order is quite different from "Top 200 ranked by Views", that is, from what other users prefer. Since I take snapshots mostly as raw material for manipulation, the algorithm now suggests that I'm improving rather than stagnating, and it closely approximates my own taste, which I find both remarkable and encouraging. The lesson? Good algorithms in appropriate contexts are good, bad algorithms in inappropriate contexts are bad. But you already knew that, didn't you...

SOCIAL UNEASE

Dick Pountain/Idealog 350/07 Sep 2023 10:58

Ten years ago this column might have listed a handful of online apps that assist my everyday...