Sunday 4 October 2020

HEAD IN THE CLOUDS

 Dick Pountain/Idealog 310/13:53 9 May 2020

When a couple of issues ago Editor Tim asked for tips for newly-working-at-home readers, mine was 'buy a Chromebook', which forced me to face up to how far I've drifted from the original Personal Computer Revolution. That was about everyone having their own CPU and their own data, but I've sold my soul to Google and I can't say I miss it. When first turned on, my Asus C301 Chromebook sucked all my personal data down automatically within five minutes, because it was all in Google Keep or on Google Drive. I do still have a Lenovo laptop but rarely use it, except via those same Google apps, and I don't miss the excitement of Windows updates one bit.  

My love for Google Keep isn't a secret to readers of this column, and it only grows stronger as new features like flawless voice dictation and pen annotations get added. Remember, I'm someone who spent 30+ years looking for a viable free-form database to hold all the research data - magazine articles, pictures, diagrams, books, papers, web pages, links - that my work makes me accumulate. The task proved beyond any of the database products I tried, with Idealist, AskSam and the Firefox add-on Scrapbook lasting longer than most. Those with long memories might recall how Microsoft promised to put the retrieval abilities I need right into Windows itself, via an object-oriented file-system that they eventually chickened out of. 

Keep's combination of categories, labels, colour coding and free text search gives me the flexible retrieval system I've been seeking, though it still isn't quite enough on its own: while it can hold pictures and clickable links they're not so convenient as actual web pages. For a couple of decades I religiously bookmarked web pages, until my bookmarks tree structure became just as unwieldy as my on-disk folders. Nowadays I just save pages to Pocket, which is by far the most useful gadget I have after Keep. A single click on Pocket's icon on the Chrome toolbar grabs a page, fully formatted complete with pictures and a button to go to the original if needed, making bookmarks redundant. I use the free version which supports tags similar to Keep's labels, but there's a paid-for Premium version with a raft of extra archival features for professional use. And like Keep, Pocket is cross-platform so I can see my page library from Windows or a phone. 

Does the cloud make stuff easier to find? Within reason, yes. Save too many pages to Pocket and, as with bookmarks, you've merely shifted the complexity rather than removing it. Sometimes I fail to save something that didn't feel important at the time, then discover months later that it was, and Chrome's history function comes in handy then. I use it most to re-open recent tabs closed by mistake (I have an itchy trigger-finger) but by going to https://myactivity.google.com/ I can review searches years into the past, if I can remember at least one key word. Failing that, it's plain Google Search or the Internet Archive's Wayback Machine, recently released as a Chrome extension.

My music nowadays comes entirely from Spotify, end of. My own photographs remain the main problem. I take thousands and store them both in the cloud and on local hard disk, organised by camera (eg. Sony A58, Minolta, Lumix), then location (eg. Park, Italy, Scotland). I've tried those dedicated photo databases that organise by date, but find them of very little help: place reminds me far more effectively than time. My best pictures still go onto Flickr, tagged very thoroughly to exploit its rather superior search functions (it can even search by dominant colour!). Pictures I rate less Flickr-worthy I sometimes put on Facebook in themed Albums, which also helps to find them. The technology does now exist to search by image-matching, but that's mostly used by pros who need to spot theft or plagiarism. I can only express what I'm looking for in words, like 'Pip fixing the Gardner diesel engine'.  

What's required is a deep-AI analysis tool that can facially identify humans from their mugshots in my Contacts, recognise objects like tables, chairs or engines, OCR any text in a picture (like 'Gardner' embossed on a cylinder block) and then output its findings as searchable text tags. It wouldn't surprise me if some Google lab is working on it. I do realise that were Google to go bust, or the internet to close down, I'd be stuck with local data again, but if things get that bad then foraging for rats and cats to eat will probably be a higher priority. Right now my tip would still be: keep your feet on the ground and your data in the cloud(s)...
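A crude approximation of that pipeline can already be bodged together from open-source parts. The sketch below is illustrative only - the library choices (pytesseract for the OCR step, the face_recognition package for matching against Contacts mugshots) are stand-ins, not anything Google has announced - and it simply turns one photo into plain text tags that something like Keep or Flickr could then search:

```python
# Illustrative only: approximate the wished-for tagger with two open-source
# libraries -- pytesseract (OCR) and face_recognition (face matching).
from PIL import Image
import pytesseract              # needs the Tesseract binary installed
import face_recognition

def tag_photo(photo_path, known_faces):
    """Return searchable text tags for one photo.

    known_faces: dict mapping a contact's name to a pre-computed face encoding.
    """
    tags = set()

    # 1. OCR any printed or embossed text ('Gardner' on a cylinder block).
    for word in pytesseract.image_to_string(Image.open(photo_path)).split():
        if len(word) > 3:
            tags.add(word)

    # 2. Match any faces found against the known mugshots.
    image = face_recognition.load_image_file(photo_path)
    for encoding in face_recognition.face_encodings(image):
        for name, known_encoding in known_faces.items():
            if face_recognition.compare_faces([known_encoding], encoding)[0]:
                tags.add(name)

    return sorted(tags)
```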

Saturday 12 September 2020

A BATTLE OF OLOGIES

 Dick Pountain/ Idealog309/ 4th April 2020 08:01:58


Stewart Brand’s famous epigram “Information wants to be free” has been our collective motto for the last three decades, but few of us remember that it wasn’t an isolated phrase and was accompanied by, for example, “The right information in the right place just changes your life”. During our current nightmare we’re learning that here in the world of matter many other things want to be free that shouldn’t, like serial killers and mosquitoes and viruses, and that controlling information about them has become critical.


Across the world governments and their health organisations are trying to cope with the COVID-19 pandemic but discovering that in the age of TV news and social media it’s impossible to hide anything. We’re witnessing a war develop between two ‘ologies’, epidemiology and social psychology. The coronavirus has very particular characteristics that seem designed by some malevolent deity to test the mettle of pampered citizens of post-modern information societies. There’s a malign cascade of statistical percentages. Some experts estimate that if we do nothing, around 60% of the world population would catch it – roughly the odds of a coin toss – Bad News. But if you do catch it, there’s an 80% chance you’ll experience nothing worse than a cold – Good News. But of the 20% who do get worse symptoms, anywhere from 1 to 5% will die – roughly the odds of Russian Roulette – Bad, Bad News. The virus is highly contagious and thus prone to spread exponentially, but not so lethal as to be self-limiting like Ebola or Marburg.
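Multiplying those illustrative percentages together shows what the cascade amounts to; this back-of-envelope sketch uses only the figures quoted above, not real epidemiological estimates:

```python
# Back-of-envelope arithmetic with the column's illustrative figures only.
infected = 0.60                 # 'do nothing' share of the population infected
worse_than_a_cold = 0.20        # the 20% with more serious symptoms
for fatality in (0.01, 0.05):   # 'anywhere from 1 to 5%' of that group
    overall = infected * worse_than_a_cold * fatality
    print(f"{overall:.2%} of the whole population")   # 0.12% to 0.60%
```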


These facts have turned two issues into political dynamite, namely ‘herd immunity’ and virus testing. An exponential virus epidemic spreads rather like a nuclear fission chain reaction, and the way to control it is the same – by introducing a moderator that can reduce the flow of neutrons or virus particles between successive targets. In a viral pandemic herd immunity – that is, many people getting it, surviving and becoming immune – is the best such moderator, and is what happens most often. An effective vaccine is a catalyst that spreads such immunity more quickly and reliably. The problem is that unlike uranium atoms, human beings have minds, and the effect on those minds is nowadays more important than cold percentages.
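The moderator analogy can be put into rough numbers: every already-immune contact is a wasted transmission, so the effective reproduction number falls in proportion to the immune fraction. The sketch below assumes an illustrative R0 of 3, not a measured figure for this coronavirus:

```python
# Herd immunity as moderator: each immune contact absorbs a transmission,
# like a moderator absorbing neutrons. R0 = 3 is purely illustrative.
def r_effective(r0, immune_fraction):
    return r0 * (1 - immune_fraction)

r0 = 3.0
threshold = 1 - 1 / r0                    # growth stops once ~67% are immune
for immune in (0.0, 0.3, threshold, 0.8):
    print(f"immune {immune:.0%}: R_eff = {r_effective(r0, immune):.2f}")
```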


The measures that are being taken by most governments are self-quarantine and social distancing (avoiding contact with other people) which act to moderate the rate of spread, in order to avoid swamping health systems with too many critical cases at once. In most countries these measures depend upon the voluntary cooperation of citizens. There are already tests for the presence of live virus, and there will soon be reliable tests for antibodies in survivors, but there’s controversy over how widely to apply them.


Epidemiologists need good data to check whether isolation measures are working, to get an accurate picture of the lethality rate, to model the spread and calculate how best to allocate finite palliative care resources. And, as American professor Zeynep Tufekci points out in The Atlantic magazine (https://www.theatlantic.com/technology/archive/2020/04/coronavirus-models-arent-supposed-be-right/609271/), whenever a government acts upon the recommendations of the modellers, those actions change the model.


But citizens would very much like to know whether they have caught the virus and need urgent treatment, or had the mild form and are immune. It would technically be possible to test the whole population, but it’s neither economically nor politically sensible. The cost would be enormous, and it would conflict with the principle of isolation if people had to travel to test centres, but be impractical if testing vans had to visit every cottage in the Hebrides. Also no tests are perfect, and both false positives and negatives could have unpleasant consequences. So mass testing isn’t feasible and would be a poor use of scarce resources – but even if it were possible it might still be counter-productive. Once everyone who’s had mild COVID-19 and achieved ‘herd immunity’ knows that for sure, their incentive to continue with isolation might fade away, and worse still they might come to resent those who require it to be continued.


Masks are another psychological issue: medical opinion is that cheap ones aren’t effective and that good ones are only needed by those who deal directly with infected patients. But wearing even a cheap ineffective mask makes a social statement: actually two statements, “I care about me” and “I care about you” (which of the two predominates in each case becomes obvious from other body language).


Perhaps the best we can conclude is that total freedom of information isn’t always a good thing in emergencies like this, but that social media make it hard to avoid. We’re slipping into the realm of Game Theory, not epidemiology.


Monday 24 August 2020

VIRTUALLY USELESS

Dick Pountain/ Idealog308/ 6th March 2020 10:58:31


Online booking is one of the more noticeable ways computer technology has changed our lives for the better. See an advert for a concert, play or movie – maybe on paper, maybe on Facebook – and in a few keystrokes you can have e-tickets, removing even the terrible chore of having to collect them from the box-office (and, yes, I do know just barking at Alexa to do it is quicker still, but have decided not to go there). However online booking has also shown me some of the limitations of the virtual world. For example – admittedly ten years ago – EasyJet’s website was once so laggy that I thought I was clicking August 6th, but the drop-down hadn’t updated so the tickets were for the 5th. Ouch.  

More recently I booked a ticket for a favourite concert this March, only to discover that it’s actually in March 2021, not 2020. OK, that’s my fault for misreading, but the ad was embedded among a bunch of other concerts that are in March 2020. Another example: last week I read a Facebook share from a close friend that appeared to be a foul-mouthed diatribe against atheism. I was somewhat surprised, even shocked by this, but fortunately I clicked it to reveal a longer meme that, further down, refuted the diatribe. Facebook wouldn’t scroll down because it was a graphic, not a text post. 

These are all symptoms of the wider cognitive weakness of two-dimensional online interfaces, which is leading some people to call for a return to paper for certain activities, including education. The problem is all about attention. Visual UIs employ the metaphor of buttons you press to perform actions, directing your attention to their important areas. But this has a perverse side-effect: the more you use them and the more expert you become, the less you notice their less-important areas (like that year, which wasn’t on a button).

Virtualising actions devalues them. Pressing a physical button – whether to buy a bar of chocolate or to launch a nuclear missile – used to produce a bodily experience of pressure and resistance, which led to an anticipation of the effect, which led to motivation. Pressing on-screen buttons creates no such effect, so one may aimlessly press them in reflex fashion (hence the success of some kinds of phishing attack). 

It’s more than coincidence that 'Cancel Culture' has arisen in the age of social media and smartphones. This refers to a new style of boycott where some celebrity who’s expressed an unpopular opinion on social media gets "cancelled", that is dropped by most of their followers, which can lead to a steep decline in their careers. But of course ‘Cancel’ is the name of that ubiquitous on-screen button you hit without thinking when something goes wrong, hence this extension to remove real people by merely saying the word. 

Reading text on a screen versus paper also reveals a weakness: that less goes in and less is remembered has been demonstrated by cognitive science experiments, and this is truer still when questions are asked or problems set in school or college. Receiving such requests from a human being produces a motivation to answer that’s quite absent in onscreen versions: according to cognitive psychologist Daniel Willingham, “It’s different when you’re learning from a person and you have a relationship with that person. That makes you care a little bit more about what they think, and it makes you a little bit more willing to put forth effort”.

Virtualisation also encourages excessive abstraction – trying to abstract 'skills' from particular content tends to make content seem arbitrary. Cognitive scientists have long known that what’s most important for reading comprehension isn’t some generally applicable skill but rather how much background knowledge and vocabulary the reader has relating to the topic. Content does matter and can’t be abstracted away, and ironically enough computers are the perfect tools for locating relevant content quickly, rather than tools to train you in abstract comprehension skills. Whenever I read mention of some famous painting I go straight to Google Images to see it, ditto with Wikipedia for some historical event. We’re getting pretty close to Alan Kay's vision of the Dynabook.

It will be interesting to see what these cognitive researchers make of voice-operated interfaces like Alexa. Are they Artificially Intelligent enough to form believable relationships that inspire motivation? Sure, people who’re used to Alexa (or SatNav actors) do sort of relate to them, but it’s still mostly one-way, like “Show me the Eiffel Tower” – they’re no good at reasoning or deducing. And voice as a delivery mechanism would feel like a step backward into my own college days, trying frantically to scribble notes from a lecturer who gabbled…

[Dick Pountain always notices the ‘Remember Me’ box three milliseconds after he’s hit Return]

THE NUCLEAR OPTION

 Dick Pountain/ Idealog307/ 8th February 2020 14:49:23


Those horrific wild-fires in Australia may prove to be the tipping point that gets people to start taking the threat of climate change seriously. Perhaps IT isn’t, at the moment, the industry most responsible for CO₂ emissions, but that’s no reason for complacency. On the plus side IT can save fossil fuel usage, when people email or teleconference rather than travelling: on the minus side, the electrical power consumed by all the world’s social media data centres is very significant and growing (not to mention what’s scoffed up mining cryptocurrencies). IT, along with carbon-reducing measures like switching to electric vehicles, vastly increases the demand for electricity, and I’m not confident that all this demand can realistically be met by renewable solar, wind and tidal sources, which may have now become cheap enough but remain intermittent. 

That means that either storage, or some alternative back-up source, is needed to smooth out supply. A gigantic increase in the capacity of battery technologies could bridge that gap, but nothing on a big enough scale looks likely (for reasons I’ve discussed in a previous column). For that reason, and unpopular though it may be, I believe we must keep some nuclear power. That doesn’t mean I admire the current generation of fission reactors, which became unpopular for very good reasons: the huge cost of building them; the huge problem of disposing of their waste; and worst of all, because we’ve realised that human beings just aren’t diligent enough to be put in charge of machines that fail so unsafely. There are other nuclear technologies, though, that don’t share these drawbacks, but they haven’t yet been sufficiently researched to get into production.

For about 50 years I’ve been hopeful for nuclear fusion (and like all fusion fans have been perennially disappointed). However things now really are looking up, thanks to two new lines of research: self-stable magnetic confinement and alpha emission. The first dispenses with those big metal doughnuts and their superconducting external magnets, and replaces them with smoke-rings - rapidly spinning plasma vortices that generate their own confining magnetic field. The second, pioneered by Californian company TAE Technologies, seeks to fuse ordinary hydrogen with boron to generate alpha particles (helium nuclei), instead of fusing deuterium and tritium to produce neutrons. Since alpha particles, unlike neutrons, are electrically charged, they can directly induce current in an external conductor without leaving the apparatus. Neutrons must be absorbed into an external fluid to generate heat, which then drives a turbine, but in the process they render the fabric of the whole reactor radioactive, which the alpha route does not.

The most promising future fission technology is the thorium reactor, in which fission takes place in a molten fluoride salt. Such reactors can be far smaller than uranium ones, small enough to be air-cooled, they produce almost no waste, and they fail safe because fission fizzles out rather than runs wild if anything goes wrong. Distributed widely as local power stations, they could replace the current big central behemoths. That they haven’t caught on is partly due to industry inertia, but also because they currently still need a small amount of uranium 233 as a neutron source, which gets recycled like a catalyst. But now a team of Russian researchers are proposing a hybrid reactor design in which a deuterium-tritium fusion plasma, far too small to generate power itself, is employed instead of uranium to generate the neutrons to drive thorium fission.

A third technology I find encouraging isn’t a power source, but might just revolutionise power transmission. The new field of ‘twistronics’ began in 2018 when an MIT team led by Pablo Jarillo-Herrero announced a device consisting of two layers of graphene stacked one upon the other, which becomes superconducting if those layers are very slightly twisted to create a moiré pattern between their regular grids of carbon atoms. When you rotate the top layer by exactly 1.1° from the one below, it seems that electrons travelling between the layers are slowed down sufficiently that they pair up to form the superconducting ‘fluid’, and this happens at around 140 K, way warmer than liquid helium and around halfway to room temperature. Twisted graphene promises a new generation of tools for studying the basis of superconduction: you’ll be able to tweak a system’s properties more or less by turning a knob, rather than having to synthesise a whole new chemical. Such tools should help speed the search for the ultimate prize, a room-temperature superconductor. That’s what we need to pipe electricity generated by solar arrays erected in the world’s hot deserts into our population centres with almost no loss. Graphene itself is unlikely to be such a conductor, but it may be what helps to discover one.  

[ Dick Pountain ain’t scared of no nucular radiashun]      

TO A DIFFERENT DRUMMER

 Dick Pountain/ Idealog 306/  January 6th 2020

My Christmas present to myself this year was a guitar, an Ibanez AS73 Artcore. This isn't meant to replace my vintage MIJ Strat but rather to complement it in a jazzier direction. 50-odd years ago I fell in love with blues, ragtime and country finger-picking, then slowly gravitated toward jazz via Jim Hall and Joe Pass, then to current Americana-fusionists like Bill Frisell, Charlie Hunter and Julian Lage (none of whom I'm anywhere near skillwise). It's a long time since I was in a band and I play mostly for amusement, but I can't escape the fact that all those idols work best in a trio format, with drums and bass. My rig does include a Hofner violin bass, a drum machine and a looper pedal to record and replay accompaniments, and I have toyed with apps like Band-in-a-Box, or playing along to Spotify tracks, but find none of these really satisfactory -- too rigid, no feedback. Well, I've mentioned before in this column my project to create human-sounding music by wholly programmatic means. The latest version, which I've named 'Algorhythmics', is written in Python and is getting pretty powerful. I wonder, could I use it to write myself a robot trio?

Algorhythmics starts out using native MIDI format, by treating pitch, time, duration and volume data as four separate streams, each represented by a list of ASCII characters. In raw form this data just sounds like a hellish digital musical-box, and the challenge is to devise algorithms that inject structure, texture, variation and expression. I've had to employ five levels of quasi-random variation to achieve something that sounds remotely human. The first level composes the data lists themselves by manipulating, duplicating, reversing, reflecting and seeding with randomness. The second level employs two variables I call 'arp' (for arpeggio) and 'exp' (for expression) that alter the way notes from different MIDI tracks overlap, to control legato and staccato. A third level produces tune structure by writing functions called 'motifs' to encapsulate short tune fragments, which can then be assembled like Lego blocks into bigger tunes with noticeably repeating themes. Motifs alone aren't enough though: if you stare at wallpaper with a seemingly random pattern, you'll invariably notice where it starts to repeat, and the ear has this same ability to spot (and become bored by) literal repetition. Level four has a function called 'vary' that subtly alters the motifs inside a loop at each pass, and applies tables of chord/scale relations (gleaned from online jazz tutorials and a book on Bartok's composing methods) to harmonise the fragments. Level five is the outer loop that generates the MIDI output, in which blocks of motifs are switched on and off under algorithmic control, like genes being expressed in a string of DNA.
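By way of illustration only -- this is not the Algorhythmics source, just a toy Python sketch of levels three and four -- motifs are chained like Lego blocks while a little seeded randomness is applied on each pass, so the repetition is never quite literal:

```python
import random

# Toy sketch of the motif-and-vary idea: motifs assembled into a line, with
# 'vary' nudging them on every pass. Not the real Algorhythmics code.
random.seed(310)                              # repeatable 'takes'
C_MINOR = [60, 62, 63, 65, 67, 68, 70, 72]    # one octave of MIDI note numbers

def motif(degrees):
    """Turn a short list of scale degrees into MIDI pitches."""
    return [C_MINOR[d % len(C_MINOR)] for d in degrees]

def vary(notes):
    """Occasionally shift a note one scale step up or down."""
    out = []
    for n in notes:
        if random.random() < 0.3:
            i = C_MINOR.index(n)
            n = C_MINOR[max(0, min(len(C_MINOR) - 1, i + random.choice((-1, 1))))]
        out.append(n)
    return out

a, b = motif([0, 2, 4, 2]), motif([7, 5, 4, 2])
line = []
for _ in range(4):                            # repeat with variation, never literally
    line += vary(a) + vary(b)
print(line)
```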

So my robot jazz trio is a Python program called TriBot that generates improvised MIDI accompaniments -- for Acoustic Bass and General MIDI drum kit -- and plays them into my Marshall amplifier. The third player is of course me, plugged in on guitar. The General MIDI drum kit feels a bit too sparse, so I introduced an extra drum track using ethnic instruments like Woodblock, Taiko Drum and Melodic Tom. TriBot lets me choose tempo, key, and scale (major, minor, bop, blues, chromatic, various modes) through an Android menu interface, and my two robot colleagues will improvise away until halted. QPython lets me save new TriBot versions as clickable Android apps, so I can fiddle with its internal works as ongoing research.
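The MIDI-file end of such a program is the straightforward part. As a sketch only, using the third-party midiutil package as a stand-in (not necessarily what TriBot actually uses), a bare-bones bass-and-drums backing track might be written out like this, channel 9 being the General MIDI percussion channel:

```python
import random
from midiutil import MIDIFile        # pip install MIDIUtil -- a stand-in library choice

TEMPO, BARS = 120, 4
random.seed(1)

mid = MIDIFile(2)                    # track 0 = acoustic bass, track 1 = drums
mid.addTempo(track=0, time=0, tempo=TEMPO)
mid.addProgramChange(0, channel=0, time=0, program=32)    # GM Acoustic Bass

C_BLUES = [36, 39, 41, 42, 43, 46]   # C blues scale, MIDI note numbers

for beat in range(BARS * 4):
    # Walking-ish bass: one quasi-random scale note per beat on channel 0.
    mid.addNote(0, 0, random.choice(C_BLUES), time=beat, duration=1, volume=90)
    # Drums live on channel 9: hi-hat every beat, kick on 1 and 3, snare on 2 and 4.
    mid.addNote(1, 9, 42, time=beat, duration=1, volume=70)       # closed hi-hat
    mid.addNote(1, 9, 35 if beat % 2 == 0 else 38,                # kick / snare
                time=beat, duration=1, volume=100)

with open("tribot_backing.mid", "wb") as f:
    mid.writeFile(f)
```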

It's still only a partial solution, because although drummer and bass player 'listen' to one another -- they have access to the same pitch and rhythm data -- they can't 'hear' me and I can only follow them. In one sense this is fair enough, as it's what I'd experience playing alongside much better live musicians. At brisk tempos TriBot sounds like a Weather Report tribute band on crystal meth, which makes for a good workout. But my ideal would be what Bill Frisell described in this 1996 interview with a Japanese magazine (https://youtu.be/tKn5VeLAz4Y, at 47:27): a trio that improvises all together, leaving 'space' for each other. That's possible in theory, using a MIDI guitar like a Parker or a MIDI pickup for my Artcore. I'd need to make TriBot work in real-time -- it currently saves MIDI to an intermediate file -- then merge in my guitar's output translated back into Algorhythmic data format, so drummer and bass could 'hear' me too and adjust their playing to fit. A final magnificent fantasy would be to extend TriBot so it controlled an animated video of cartoon musicians. I won't have sufficient steam left to do either; maybe I'll learn more just trying to keep up with my robots... 

[ Dick Pountain recommends you watch this 5 minute video, https://youtu.be/t-ReVx3QttA, before reading this column ]


FIRST CATCH YOUR GOAT

Dick Pountain/ Idealog 305/Dec 5th 2019

In a previous column I’ve confessed my addiction to exotic food videos on YouTube, and one I watched recently sparked off thoughts that went way beyond the culinary. Set in the steppes of Outer Mongolia, it follows a charming-but-innocent young Canadian couple who travel to witness an ancient, now rare, cooking event they call 'Boodog' - though it doesn't contain any dog and is probably a poor transliteration. The Mongolian chef invited the young man to catch a goat, by hand, on foot, which the chef then dispatched quickly and efficiently (to the visible discomfort of the young woman), skinned, taking care to keep all four legs intact, sewed up all the orifices including (to the evident amusement of the same young woman) the 'butthole', and finally inflated it like a balloon.

A very small fire of twigs and brushwood was lit in which to heat smooth river pebbles; goat carcase was chopped up on the bone then stuffed back into skin, along with hot pebbles. Chef then produced a very untraditional propane torch, burned off all the fur and crisped the skin, and the end result, looking like some sinister black modernist sculpture, was carried on a litter into his yurt where they poured out several litres of goat soup and ate the grey, unappetising meat.

Puzzled by the complication of boodogging, I was hardly surprised it's become rare - but then a lightbulb popped on in my head. This wasn’t to do with gastronomy but with energetics. Mongolian steppe soil is only a few inches deep, supporting grass to feed goats and yaks but no trees, hence the tiny fire and hot stones to maximise storage of its heat, and the anachronistic propane torch (which could hardly have roasted the goat). Hot stone cooking is common enough around the Pacific but always in a covered pit, which is impossible to dig in the steppe. These Mongolians had ingeniously adapted to the severe energetic limitations of their environment, sensibly submitting to the Second Law of Thermodynamics by making maximum use of every calorie.

Having just written a column about quantum computing, this little anthropological lesson sparked a most unexpected connection. We all live in a world ruled by the Second Law, and are ourselves, looked at from one viewpoint, simply heat engines. Our planet is roughly in thermal equilibrium, receiving energy as white light and UV from the sun and reradiating it back out into space in the infrared: our current climate panic shows just how delicate this equilibrium is. Incoming sunlight has lower entropy than the outgoing infrared, and on this difference all life depends: plants exploit the entropy gradient to build complex carbohydrates out of CO₂ and water, animals eat plants to further exploit the difference by building proteins, we eat animals and plants and use the difference to make art and science, movies and space shuttles. And when we die the Second Law cheerfully turns us back into CO₂ and ammonia (sometimes accelerated by a crematorium).   

The difficulty of building quantum computers arises from this fact, that quantum computation takes place in a different world that’s governed by the notoriously different rules of quantum mechanics. The tyranny of the Second Law continually tries to disrupt it, in the guise of thermal noise, because any actual quantum computing device must unavoidably be of our world, made of steel and copper and glass, and all the liquid helium in the world can’t entirely hide that fact. 

What also occurred to me is that if you work with quantum systems, it must become terribly attractive to fantasise about living in the quantum world, free from the tyranny of thermodynamics. Is that perhaps why the Multiverse interpretation of quantum mechanics is so unaccountably popular? Indeed, to go much further, is an unconscious awareness of this dichotomy actually rather ancient? People have always chafed at the restrictions imposed by gravity and thermodynamics, have invented imaginary worlds in which they could fly, or live forever, or grow new limbs, or shape-shift, or travel instantaneously, or become invisible at will. Magic, religion, science fiction, in a sense are all reactions against our physical limits that exist because of scale: we’re made of matter that’s made of atoms that obey Pauli’s Exclusion Principle, which prevents us from walking through walls or actually creating rabbits out of hats. Those atoms are themselves made from particles subject to different, looser rules, but we’re stuck up here, only capable of imagining such freedom. 

And that perhaps is why, alongside their impressively pragmatic adaptability, those Mongolian nomads - who move their flocks with the seasons as they’ve done for centuries, but send their children to university in Ulan Bator and enthusiastically adopt mobile phones and wi-fi - also retain an animistic, shamanistic religion with a belief in guardian spirits. 


A QUANTUM OF SOLACE?

 Dick Pountain/ Idealog304/ 3rd Nov 2019

When Google announced, on Oct 24th, that it had achieved 'quantum supremacy' -- that is, had performed a calculation on a quantum computer faster than any conventional computer could ever do -- I was forcefully reminded that quantum computing is a subject I've been avoiding in this column for 25 years. That prompted a further realisation that it's because I'm sceptical of the claims that have been made. I should hasten to add that I'm not sceptical about quantum mechanics per se (though I do veer closer to Einstein than to Bohr, am more impressed by Carver Mead's Collective Electrodynamics than by Copenhagen, and find 'many worlds' frankly ludicrous). Nor am I sceptical of the theory of quantum computation itself, though the last time I wrote about it was in Byte in 1997. No, what I'm sceptical of are the pragmatic engineering prospects for its timely implementation. 

The last 60 years saw our world transformed by a new industrial revolution in electronics, gifting us the internet, the smartphone, Google searches and Wikipedia, Alexa and Oyster cards. The pace of that revolution was never uniform but accelerated to a fantastic extent from the early 1960s thanks to the invention of CMOS, the Complementary Metal-Oxide-Semiconductor fabrication process. CMOS had a property shared by few other technologies, namely that it became much, much cheaper and faster the smaller you made it, resulting in 'Moore's Law', that doubling of power and halving of cost every two years that's only now showing any sign of levelling off.  That's how you got a smartphone as powerful as a '90s supercomputer in your pocket.  CMOS is a solid-state process where electrons whizz around metal tracks deposited on treated silicon, which makes it amenable to easy duplication by what amounts to a form of printing. 

You'll have seen pictures of Google's Sycamore quantum computer that may have achieved 'supremacy' (though IBM is disputing it). It looks more like a microbrewery than a computer. Its 53 working quantum bits are indeed solid state, but they're superconductors that work at microwave frequencies and near absolute zero, immersed in liquid helium. The quantum superpositions upon which computation depends collapse at higher temperatures and in the presence of radio noise, and there's no prospect that such an implementation could ever achieve the benign scaling properties of CMOS. Admittedly a single qubit can in theory do the work of millions of CMOS bits, but the algorithms that need to be devised to exploit that advantage are non-intuitive and opaque, the results of computation are difficult to extract correctly, and they will require novel error-correction techniques that are as yet unknown and may not exist. It's not years but decades, or more, from practicality.

Given this enormous difficulty, why is so much investment going into quantum computing right now? Thanks to two classes of problem that are provably intractable on conventional computers, but of great interest to extremely wealthy sponsors. The first is the cracking of public-key encryption, a high priority for the world's intelligence agencies, which therefore receives defence funds. The second is the protein-folding problem in biochemistry. Chains of hundreds of amino-acids that constitute enzymes can fold and link to themselves in myriad different ways, only one of which will produce the proper behaviour of that enzyme, and that behaviour is the target for synthetic drugs. Big Pharma would love a quantum computer that could simulate such folding in real time, like a CAD/CAM system for designing monoclonal antibodies. 

What worries me is that the hype surrounding quantum computing is of just the sort that's guaranteed to bewitch technologically-illiterate politicians, and it may be resulting in poor allocation of computer science funding. The protein-folding problem is an extreme example of the class of optimisation problems -- others are involved in banking, transport routing, storage allocation, product pricing and so on -- all of which are of enormous commercial importance and have been subject to much research effort. For example, twenty years ago constraint solving was one very promising line of study: when faced with an intractably large number of possibilities, apply and propagate constraints to severely prune the tree of possibilities rather than trying to traverse it all. The promise of quantum computers is precisely that, assuming you could assemble enough qubits, they could indeed just test all the branches, thanks to superposition. In recent years the flow of constraint satisfaction papers seems to have dwindled: is this because the field has struck an actual impasse, or because the chimera of imminent quantum computers is diverting effort? Perhaps a hybrid approach to these sorts of problem might be more productive, say hardware assistance for constraint solving, plus deep learning, plus analog architectures, with shared quantum servers anticipated as one, fairly distant, prospect rather than the only bet.    
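For readers who haven't met it, the pruning idea behind constraint solving is easy to sketch. The toy below -- map colouring via backtracking with forward checking, an illustration rather than any production solver -- strikes values out of neighbours' candidate lists after every choice, so whole branches of the tree are never visited:

```python
# Toy constraint solver: backtracking with forward checking. After each choice
# the candidate lists ('domains') of conflicting variables are pruned, so whole
# branches of the search tree are never explored. Illustrative only.
def solve(domains, conflicts, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        pruned = {v: ([x for x in d if x != value]
                      if (var, v) in conflicts or (v, var) in conflicts else d)
                  for v, d in domains.items()}
        if any(not pruned[v] for v in pruned if v not in assignment and v != var):
            continue                  # a neighbour has no options left: prune this branch
        result = solve(pruned, conflicts, {**assignment, var: value})
        if result:
            return result
    return None

# Three regions, two borders, two colours: A and C may share a colour, B may not.
domains = {"A": ["red", "green"], "B": ["red", "green"], "C": ["red", "green"]}
conflicts = {("A", "B"), ("B", "C")}
print(solve(domains, conflicts))      # e.g. {'A': 'red', 'B': 'green', 'C': 'red'}
```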


THE SKINNER BOX

 Dick Pountain/ Idealog 303/ 4th October 2019 10:27:48

We live in paranoid times, and at least part of that paranoia is being provoked by advances in technology. New techniques of surveillance and prediction cut two ways: they can be used to prevent crime and to predict illness, but they can also be abused for social control and political repression – which of these one sees as more important is becoming a matter of high controversy. Those recent street demonstrations in Hong Kong highlighted the way that sophisticated facial recognition tech, when combined with CCTV built into special lamp-posts, can enable a state to track and arrest individuals at will. 

But the potential problems go way further than this, which is merely an extension of current law-enforcement technology. Huge advances in AI and Deep Learning are making it possible to refine those more subtle means of social control often referred to as ‘nudging’. To nudge means getting people to do what you want them to do, or what is deemed good for them, not by direct coercion but by clever choice of defaults that exploit people’s natural biases and laziness (both of which we understand better than ever before thanks to the ground-breaking psychological research of Daniel Kahneman and Amos Tversky).   

The arguments for and against nudging involve some subtle philosophical principles, which I’ll try to explain as painlessly as possible. Getting people to do “what’s good for them” raises several questions: who decides what’s good; is their decision correct; even if it is, do we have the right to impose it; and what about free will? Liberal democracy (which is what we still, just about, have, certainly compared to Russia or China) depends upon citizens being capable of making free decisions about matters important to the conduct of their own lives, but what if advertising, or addiction, or those intrinsic defects of human reasoning that Kahneman uncovered, so distort their reckoning as to make them no longer meaningfully free – what if they’re behaving in ways contrary to their own expressed interests and injurious to their health? Examples of such behaviours, and the success with which we’ve dealt with them, might be compulsory seat belts in cars (success), crash helmets for motorcyclists (success), smoking bans (partial success), US gun control (total failure).

Such control is called “paternalism”, and some degree of it is necessary to the operation of the state in complex modern societies, wherever the stakes are sufficiently high (as with smoking) and the costs of imposition, in both money and offended freedom, are sufficiently low. However there are libertarian critics who reject any sort of paternalism at all, while an in-between position, "libertarian paternalism", claims that the state has no right to impose but may only nudge people toward correct decisions, for example over opting-in versus opting-out of various kinds of agreement – mobile phone contracts, warranties, mortgages, privacy agreements. People are lazy and will usually go with the default option, careful choice of which can nudge rather than compel them to the desired decision. 

The thing is, advances in AI are already enormously amplifying the opportunities for nudging, to a paranoia-inducing degree. The nastiest thing I saw at the recent AI conference in King’s Cross was an app that reads shoppers’ emotional states using facial analysis and then raises or lowers the price of items offered to them on-the-fly! Or how about Ctrl-Labs’ app that non-invasively reads your intention to move a cursor (last week Facebook bought the firm). Since vocal cords are muscles too, that non-invasive approach might conceivably be extended with even deeper learning to predict your speech intentions, the voice in your head, your thoughts…

I avoid both extremes in such arguments about paternalism. I do believe that the climate crisis is real and that we’ll need to modify human behaviour a lot in order to survive, so any help will be useful. On the other hand I was once an editor at Oz magazine and something of a libertarian rebel-rouser in the ‘60s. In a recent Guardian interview, the acerbic comedy writer Chris Morris (‘Brass Eye’, ‘Four Lions’) described meeting an AA man who showed him the monitoring kit in his van that recorded his driving habits. Morris asked “Isn’t that creepy?” but the man replied “Not really. My daughter’s just passed her driving test and I’ve got half-price insurance for her. A black box recorder in her car and a camera on the dashboard measure exactly how she drives and her facial movements. As long as she stays within the parameters set by the insurance company, her premium stays low.” This sort of super-nudge comes uncomfortably close to China’s punitive Social Credit system: Morris called it a “Skinner Box”, after the American behaviourist BF Skinner who used one to condition his rats…



Tuesday 14 April 2020

I SECOND THAT EMOTION

Dick Pountain/ Idealog302/ 2nd September 2019 10:24:10

Regular readers might have been surprised when I devoted my previous two columns to AI, a topic about which I’ve often expressed scepticism here. There’s no reason to be, because it’s quite possible to be impressed by the latest advances in AI hardware and software while remaining a total sceptic about the field’s more hubristic claims. And a book that reinforces my scepticism has arrived at precisely the right time, ‘The Strange Order of Things’ by Antonio Damasio. He’s a Portuguese-American professor of neuroscience who made his name researching the neurology of the emotions, and the critical role they play in high-level cognition (contrary to current orthodoxy).

Damasio also has a deep interest in the philosophy of consciousness, to which this book is a remarkable contribution. He admires the 17th-century Dutch philosopher Baruch Spinoza, who among his many other contributions made a crucial observation about biology – that the essence of all living creatures is to create a boundary between themselves and the outside world, within which they try their hardest to maintain a steady state, the failure of which results in death. Spinoza called this urge to persevere ‘conatus’, but Damasio prefers the more modern name ‘homeostasis’.

The first living creatures – whose precise details we don’t, and may never, know – must have emerged by enclosing a tiny volume of the surrounding sea water with a membrane of protein or lipid, then controlling the osmotic pressure inside to prevent shrivelling or bursting. And they became able to split this container in two to reproduce themselves. Whether one calls this conatus or homeostasis, it is the original source of all value. When you’re trying to preserve your existence by maintaining your internal environment, it becomes necessary to perceive threats and benefits to it and to act upon them, so mechanisms that distinguish ‘good’ from ‘bad’ are hard-wired not merely into the first single-celled creatures, but into every cell of the multicellular creatures – ourselves included – that evolved from them. The simplest bacteria possess ‘senses’ that cause them to swim toward food or away from harmful chemicals, or to and from light, or whatever.

Damasio’s work traces the way that, as multicellular creatures evolved, not only does this evaluation mechanism persist in every individual cell, but as more complex body forms evolved into separate organs, then nervous systems, then brains, this evaluation became expressed in ever higher-level compound systems. In our case it’s the duty of the limbic system within our brain, which controls what we call ‘emotions’ using many parallel networks of electrical nerve impulses and chemical hormone signals. And Damasio believes that our advanced abilities to remember past events, to predict future events, and to describe things and events through language, are intimately connected into this evaluatory system. There’s no such thing as a neutral memory or word, they’re always tagged with some emotional connotation, whether or not that ever gets expressed in consciousness.

And so at last I arrive at what this has to do with AI, and my scepticism towards its strongest claims. Modern AI is getting extraordinarily good at emulating, even exceeding, our own abilities to remember, to predict and describe, to listen and to speak, to recognise patterns and more. All these are functions of higher consciousness, but that isn’t the same thing as intelligence. AI systems don’t and can’t have any idea what they are for, whereas in our body every individual cell knows what it’s for, what it needs (usually glucose), and when to quit. You have five fingers because while in your mother’s womb, the cells in between them killed themselves (look up ‘apoptosis’) for the sake of your hand.

It’s not so much the question of whether robots could ever feel – which obsesses sci-fi authors – but of whether any AI machine could ever truly reproduce itself. Every cell of a living creature contains its own battery (the mitochondrion), its own blueprint (the DNA) and its own constructor (the ribosomes), it’s an ultimate distributed self-replicating machine honed by 3.5 billion years of evolution. When you look at an AI processor chip under the microscope it may superficially resemble the cellular structure of a plant or animal with millions of transistors for cells, but they don’t contain their own power sources, nor the blueprints to make themselves, and they don’t ‘know’ what they’re for. The chip requires an external wafer fab the size of an aircraft hangar to make it, the robot’s arm requires an external 3D printer to make it. As Damasio shows so brilliantly, the homeostatic drive of cells permeates our entire bodies and minds, and those emotional forces that rationalists reject as ‘irrational’ are just our body trying to look after itself.


INTELLIGENCE ON A CHIP

Dick Pountain/ Idealog 301/ 4th August 2019 12:59:28

I used to write a lot about interesting hardware back in the 1980s. I won a prize for my explanation of HP’s PA-RISC pipeline, and a bottle of fine Scotch from Sir Robin Saxby for a piece on ARM’s object-oriented memory manager. Then along came the 486, the CPU world settled into a rut/groove, and Byte closed. I never could summon the enthusiasm to follow Intel’s cache and multi-core shenanigans.

Last month’s column was about the CogX AI festival in King’s Cross, but confined to software matters: this month I’m talking hardware, which to me looks as exciting as that ‘80s RISC revolution. Various hardware vendors explained the AI problems their new designs are intended to solve, which are often about ‘edge computing’. The practical AI revolution that’s going on all around us is less about robots and self-driving cars than about phone apps.

You’ll have noticed how stunningly effective Google Translate has become nowadays, and you may also have Alexa (or one of its rivals) on your table to select your breakfast playlist. Google Translate performs the stupendous amounts of machine learning and computation it needs on remote servers, accessed via The Cloud. On the other hand, stuff like voice or face recognition on your phone is done by the local chipset. Neither running Google Translate on your phone nor running face recognition in the cloud makes any sense, because bandwidth isn’t free, and latency is more crucial in some applications than in others. The problem domain has split into two – the centre and the edge of the network.

Deep learning is mostly done using convolutional neural networks that require masses of small arithmetic operations to be done very fast in parallel. GPU chips are currently favoured for this job since, although originally designed for mashing pixels, they’re closer to what’s needed than conventional CPUs. Computational loads for centre and edge AI apps are similar in kind but differ enormously in data volumes and power availability, so it makes sense to design different processors for the two domains. While IBM, Intel and other big boys are working to this end, I heard two smaller outfits presenting innovatory solutions at CogX.

Nigel Toon, CEO of Cambridge-based Graphcore, aims at the centre with the IPU-Pod, a 42U rack-mount board that delivers 16 PetaFLOPs of mixed-precision convolving power. This board holds 128 of Graphcore’s Colossus GC2 IPUs (Intelligence Processor Unit) each delivering 500TFlops and running up to 30,000 independent program threads in parallel in 1.2GB of on-chip memory. This is precisely what’s needed for huge knowledge models, keeping the data as close as possible to the compute power to reduce bus latency – sheer crunch rather than low power is the goal.

Mike Henry, CEO of US outfit Mythic was instead focussed on edge processors, where low power consumption is absolutely crucial. Mythic has adopted the most radical solution imaginable, analog computing: their IPU chip contains a large analog compute array, local SRAM memory to stream data between the network’s nodes and a single-instruction multiple-data (SIMD) unit for processing operations the analog array can’t handle. Analog processing is less precise than digital – which is why the computer revolution was digital – but for certain applications that doesn’t matter. Mythic’s IPU memory cells are tunable resistors in which computation happens in-place as input voltage turns to output current according to Ohm’s Law, in effect multiplying an input vector by a weight matrix. Chip size and power consumption are greatly reduced by keeping data and compute in the same place, hence wasting less energy and real-estate in A-to-D conversion. (This architecture may be more like the way biological nerves work too).
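Stripped of the exotic physics, the sum each column of resistor cells performs is nothing more than this (a NumPy sketch with made-up numbers):

```python
import numpy as np

# What the analog array computes, in digital miniature: Ohm's law (I = G * V)
# summed along each output wire is a matrix-vector multiply. Numbers are made
# up; on the chip the 'weights' are tuned conductances, not floats.
conductances = np.array([[0.2, 0.5, 0.1],      # weight matrix G
                         [0.7, 0.1, 0.3]])
input_voltages = np.array([1.0, 0.5, 0.2])     # input vector V
output_currents = conductances @ input_voltages
print(output_currents)                          # [0.47 0.81]
```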

To effectively use the extra intelligence these chips promise we’ll need more transparent user interfaces, and the most exciting presentation I saw at CogX was ‘Neural Interfaces and the Future of Control’ by Thomas Reardon (who once developed Internet Explorer for Microsoft, but has since redeemed himself). Reardon is a pragmatist who understands that requiring cranial surgery to interface to a computer is a bit of a turn-off (we need it like, er, a hole in the head) and his firm Ctrl-Labs has found a less painful way.

Whenever you perform an action your brain sends signals via motor neurons to the requisite muscles, but when you merely think about that action your brain rehearses it by sending both the command and an inhibitory signal. Ctrl-Labs uses a non-invasive electromyographic wristband that captures these motor neuron signals from the outside and feeds them to a deep-learning network – their software then lets you control a computer by merely thinking the mouse or touchscreen actions without actually lifting a finger. Couch-potato-dom just moved up a level.

Wednesday 8 January 2020

TIME SERVED

Dick Pountain/ Idealog 300/ 5th July 2019 07:51:08

My 300th column is a milestone that deserves a meaty subject, and luckily the recent CogX 2019 festival provides a perfect one. I reckoned I’d served my time in the purgatory that is the computer show: for 15 years I spent whole weeks at CeBIT in Hannover, checking out new tech for Byte magazine; I’ve done Comdex in Vegas (stayed in Bugsy Siegel’s horrendous Flamingo where you walked through a mile of slots to get to your room); I’ve flown steerage to Taipei to help judge product awards at Computex. I thought I was done, but CogX had two irresistible attractions: it brought together the world’s top AI researchers for three days of serious talk, and it was in King’s Cross, a short walk down the canal from my house, in what’s now called ‘The Knowledge Quarter’ - British Library, Francis Crick Institute, Arts University (Central and St Martins rolled into one brand-new campus), YouTube HQ, with Google’s still under construction.

CogX was the first proper show to be held in the futuristic Coal Drops Yard development and it was big – 500 speakers on 12 stages, some in large geodesic tents. It was also silly expensive at £2000 for three-days-all-events or £575 per day (naturally I had a press pass). Rain poured down on the first day causing one wag to call it ‘Glastonbury for nerds’, but it was packed with standing-room-only at every talk I attended. A strikingly young, trendy and diverse crowd, most I imagine being paid for by red-hot Old Street startups. Smart young people who don’t aspire to be DJs or film stars now seem to aim at AI instead. This made me feel like an historic relic, doffing my cap on the way out, but in a nice way.

Perhaps you suspect, as I did beforehand, that the content would be all hype and bullshit, but you’d be very wrong. It was tough to choose a dozen from the 500. I went mostly for the highly techy or political, skipping the marketing and entrepreneurial ones, and the standard of talks, panel discussions and organisation was very impressive. This wasn’t a conference about how Machine Learning (ML) or deep learning work, that’s now sorted. These folk have their supercomputers and ML tools that work, it’s about what they’re doing with them, and whether they should be, and who’s going to tell them.

David Ferrucci (formerly IBM Watson, now Elemental Cognition) works on natural language processing and making ML decisions more transparent, using a strategy that combines deep-learning and database search with interactive tuition. Two women buy mint plants: one puts hers on her windowsill where it thrives, the other in a dark room where it doesn't. His system threw out guesses and questions until it understood that plants need light, and that light comes through windows. Second story: two people buy wine, one stores it in the fridge, the other on the windowsill where it spoils. More guesses, more questions, his system remembers what it learned from the mint story, deduces that light is good for plants but bad for wine. To make machines really smart, teach them like kids.

Professor Maja Pantić (Affective and Behavioural Computing at Imperial College, head of the Samsung AI lab in Cambridge) told a nice story about autism and a nasty one about supermarkets. They’ve found that autistic children lack the ability to integrate human facial signals (mouth, eyes, voice etc) and so become overwhelmed and terrified. An AI robotic face can separate these signals into a format the child can cope with. On the other hand, supermarkets can now use facial recognition software to divine the emotional state of shoppers and so change the prices charged to them on the fly. Deep creepiness.

Eric Beinhocker (Oxford Martin School) and César Hidalgo (MIT Media Lab) gave mind-boggling presentations on the way AI is now used to build colossal arrays of virtual environments – rat mazes, economic simulations, war games – on which to train other ML systems, thus exponentially reducing training times and improving accuracy. Stephen Hsu (Michigan State) described how ML is now learning from hundreds of thousands of actual human genomes to identify disease mutations with great accuracy, while Helen O’Neill (Reproductive and Molecular Genetics, UCL) said combining this with CRISPR will permit choosing not merely the gender but many other traits of unborn babies, maybe within five years. A theme that emerged everywhere was, ‘we are already doing unprecedented things that are morally ambiguous, even God-like, but the law and the politicians haven’t a clue. Please regulate us, please tell us what you want done with this stuff’. But CogX contained way too much for one column; more next month about extraordinary AI hardware developments and what it’s all going to mean.

Link to YouTube videos of my choices: https://www.youtube.com/playlist?list=PLL4ypMaasjt-_K14PNE6YQRQIlmBhHoZE

[Dick Pountain found the food in King’s Cross, especially the coffee, more interesting than Hannover Messe or Vegas, somewhat less so than snake wine in Taipei ]


ENERGY DRINKERS

Dick Pountain/ Idealog 299/ 7th June 2019 09:52:38

If you enjoy programming as much as I do, you're likely to have encountered the scenario I'm about to describe. You're tackling a really knotty problem that involves using novel data types and frequent forays into the manuals: you eventually crack it, then glance at the clock and realise that hours have passed in total concentration, oblivious to all other influences. You feel suddenly thirsty, rise to make a cup of tea and feel slightly weak at the knees, as if you'd just run a mile. That's because using your brain so intensively is, energetically, every bit as demanding as running: it uses up your blood glucose at a ferocious rate. Our brains burn glucose faster than any of our other organs, up to 20% of our total consumption rate.

How come? The brain contains no muscles, doesn't do any heavy lifting or moving: all it does is shift around images, ideas and intentions which surely must be weightless? Not so. I'll admit that a constant refrain of mine in this column has been that the brain isn't a digital computer in the way the more naive AI enthusiasts believe, but that's not to say that it therefore isn't a computing device of a very different (and not fully understood) kind - it most certainly is, and computing consumes energy. Even though a bit would appear to weigh nothing, the switching of silicon gates or salty-water neurons requires energy to perform and is less than 100% efficient. Rather a lot of energy actually if a lot of bits or neurons are involved, and their very tiny nature makes it easy to assemble very, very large numbers of them in a smallish space.

This was brought home to me yesterday in a most dramatic way via the MIT Technology Review (https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes) which describes recent research into the energy consumption of state-of-the-art Natural Language Processing (NLP) systems, of the sort deployed online and behind gadgets like Alexa. Training a single really large deep-learning system consumes colossal amounts of energy, generating up to five times the CO₂ emitted by a car (including fuel burned) over its whole lifetime. How could that be possible? The answer is that it doesn't all happen on the same computer, which would be vaporised in a millisecond. It happens in The Cloud, distributed all across the world in massively-parallel virtual machine arrays working on truly humongous databases, over-and-over-again as the system tweaks and optimises itself.

We've already had a small glimpse of this fact through the mining of BitCoins, where the outfits that profit from this weirdly pathological activity have to balance the millions they mine against equally enormous electricity bills, and must increasingly resort to basing their servers in the Arctic or sinking them into lakes to water cool them. Yes indeed, computing can consume a lot of energy when you have to do a lot of it, and the deceptive lightness of a graphical display hides this fact from us: even live-action-real-shoot-em-up games nowadays demand multiple supercomputer-grade GPUs.

It was a hero of mine, Carver Mead, who first made me think about the energetics of computing in his seminal 1980 book 'Introduction to VLSI Systems'. Chapter 9 on the 'Physics of Computational Systems' not only explains, in thermodynamic terms, how logic gates operate as heat engines, but also employs the 2nd Law to uncover the constraints on consumption for any conceivable future computing technology. In particular he demolished the hope of some quantum computing enthusiasts for 'reversible computation' which recovers the energy used: he expected that would use more still.

The slice of total energy usage that goes into running the brains of the eight billion of us on the planet is way less than is used for transport or heating, thanks to three-and-a-half billion years of biological evolution that forged our brain into a remarkably compact computing device. That evolution changed our whole anatomy, from the bulging brain-case to the wide female pelvis needed to deliver it, and it also drove us to agriculture - extracting glucose fast enough to run it reliably forced us to invent cooking and to domesticate grass seeds.

Now AI devices are becoming important to our economies, and the Alexa on your table makes that feel feasible, but vast networks of energy-guzzling servers lie behind that little tube. Silicon technology just can't squeeze such power into the space our fat-and-protein brains occupy. Alongside the introduction of the electric car, we're about to learn some unpleasant lessons concerning the limits of our energy generation infrastructure.

[Dick Pountain's favourite energy drink is currently Meantime London Pale Ale]

