Monday 24 August 2020

VIRTUALLY USELESS

Dick Pountain/ Idealog308/ 6th March 2020 10:58:31


Online booking is one of the more noticeable ways computer technology has changed our lives for the better. See an advert for a concert, play or movie – maybe on paper, maybe on Facebook – and in a few keystrokes you can have e-tickets, removing even the terrible chore of collecting them from the box-office (and, yes, I do know that just barking at Alexa to do it is quicker still, but I've decided not to go there). However, online booking has also shown me some of the limitations of the virtual world. For example – admittedly ten years ago – EasyJet's website was once so laggy that I thought I was clicking August 6th, but the drop-down hadn't updated and the tickets turned out to be for the 5th. Ouch.

More recently I booked a ticket for a favourite concert this March, only to discover that it's actually in March 2021, not 2020. OK, that's my fault for misreading, but the ad was embedded among a bunch of other concerts that are in March 2020. Another example: last week I read a Facebook share from a close friend that appeared to be a foul-mouthed diatribe against atheism. I was somewhat surprised, even shocked, but fortunately I clicked it to reveal a longer meme that, further down, refuted the diatribe. Facebook wouldn't scroll down to show that part because the post was a graphic, not text.

These are all symptoms of a wider cognitive weakness of two-dimensional online interfaces, one that's leading some people to call for a return to paper for certain activities, including education. The problem is all about attention. Visual UIs employ the metaphor of buttons you press to perform actions, directing your attention to their important areas. But this has a perverse side-effect: the more you use them and the more expert you become, the less you notice their less important areas (like that year, which wasn't on a button).

Virtualising actions devalues them. Pressing a physical button – whether to buy a bar of chocolate or to launch a nuclear missile – used to produce a bodily experience of pressure and resistance, which led to an anticipation of the effect, which in turn led to motivation. Pressing on-screen buttons creates no such effect, so one may aimlessly press them in reflex fashion (hence the success of some kinds of phishing attack).

It’s more than coincidence that 'Cancel Culture' has arisen in the age of social media and smartphones. This refers to a new style of boycott in which some celebrity who’s expressed an unpopular opinion on social media gets "cancelled", that is, dropped by most of their followers, which can lead to a steep decline in their career. But of course ‘Cancel’ is the name of that ubiquitous on-screen button you hit without thinking when something goes wrong, hence its extension to removing real people by merely saying the word.

Reading text on a screen rather than on paper also reveals a weakness: cognitive-science experiments have demonstrated that less goes in and less is remembered, and this is truer still when questions are asked or problems are set in school or college. Receiving such requests from a human being produces a motivation to answer that’s quite absent in onscreen versions: according to cognitive psychologist Daniel Willingham, “It’s different when you’re learning from a person and you have a relationship with that person. That makes you care a little bit more about what they think, and it makes you a little bit more willing to put forth effort”.

Virtualisation also encourages excessive abstraction – trying to abstract 'skills' from particular content tends to make the content seem arbitrary. Cognitive scientists have long known that what matters most for reading comprehension isn't some generally applicable skill but rather how much background knowledge and vocabulary the reader has relating to the topic. Content does matter and can't be abstracted away, and ironically enough computers are the perfect tools for locating relevant content quickly, rather than tools to train you in abstract comprehension skills. Whenever I read mention of some famous painting I go straight to Google Images to see it, ditto with Wikipedia for some historical event. We're getting pretty close to Alan Kay's vision of the Dynabook.

It will be interesting to see what these cognitive researchers make of voice-operated interfaces like Alexa. Are they Artificially Intelligent enough to form believable relationships that inspire motivation? Sure, people who're used to Alexa (or SatNav voices) do sort of relate to them, but it's still mostly one-way, like “Show me the Eiffel Tower” – they're no good at reasoning or deducing. And voice as a delivery mechanism would feel like a step backward into my own college days, trying frantically to scribble notes from a lecturer who gabbled…

[Dick Pountain always notices the ‘Remember Me’ box three milliseconds after he’s hit Return]


THE NUCLEAR OPTION

 Dick Pountain/ Idealog307/ 8th February 2020 14:49:23


Those horrific wild-fires in Australia may prove to be the tipping point that gets people to start taking the threat of climate change seriously. Perhaps IT isn’t, at the moment, the industry most responsible for CO₂ emissions, but that’s no reason for complacency. On the plus side IT can save fossil fuel usage, when people email or teleconference rather than travelling; on the minus side, the electrical power consumed by all the world’s social media data centres is very significant and growing (not to mention what’s scoffed up mining cryptocurrencies). IT, along with carbon-reducing measures like switching to electric vehicles, vastly increases the demand for electricity, and I’m not confident that all this demand can realistically be met by renewable solar, wind and tidal sources, which may now have become cheap enough but remain intermittent.

That means that either storage, or some alternative back-up source, is needed to smooth out supply. A gigantic increase in the capacity of battery technologies could bridge that gap, but nothing on a big enough scale looks likely (for reasons I’ve discussed in a previous column). For that reason, and unpopular though it may be, I believe we must keep some nuclear power. That doesn’t mean I admire the current generation of fission reactors, which became unpopular for very good reasons: the huge cost of building them; the huge problem of disposing of their waste; and, worst of all, the realisation that human beings just aren’t diligent enough to be put in charge of machines that fail so unsafely. There are other nuclear technologies, though, that don’t share these drawbacks but haven’t yet been researched sufficiently to get into production.

For about 50 years I’ve been hopeful for nuclear fusion (and like all fusion fans have been perennially disappointed). However, things now really are looking up, thanks to two new lines of research: self-stable magnetic confinement and alpha emission. The first dispenses with those big metal doughnuts and their superconducting external magnets, and replaces them with smoke-rings – rapidly spinning plasma vortices that generate their own confining magnetic field. The second, pioneered by Californian company TAE Technologies, seeks to fuse ordinary hydrogen with boron to generate alpha particles (helium nuclei), instead of fusing deuterium and tritium to produce neutrons. Since alpha particles, unlike neutrons, are electrically charged, they can directly induce a current in an external conductor without leaving the apparatus. Neutrons must be absorbed into an external fluid to generate heat, which then drives a turbine, but in the process they render the fabric of the whole reactor radioactive, which alphas do not.

The most promising future fission technology is the thorium reactor, in which fission takes place in a molten fluoride salt. Such reactors can be far smaller than uranium ones, small enough to be air-cooled; they produce almost no waste, and they fail safe because fission fizzles out, rather than running wild, if anything goes wrong. Distributed widely as local power stations, they could replace the current big central behemoths. That they haven’t caught on is partly due to industry inertia, but also because they still need a small amount of uranium-233 as a neutron source, which gets recycled like a catalyst. Now, though, a team of Russian researchers is proposing a hybrid reactor design in which a deuterium-tritium fusion plasma, far too small to generate power itself, is employed instead of uranium to generate the neutrons that drive thorium fission.

A third technology I find encouraging isn’t a power source, but might just revolutionise power transmission. The new field of ‘twistronics’ began in 2018 when an MIT team led by Pablo Jarillo-Herrero announced a device consisting of two layers of graphene stacked one upon the other, which becomes superconducting if those layers are very slightly twisted to create a moiré pattern between their regular grids of carbon atoms. When you rotate the top layer by exactly 1.1° from the one below, it seems that electrons travelling between the layers are slowed down sufficiently that they pair up to form the superconducting ‘fluid’, albeit only at around 1.7°K, so still deep in cryogenic territory. Twisted graphene promises a new generation of tools for studying the basis of superconduction: you’ll be able to tweak a system’s properties more or less by turning a knob, rather than having to synthesise a whole new chemical. Such tools should help speed the search for the ultimate prize, a room-temperature superconductor. That’s what we need to pipe electricity generated by solar arrays erected in the world’s hot deserts into our population centres with almost no loss. Graphene itself is unlikely to be such a conductor, but it may be what helps to discover one.

[ Dick Pountain ain’t scared of no nucular radiashun]      


TO A DIFFERENT DRUMMER

 Dick Pountain/ Idealog 306/  January 6th 2020

My Christmas present to myself this year was a guitar, an Ibanez AS73 Artcore. This isn't meant to replace my vintage MIJ Strat but rather to complement it in a jazzier direction. 50-odd years ago I fell in love with blues, ragtime and country finger-picking, then slowly gravitated toward jazz via Jim Hall and Joe Pass, then to current Americana-fusionists like Bill Frisell, Charlie Hunter and Julian Lage (none of whom I'm anywhere near skill-wise). It's a long time since I was in a band and I play mostly for amusement, but I can't escape the fact that all those idols work best in a trio format, with drums and bass. My rig does include a Hofner violin bass, a drum machine and a looper pedal to record and replay accompaniments, and I have toyed with apps like Band-in-a-Box, or playing along to Spotify tracks, but find none of these really satisfactory -- too rigid, no feedback. Well, I've mentioned before in this column my project to create human-sounding music by wholly programmatic means. The latest version, which I've named 'Algorhythmics', is written in Python and is getting pretty powerful. I wonder, could I use it to write myself a robot trio?

Algorhythmics starts out using native MIDI format, treating pitch, time, duration and volume data as four separate streams, each represented by a list of ASCII characters. In raw form this data just sounds like a hellish digital musical-box, and the challenge is to devise algorithms that inject structure, texture, variation and expression. I've had to employ five levels of quasi-random variation to achieve something that sounds remotely human. The first level composes the data lists themselves by manipulating them: duplicating, reversing, reflecting and seeding them with randomness. The second level employs two variables I call 'arp' (for arpeggio) and 'exp' (for expression) that alter the way notes from different MIDI tracks overlap, to control legato and staccato. A third level produces tune structure by writing functions called 'motifs' to encapsulate short tune fragments, which can then be assembled like Lego blocks into bigger tunes with noticeably repeating themes. Motifs alone aren't enough though: if you stare at wallpaper with a seemingly random pattern, you'll invariably notice where it starts to repeat, and the ear has this same ability to spot (and become bored by) literal repetition. Level four has a function called 'vary' that subtly alters the motifs inside a loop at each pass, and applies tables of chord/scale relations (gleaned from online jazz tutorials and a book on Bartok's composing methods) to harmonise the fragments. Level five is the outer loop that generates the MIDI output, in which blocks of motifs are switched on and off under algorithmic control, like genes being expressed in a string of DNA.
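
To give a flavour of what levels three and four mean in practice, here is a tiny Python sketch of the motif/vary idea. It isn't the real Algorhythmics code: it assumes the third-party midiutil package, and the function names are purely illustrative.

    # A minimal sketch of the motif/vary idea, not the real Algorhythmics code.
    # Assumes the third-party 'midiutil' package; all names are illustrative.
    import random
    from midiutil import MIDIFile

    def motif(root):
        """A short tune fragment: (pitch, duration) pairs built on a root note."""
        return [(root, 1), (root + 4, 0.5), (root + 7, 0.5), (root + 12, 2)]

    def vary(fragment):
        """Subtly alter a motif on each pass by nudging the odd pitch."""
        out = []
        for pitch, dur in fragment:
            if random.random() < 0.3:              # occasional variation
                pitch += random.choice((-2, -1, 1, 2))
            out.append((pitch, dur))
        return out

    # Assemble motifs like Lego blocks into a longer line, varying on each pass
    mf = MIDIFile(1)
    mf.addTempo(track=0, time=0, tempo=120)
    t = 0.0
    for bar in range(8):
        for pitch, dur in vary(motif(60)):         # 60 = middle C
            mf.addNote(track=0, channel=0, pitch=pitch, time=t,
                       duration=dur, volume=random.randint(70, 100))
            t += dur
    with open("sketch.mid", "wb") as f:
        mf.writeFile(f)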

So my robot jazz trio is a Python program called TriBot that generates improvised MIDI accompaniments -- for Acoustic Bass and a General MIDI drum kit -- and plays them into my Marshall amplifier. The third player is of course me, plugged in on guitar. The General MIDI drum kit feels a bit too sparse, so I introduced an extra drum track using ethnic instruments like Woodblock, Taiko Drum and Melodic Tom. TriBot lets me choose tempo, key and scale (major, minor, bop, blues, chromatic, various modes) through an Android menu interface, and my two robot colleagues will improvise away until halted. QPython lets me save new TriBot versions as clickable Android apps, so I can fiddle with its internal works as ongoing research.
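
Purely as an illustration of what 'choose key and scale' involves, here is a toy robot bassist that walks a chosen scale. The scale spellings are common conventions and the names are made up, not lifted from TriBot itself.

    # Illustrative only: a tiny robot bassist that walks a chosen scale.
    import random

    SCALES = {
        "major":     (0, 2, 4, 5, 7, 9, 11),
        "minor":     (0, 2, 3, 5, 7, 8, 10),
        "blues":     (0, 3, 5, 6, 7, 10),
        "chromatic": tuple(range(12)),
    }

    def walking_bass(key=36, scale="blues", beats=16):
        """Yield MIDI pitches that wander stepwise through the chosen scale."""
        degrees = SCALES[scale]
        idx = 0
        for _ in range(beats):
            yield key + 12 * (idx // len(degrees)) + degrees[idx % len(degrees)]
            # take a small random step up or down, staying within two octaves
            idx = max(0, min(idx + random.choice((-2, -1, 1, 2)),
                             2 * len(degrees) - 1))

    print(list(walking_bass(key=36, scale="blues")))   # a low-C blues line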

It's still only a partial solution, because although drummer and bass player 'listen' to one another -- they have access to the same pitch and rhythm data -- they can't 'hear' me and I can only follow them. In one sense this is fair enough, as it's what I'd experience playing alongside much better live musicians. At brisk tempos TriBot sounds like a Weather Report tribute band on crystal meth, which makes for a good workout. But my ideal would be what Bill Frisell described in this 1996 interview with a Japanese magazine (https://youtu.be/tKn5VeLAz4Y, at 47:27): a trio that improvises all together, leaving 'space' for each other. That's possible in theory, using a MIDI guitar like a Parker or a MIDI pickup for my Artcore. I'd need to make TriBot work in real time -- it currently saves MIDI to an intermediate file -- then merge in my guitar's output, translated back into Algorhythmic data format, so that drummer and bass could 'hear' me too and adjust their playing to fit (sketched roughly below). A final magnificent fantasy would be to extend TriBot so it controlled an animated video of cartoon musicians. I won't have sufficient steam left to do either; maybe I'll learn more just trying to keep up with my robots...
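
Purely as a sketch of that real-time direction -- assuming the third-party mido package, a MIDI guitar that shows up as an input port, and placeholder names throughout -- the shape of the loop might look like this.

    # Rough sketch only: real-time MIDI in and out via the third-party 'mido'
    # package. Port handling and translate() are placeholders, not TriBot code.
    import mido

    def translate(msg):
        """Convert an incoming guitar note into Algorhythmics-style data."""
        return msg.note, msg.velocity

    with mido.open_input() as guitar, mido.open_output() as synth:
        for msg in guitar:                  # blocks, yielding notes as they arrive
            if msg.type == "note_on" and msg.velocity > 0:
                pitch, vol = translate(msg)
                # here the bass and drum generators would 'hear' the guitar and
                # adjust; for now the note is simply echoed straight back out
                synth.send(mido.Message("note_on", note=pitch, velocity=vol))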

[ Dick Pountain recommends you watch this 5 minute video, https://youtu.be/t-ReVx3QttA, before reading this column ]


FIRST CATCH YOUR GOAT

Dick Pountain/ Idealog 305/Dec 5th 2019

In a previous column I’ve confessed my addiction to exotic food videos on YouTube, and one I watched recently sparked off thoughts that went way beyond the culinary. Set in the steppes of Outer Mongolia, it follows a charming-but-innocent young Canadian couple who travel to witness an ancient, now rare, cooking event they call 'Boodog' – though it doesn't contain any dog and the name is probably a poor transliteration. The Mongolian chef invited the young man to catch a goat, by hand and on foot, which he then dispatched quickly and efficiently (to the visible discomfort of the young woman), skinned, taking care to keep all four legs intact, sewed up all the orifices including (to the evident amusement of the same young woman) the 'butthole', and finally inflated it like a balloon.

A very small fire of twigs and brushwood was lit in which to heat smooth river pebbles; the goat carcase was chopped up on the bone, then stuffed back into the skin along with the hot pebbles. The chef then produced a very untraditional propane torch, burned off all the fur and crisped the skin, and the end result, looking like some sinister black modernist sculpture, was carried on a litter into his yurt, where they poured out several litres of goat soup and ate the grey, unappetising meat.

Puzzled by the complication of boodogging, I was hardly surprised it's become rare – but then a lightbulb popped on in my head. This wasn’t to do with gastronomy but with energetics. Mongolian steppe soil is only a few inches deep, supporting grass to feed goats and yaks but no trees, hence the tiny fire and the hot stones to maximise storage of its heat, and the anachronistic propane torch (which could hardly have roasted the goat). Hot-stone cooking is common enough around the Pacific but always in a covered pit, which is impossible to dig in the steppe. These Mongolians had ingeniously adapted to the severe energetic limitations of their environment, sensibly submitting to the Second Law of Thermodynamics by making maximum use of every calorie.

Having just written a column about quantum computing, I found that this little anthropological lesson sparked a most unexpected connection. We all live in a world ruled by the Second Law, and are ourselves, looked at from one viewpoint, simply heat engines. Our planet is roughly in thermal equilibrium, receiving energy as white light and UV from the sun and reradiating it back out into space in the infrared: our current climate panic shows just how delicate this equilibrium is. Incoming sunlight has lower entropy than the outgoing infrared, and on this difference all life depends: plants exploit the entropy gradient to build complex carbohydrates out of CO₂ and water, animals eat plants to further exploit the difference by building proteins, and we eat animals and plants and use the difference to make art and science, movies and space shuttles. And when we die the Second Law cheerfully turns us back into CO₂ and ammonia (sometimes accelerated by a crematorium).

The difficulty of building quantum computers arises from the fact that quantum computation takes place in a different world, one governed by the notoriously different rules of quantum mechanics. The tyranny of the Second Law continually tries to disrupt it, in the guise of thermal noise, because any actual quantum computing device must unavoidably be of our world, made of steel and copper and glass, and all the liquid helium in the world can’t entirely hide that fact.

What also occurred to me is that if you work with quantum systems, it must become terribly attractive to fantasise about living in the quantum world, free from the tyranny of thermodynamics. Is that perhaps why the Multiverse interpretation of quantum mechanics is so unaccountably popular? Indeed, to go much further, is an unconscious awareness of this dichotomy actually rather ancient? People have always chafed at the restrictions imposed by gravity and thermodynamics, and have invented imaginary worlds in which they could fly, or live forever, or grow new limbs, or shape-shift, or travel instantaneously, or become invisible at will. Magic, religion and science fiction are all, in a sense, reactions against our physical limits, which exist because of scale: we’re made of matter that’s made of atoms that obey Pauli’s Exclusion Principle, which prevents us from walking through walls or actually creating rabbits out of hats. Those atoms are themselves made from particles subject to different, looser rules, but we’re stuck up here, only capable of imagining such freedom.

And that perhaps is why, alongside their impressively pragmatic adaptability, those Mongolian nomads - who move their flocks with the seasons as they’ve done for centuries, but send their children to university in Ulan Bator and enthusiastically adopt mobile phones and wi-fi - also retain an animistic, shamanistic religion with a belief in guardian spirits. 


A QUANTUM OF SOLACE?

 Dick Pountain/ Idealog304/ 3rd Nov 2019

When Google announced, on Oct 24th, that it had achieved 'quantum supremacy' -- that is, had performed a calculation on a quantum computer faster than any conventional computer could ever do -- I was forcefully reminded that quantum computing is a subject I've been avoiding in this column for 25 years. That prompted a further realisation that it's because I'm sceptical of the claims that have been made. I should hasten to add that I'm not sceptical about quantum mechanics per se (though I do veer closer to Einstein than to Bohr, am more impressed by Carver Mead's Collective Electrodynamics than by Copenhagen, and find 'many worlds' frankly ludicrous). Nor am I sceptical of the theory of quantum computation itself, though the last time I wrote about it was in Byte in 1997. No, what I'm sceptical of are the pragmatic engineering prospects for its timely implementation.

The last 60 years saw our world transformed by a new industrial revolution in electronics, gifting us the internet, the smartphone, Google searches and Wikipedia, Alexa and Oyster cards. The pace of that revolution was never uniform but accelerated to a fantastic extent from the early 1960s thanks to the invention of CMOS, the Complementary Metal-Oxide-Semiconductor fabrication process. CMOS had a property shared by few other technologies, namely that it became much, much cheaper and faster the smaller you made it, resulting in 'Moore's Law', that doubling of power and halving of cost every two years that's only now showing any sign of levelling off.  That's how you got a smartphone as powerful as a '90s supercomputer in your pocket.  CMOS is a solid-state process where electrons whizz around metal tracks deposited on treated silicon, which makes it amenable to easy duplication by what amounts to a form of printing. 

You'll have seen pictures of Google's Sycamore quantum computer that may have achieved 'supremacy' (though IBM is disputing it). It looks more like a microbrewery than a computer. Its 53 working qubits are indeed solid state, but they're superconductors that operate at microwave frequencies and near absolute zero, chilled by liquid helium. The quantum superpositions upon which computation depends collapse at higher temperatures and in the presence of radio noise, and there's no prospect that such an implementation could ever achieve the benign scaling properties of CMOS. Admittedly a single qubit can in theory do the work of millions of CMOS bits, but the algorithms that need to be devised to exploit that advantage are non-intuitive and opaque, and the results of computation are difficult to extract correctly, requiring error-correction techniques that remain unproven at any useful scale. It's not years but decades, or more, from practicality.

Given this enormous difficulty, why is so much investment going into quantum computing right now? Thanks to two classes of problem that appear intractable on conventional computers but are of great interest to extremely wealthy sponsors. The first is the cracking of public-key encryption, a high priority for the world's intelligence agencies, which therefore receives defence funding. The second is the protein-folding problem in biochemistry. The chains of hundreds of amino acids that constitute enzymes can fold and link to themselves in a myriad of different ways, only one of which will produce the proper behaviour of that enzyme, and that behaviour is the target for synthetic drugs. Big Pharma would love a quantum computer that could simulate such folding in real time, like a CAD/CAM system for designing monoclonal antibodies.

What worries me is that the hype surrounding quantum computing is of just the sort that's guaranteed to bewitch technologically illiterate politicians, and it may be resulting in poor allocation of computer science funding. The protein-folding problem is an extreme example of the class of optimisation problems -- others arise in banking, transport routing, storage allocation, product pricing and so on -- all of which are of enormous commercial importance and have been subject to much research effort. For example, twenty years ago constraint solving was one very promising line of study: when faced with an intractably large number of possibilities, apply and propagate constraints to severely prune the tree of possibilities rather than trying to traverse it all (there's a toy example of this pruning below). The promise of quantum computers is precisely that, assuming you could assemble enough qubits, they could indeed just test all the branches, thanks to superposition. In recent years the flow of constraint-satisfaction papers seems to have dwindled: is this because the field has struck an actual impasse, or because the chimera of imminent quantum computers is diverting effort? Perhaps a hybrid approach to these sorts of problem might be more productive: say, hardware assistance for constraint solving, plus deep learning, plus analog architectures, with shared quantum servers anticipated as one, fairly distant, prospect rather than the only bet.
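
For readers who've never met constraint propagation, here is a toy Python sketch -- a tiny map-colouring problem invented purely for illustration, not drawn from any particular research system -- showing how propagating each choice prunes the neighbours' options instead of enumerating every combination.

    # Toy constraint propagation: colour four regions so that neighbours differ.
    # The map and colours are invented purely for illustration.
    NEIGHBOURS = {
        "A": {"B", "C"},
        "B": {"A", "C", "D"},
        "C": {"A", "B", "D"},
        "D": {"B", "C"},
    }
    COLOURS = {"red", "green", "blue"}

    def solve(domains, assignment=None):
        assignment = assignment or {}
        if len(assignment) == len(domains):
            return assignment
        # pick the unassigned region with the fewest colours left ('fail first')
        region = min((r for r in domains if r not in assignment),
                     key=lambda r: len(domains[r]))
        for colour in sorted(domains[region]):
            pruned = {r: set(d) for r, d in domains.items()}
            pruned[region] = {colour}
            # propagate: neighbouring regions can no longer use this colour
            for n in NEIGHBOURS[region]:
                pruned[n].discard(colour)
            if all(pruned[n] for n in NEIGHBOURS[region]):   # no empty domains
                result = solve(pruned, {**assignment, region: colour})
                if result is not None:
                    return result
        return None   # dead branch: a neighbour ran out of colours

    print(solve({r: set(COLOURS) for r in NEIGHBOURS}))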


THE SKINNER BOX

 Dick Pountain/ Idealog 303/ 4th October 2019 10:27:48

We live in paranoid times, and at least part of that paranoia is being provoked by advances in technology. New techniques of surveillance and prediction cut two ways: they can be used to prevent crime and to predict illness, but they can also be abused for social control and political repression – which of these one sees as more important is becoming a matter of high controversy. Those recent street demonstrations in Hong Kong highlighted the way that sophisticated facial recognition tech, when combined with CCTV built into special lamp-posts can enable a state to track and arrest individuals at will. 

But the potential problems go way further than this, which is merely an extension of current law-enforcement technology. Huge advances in AI and Deep Learning are making it possible to refine those more subtle means of social control often referred to as ‘nudging’. To nudge means getting people to do what you want them to do, or what is deemed good for them, not by direct coercion but by clever choice of defaults that exploit people’s natural biases and laziness (both of which we understand better than ever before thanks to the ground-breaking psychological research of Daniel Kahneman and Amos Tversky).

The arguments for and against nudging involve some subtle philosophical principles, which I’ll try to explain as painlessly as possible. Getting people to do “what’s good for them” raises several questions: who decides what’s good; is their decision correct; even if it is, do we have the right to impose it; and what about free will? Liberal democracy (which is what we still, just about, have, certainly compared to Russia or China) depends upon citizens being capable of making free decisions about matters important to the conduct of their own lives. But what if advertising, or addiction, or those intrinsic defects of human reasoning that Kahneman uncovered, so distort their reckoning as to make them no longer meaningfully free – what if they’re behaving in ways contrary to their own expressed interests and injurious to their health? Examples of such behaviours, and of the success with which we’ve dealt with them, might be compulsory seat belts in cars (success), crash helmets for motorcyclists (success), smoking bans (partial success) and US gun control (total failure).

Such control is called “paternalism”, and some degree of it is necessary to the operation of the state in complex modern societies, wherever the stakes are sufficiently high (as with smoking) and the costs of imposition, in both money and offended freedom, are sufficiently low. However there are libertarian critics who reject any sort of paternalism at all, while an in-between position, "libertarian paternalism", claims that the state has no right to impose but may only nudge people toward correct decisions, for example over opting-in versus opting-out of various kinds of agreement – mobile phone contracts, warranties, mortgages, privacy agreements. People are lazy and will usually go with the default option, careful choice of which can nudge rather than compel them to the desired decision. 

The thing is, advances in AI are already enormously amplifying the opportunities for nudging, to a paranoia-inducing degree. The nastiest thing I saw at the recent AI conference in King’s Cross was an app that reads shoppers’ emotional states using facial analysis and then raises or lowers the price of items offered to them on-the-fly! Or how about Ctrl-Labs’ app that non-invasively reads your intention to move a cursor (last week Facebook bought the firm). Since vocal cords are muscles too, that non-invasive approach might conceivably be extended, with even deeper learning, to predict your speech intentions, the voice in your head, your thoughts…

I avoid both extremes in such arguments about paternalism. I do believe that the climate crisis is real and that we’ll need to modify human behaviour a lot in order to survive, so any help will be useful. On the other hand I was once an editor at Oz magazine and something of a libertarian rabble-rouser in the ‘60s. In a recent Guardian interview, the acerbic comedy writer Chris Morris (‘Brass Eye’, ‘Four Lions’) described meeting an AA man who showed him the monitoring kit in his van that recorded his driving habits. Morris asked “Isn’t that creepy?” but the man replied “Not really. My daughter’s just passed her driving test and I’ve got half-price insurance for her. A black box recorder in her car and a camera on the dashboard measure exactly how she drives and her facial movements. As long as she stays within the parameters set by the insurance company, her premium stays low.” This sort of super-nudge comes uncomfortably close to China’s punitive Social Credit system: Morris called it a “Skinner Box”, after the American behaviourist BF Skinner, who used one to condition his rats…


