Sunday, 4 October 2020

HEAD IN THE CLOUDS

 Dick Pountain/Idealog 310/13:53 9 May 2020

When a couple of issues ago Editor Tim asked for tips for newly-working-at-home readers, mine was 'buy a Chromebook', which forced me to face up to how far I've drifted from the original Personal Computer Revolution. That was about everyone having their own CPU and their own data, but I've sold my soul to Google and I can't say I miss it. When first turned on, my Asus C301 Chromebook sucked all my personal data down automatically within five minutes, because it was all in Google Keep or on Google Drive. I do still have a Lenovo laptop but rarely use it, except via those same Google apps, and I don't miss the excitement of Windows updates one bit.  

My love for Google Keep isn't a secret to readers of this column, and it only grows stronger as new features like flawless voice dictation and pen annotations get added. Remember I'm someone who spent 30+ years looking for a viable free-form database to hold all the research data - magazine articles, pictures, diagrams, books, papers, web pages, links - that my work makes me accumulate. The task proved beyond any of the database products I tried, with Idealist, AskSam and the Firefox add-on Scrapbook lasting longer than most. Those with long memories might remember how Microsoft promised to put the retrieval abilities I need right into Windows itself, via an object-oriented file-system that they eventually chickened out of. 

Keep's combination of categories, labels, colour coding and free text search gives me the flexible retrieval system I've been seeking, though it still isn't quite enough on its own: while it can hold pictures and clickable links, they're not as convenient as actual web pages. For a couple of decades I religiously bookmarked web pages, until my bookmarks tree structure became just as unwieldy as my on-disk folders. Nowadays I just save pages to Pocket, which is by far the most useful gadget I have after Keep. A single click on Pocket's icon on the Chrome toolbar grabs a page, fully formatted complete with pictures and a button to go to the original if needed, making bookmarks redundant. I use the free version which supports tags similar to Keep's labels, but there's a paid-for Premium version with a raft of extra archival features for professional use. And like Keep, Pocket is cross-platform so I can see my page library from Windows or a phone. 

Does the cloud make stuff easier to find? Within reason, yes. Save too many pages to Pocket and, as with bookmarks, you've merely shifted the complexity rather than removing it. Sometimes I fail to save something that didn't feel important at the time, then discover months later that it was, and Chrome's history function comes in handy then. I use it most to re-open recent tabs closed by mistake (I have an itchy trigger-finger), but by going to https://myactivity.google.com/ I can review searches years into the past, if I can remember at least one key word. Failing that, it's plain Google Search or the Internet Archive's Wayback Machine, recently released as a Chrome extension.

My music nowadays comes entirely from Spotify, end of. My own photographs remain the main problem. I take thousands and store them both in the cloud and on local hard disk, organised by camera (e.g. Sony A58, Minolta, Lumix), then location (e.g. Park, Italy, Scotland). I've tried those dedicated photo databases that organise by date, but find them of very little help: place reminds me far more effectively than time. My best pictures still go onto Flickr, tagged very thoroughly to exploit its rather superior search functions (it can even search by dominant colour!). Pictures I rate less Flickr-worthy I sometimes put on Facebook in themed Albums, which also helps me find them. The technology does now exist to search by image-matching, but that's mostly used by pros who need to spot theft or plagiarism. I can only express what I'm looking for in words, like 'Pip fixing the Gardner diesel engine'.  

What's required is a deep-AI analysis tool that can facially identify humans from their mugshots in my Contacts, recognise objects like tables, chairs or engines, OCR any text in a picture (like 'Gardner' embossed on a cylinder block), and then output its findings as searchable text tags. It wouldn't surprise me if some Google lab is working on it. I do realise that were Google to go bust, or the internet to shut down, then I'd be stuck with local data again, but if things get that bad then foraging for rats and cats to eat will probably be a higher priority. Right now my tip would still be: keep your feet on the ground and your data in the cloud(s)...
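The OCR part of that wish is already within hobbyist reach. Purely as an illustration (my own minimal sketch, not anything Google offers), here is a few lines of Python that pull legible text out of a photo and turn it into lowercase tags; it assumes the Pillow and pytesseract packages plus a local Tesseract install, the file name is hypothetical, and the face- and object-recognition stages would need separate models.

import re
from PIL import Image
import pytesseract

def ocr_tags(photo_path):
    # Extract any legible text from the image, then keep only plausible words as tags.
    text = pytesseract.image_to_string(Image.open(photo_path))
    words = re.findall(r"[A-Za-z]{3,}", text)
    return sorted({w.lower() for w in words})

print(ocr_tags("gardner_cylinder_block.jpg"))   # hypothetical photo of that engine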

 





 





Saturday, 12 September 2020

A BATTLE OF OLOGIES

 Dick Pountain/ Idealog309/ 4th April 2020 08:01:58


Stewart Brand’s famous epigram “Information wants to be free” has been our collective motto for the last three decades, but few of us remember that it wasn’t an isolated phrase and was accompanied by, for example, “The right information in the right place just changes your life”. During our current nightmare we’re learning that here in the world of matter many other things want to be free that shouldn’t, like serial killers and mosquitoes and viruses, and that controlling information about them has become critical.


Across the world governments and their health organisations are trying to cope with the COVID-19 pandemic but discovering that in the age of TV news and social media it’s impossible to hide anything. We’re witnessing a war develop between two ‘ologies’, epidemiology and social psychology. The coronavirus has very particular characteristics that seem designed by some malevolent deity to test the mettle of pampered citizens of post-modern information societies. There’s a malign cascade of statistical percentages. Some experts estimate that if we do nothing, around 60% of the world population would catch it, roughly the odds of a coin toss – Bad News. But if you do catch it, 80% will experience nothing worse than a cold – Good News. But of the 20% who do have worse symptoms, anywhere from 1 to 5% will die, roughly the odds of Russian Roulette – Bad, Bad News. The virus is highly contagious and thus prone to spread exponentially, but not so lethal as to be self-limiting like Ebola or Marburg.
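Just to make that cascade concrete, here is the arithmetic multiplied out in a few lines of Python, taking the column's illustrative percentages at face value (back-of-envelope stuff, not an epidemiological model):

# The column's figures: 60% infected, 20% of those with worse-than-cold
# symptoms, of whom somewhere between 1% and 5% die.
p_infected = 0.60
p_severe = 0.20                  # severe, given infected
for p_death in (0.01, 0.05):     # death, given severe
    overall = p_infected * p_severe * p_death
    print(f"{p_death:.0%} severe-case fatality -> {overall:.2%} of the whole population")

Run it and the range comes out at roughly 0.1% to 0.6% of everyone alive, which is exactly why these percentages are political dynamite.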


These facts have turned two issues into political dynamite, namely ‘herd immunity’ and virus testing. An exponential virus epidemic spreads rather like a nuclear fission chain reaction, and the way to control it is the same – by introducing a moderator that can reduce the flow of neutrons or virus particles between successive targets. In a viral pandemic herd immunity – that is, many people getting it, surviving and becoming immune – is the best such moderator, and is what happens most often. An effective vaccine is a catalyst that spreads such immunity more quickly and reliably. The problem is that unlike uranium atoms, human beings have minds, and the effect on those minds is nowadays more important than cold percentages.
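To see how the 'moderator' works, here is a toy chain-reaction model in Python (my own sketch, nothing more): each generation of cases would produce R0 new ones, but every survivor who becomes immune soaks up a share of those contacts, and once the effective reproduction number falls below 1 the epidemic fizzles out, just as a reactor goes subcritical.

# Toy generational model: immunity acts as the 'moderator' damping the chain reaction.
R0 = 2.5                          # assumed basic reproduction number
population = 1_000_000
cases, immune = 100, 0
for generation in range(30):
    susceptible = population - immune
    effective_R = R0 * susceptible / population
    new_cases = min(cases * effective_R, susceptible)
    immune += cases               # this generation's survivors become moderators
    print(f"gen {generation:2d}: R_eff = {effective_R:.2f}, new cases ~ {new_cases:,.0f}")
    if new_cases < 1:
        break
    cases = new_cases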


The measures that are being taken by most governments are self-quarantine and social distancing (avoiding contact with other people) which act to moderate the rate of spread, in order to avoid swamping health systems with too many critical cases at once. In most countries these measures depend upon the voluntary cooperation of citizens. There are already tests for the presence of live virus, and there will soon be reliable tests for antibodies in survivors, but there’s controversy over how widely to apply them.


Epidemiologists need good data to check whether isolation measures are working, to get an accurate picture of the lethality rate, to model the spread and calculate how best to allocate finite palliative care resources. And, as American professor Zeynep Tufekci points out in The Atlantic magazine (https://www.theatlantic.com/technology/archive/2020/04/coronavirus-models-arent-supposed-be-right/609271/), whenever a government acts upon the recommendations of the modellers, those actions change the model.


But citizens would very much like to know whether they have caught the virus and need urgent treatment, or have had the mild form and are immune. It would technically be possible to test the whole population, but it’s neither economically nor politically sensible. The cost would be enormous; it would conflict with the principle of isolation if people had to travel to test centres, and be impractical if testing vans had to visit every cottage in the Hebrides. Also no tests are perfect, and both false positives and false negatives could have unpleasant consequences. So mass testing isn’t feasible and would be a poor use of scarce resources – but even if it were possible it might still be counterproductive. Once everyone who’s had mild COVID-19, and so contributed to ‘herd immunity’, knows that for sure, their incentive to continue with isolation might fade away, and worse still they might come to resent those who require it to be continued.


Masks are another psychological issue: medical opinion is that cheap ones aren’t effective and that good ones are only needed by those who deal directly with infected patients. But wearing even a cheap ineffective mask makes a social statement: actually two statements, “I care about me” and “I care about you” (which of the two predominates in each case becomes obvious from the rest of the body language).


Perhaps the best we can conclude is that total freedom of information isn’t always a good thing in emergencies like this, but that social media make it hard to avoid. We’re slipping into the realm of Game Theory, not epidemiology.


Monday, 24 August 2020

VIRTUALLY USELESS

Dick Pountain/ Idealog308/ 6th March 2020 10:58:31


Online booking is one of the more noticeable ways computer technology has changed our lives for the better. See an advert for a concert, play or movie – maybe on paper, maybe on Facebook – and in a few keystrokes you can have e-tickets, removing even the terrible chore of having to collect them from the box-office (and, yes, I do know just barking at Alexa to do it is quicker still, but have decided not to go there). However online booking has also shown me some of the limitations of the virtual world. For example – admittedly ten years ago – EasyJet’s website was once so laggy that I thought I was clicking August 6th, but the drop-down hadn’t updated so the tickets were for the 5th. Ouch.  

More recently I booked a ticket for a favourite concert this March, only to discover that it’s actually in March 2021, not 2020. OK, that’s my fault for misreading, but the ad was embedded among a bunch of other concerts that are in March 2020. Another example: last week I read a Facebook share from a close friend that appeared to be a foul-mouthed diatribe against atheism. I was somewhat surprised, even shocked by this, but fortunately I clicked it to reveal a longer meme that, further down, refuted the diatribe. Facebook wouldn’t scroll down because it was a graphic, not a text post. 

These are all symptoms of the wider cognitive weakness of two-dimensional online interfaces, which is leading some people to call for a return to paper for certain activities, including education. The problem is all about attention. Visual UIs employ the metaphor of buttons you press to perform actions, directing your attention to their important areas. But this has a perverse side-effect: the more you use them and the more expert you become, the less you notice their less-important areas (like that year, which wasn’t on a button).

Virtualising actions devalues them. Pressing a physical button – whether to buy a bar of chocolate or to launch a nuclear missile – used to produce a bodily experience of pressure and resistance, which led to an anticipation of the effect, which led to motivation. Pressing on-screen buttons creates no such effect, so one may aimlessly press them in reflex fashion (hence the success of some kinds of phishing attack). 

It’s more than coincidence that 'Cancel Culture' has arisen in the age of social media and smartphones. This refers to a new style of boycott where some celebrity who’s expressed an unpopular opinion on social media gets “cancelled”, that is, dropped by most of their followers, which can lead to a steep decline in their career. But of course ‘Cancel’ is the name of that ubiquitous on-screen button you hit without thinking when something goes wrong, hence its extension to removing real people merely by saying the word. 

Reading text on a screen rather than on paper also reveals a weakness: cognitive science experiments have demonstrated that less goes in and less is remembered, and this is truer still when questions are asked or problems set in school or college. Receiving such requests from a human being produces a motivation to answer that’s quite absent in onscreen versions: according to cognitive psychologist Daniel Willingham, “It’s different when you’re learning from a person and you have a relationship with that person. That makes you care a little bit more about what they think, and it makes you a little bit more willing to put forth effort”.

Virtualisation also encourages excessive abstraction – trying to abstract 'skills' from particular content tends to make content seem arbitrary. Cognitive scientists have long known that what’s most important for reading comprehension isn’t some generally applicable skill but rather how much background knowledge and vocabulary the reader has relating to the topic. Content does matter and can’t be abstracted away, and ironically enough computers are the perfect tools for locating relevant content quickly, rather than tools to train you in abstract comprehension skills. Whenever I read mention of some famous painting I go straight to Google Images to see it, ditto with Wikipedia for some historical event. We’re getting pretty close to Alan Kay's vision of the Dynabook.

It will be interesting to see what these cognitive researchers make of voice-operated interfaces like Alexa. Are they Artificially Intelligent enough to form believable relationships that inspire motivation? Sure, people who’re used to Alexa (or SatNav actors) do sort of relate to them, but it’s still mostly one-way, like “Show me the Eiffel Tower” – they’re no good at reasoning or deducing. And voice as a delivery mechanism would feel like a step backward into my own college days, trying frantically to scribble notes from a lecturer who gabbled…

[Dick Pountain always notices the ‘Remember Me’ box three milliseconds after he’s hit Return]















    

THE NUCLEAR OPTION

 Dick Pountain/ Idealog307/ 8th February 2020 14:49:23


Those horrific wild-fires in Australia may prove to be the tipping point that gets people to start taking the threat of climate change seriously. Perhaps IT isn’t, at the moment, the industry most responsible for CO₂ emissions, but that’s no reason for complacency. On the plus side IT can save fossil fuel usage, when people email or teleconference rather than travelling: on the minus side, the electrical power consumed by all the world’s social media data centres is very significant and growing (not to mention what’s scoffed up mining cryptocurrencies). IT, along with carbon-reducing measures like switching to electric vehicles, vastly increases the demand for electricity, and I’m not confident that all this demand can realistically be met by renewable solar, wind and tidal sources, which may have now become cheap enough but remain intermittent. 

That means that either storage, or some alternative back-up source, is needed to smooth out supply. A gigantic increase in the capacity of battery technologies could bridge that gap, but nothing on a big enough scale looks likely (for reasons I’ve discussed in a previous column). For that reason, and unpopular though it may be, I believe we must keep some nuclear power. That doesn’t mean I admire the current generation of fission reactors, which became unpopular for very good reasons: the huge cost of building them; the huge problem of disposing of their waste; and worst of all, because we’ve realised that human beings just aren’t diligent enough to be put in charge of machines that fail so unsafely. There are other nuclear technologies, though, that don’t share these drawbacks but haven’t yet been sufficiently researched to get into production.

For about 50 years I’ve been hopeful for nuclear fusion (and like all fusion fans have been perennially disappointed). However things now really are looking up, thanks to two new lines of research: self-stable magnetic confinement and alpha emission. The first dispenses with those big metal doughnuts and their superconducting external magnets, and replaces them with smoke-rings - rapidly spinning plasma vortices that generate their own confining magnetic field. The second, pioneered by Californian company TAE Technologies, seeks to fuse ordinary hydrogen with boron to generate alpha particles (helium nuclei), instead of fusing deuterium and tritium to produce neutrons. Since alpha particles, unlike neutrons, are electrically charged, they can directly induce current in an external conductor without leaving the apparatus. Neutrons must be absorbed into an external fluid to generate heat, which then drives a turbine, but in the process they render the fabric of the whole reactor radioactive, which alpha particles do not.

The most promising future fission technology is the thorium reactor, in which fission takes place in a molten fluoride salt. Such reactors can be far smaller than uranium ones, small enough to be air-cooled, they produce almost no waste, and they fail safe because fission fizzles out rather than runs wild if anything goes wrong. Distributed widely as local power stations, they could replace the current big central behemoths. That they haven’t caught on is partly due to industry inertia, but also because they currently still need a small amount of uranium 233 as a neutron source, which gets recycled like a catalyst. But now a team of Russian researchers are proposing a hybrid reactor design in which a deuterium-tritium fusion plasma, far too small to generate power itself, is employed instead of uranium to generate the neutrons to drive thorium fission.

A third technology I find encouraging isn’t a power source, but might just revolutionise power transmission. The new field of ‘twistronics’ began in 2018 when an MIT team led by Pablo Jarillo-Herrero announced a device consisting of two layers of graphene stacked one upon the other, which becomes superconducting if those layers are very slightly twisted to create a moiré pattern between their regular grids of carbon atoms. When you rotate the top layer by exactly 1.1° from the one below, it seems that electrons travelling between the layers are slowed down sufficiently that they pair up to form the superconducting ‘fluid’, and this happens at around 1.7 K, still firmly in liquid-helium territory. Twisted graphene promises a new generation of tools for studying the basis of superconduction: you’ll be able to tweak a system’s properties more or less by turning a knob, rather than having to synthesise a whole new chemical. Such tools should help speed the search for the ultimate prize, a room-temperature superconductor. That’s what we need to pipe electricity generated by solar arrays erected in the world’s hot deserts into our population centres with almost no loss. Graphene itself is unlikely to be such a conductor, but it may be what helps to discover one.  

[ Dick Pountain ain’t scared of no nucular radiashun]      





      


  





   

TO A DIFFERENT DRUMMER

 Dick Pountain/ Idealog 306/  January 6th 2020

My Christmas present to myself this year was a guitar, an Ibanez AS73 Artcore. This isn't meant to replace my vintage MIJ Strat but rather to complement it in a jazzier direction. 50-odd years ago I fell in love with blues, ragtime and country finger-picking, then slowly gravitated toward jazz via Jim Hall and Joe Pass, then to current Americana-fusionists like Bill Frisell, Charlie Hunter and Julian Lage (none of whom I'm anywhere near skillwise). It's a long time since I was in a band and I play mostly for amusement, but can't escape the fact that all those idols work best in a trio format, with drums and bass. My rig does include a Hofner violin bass, a drum machine and a looper pedal to record and replay accompaniments, and I have toyed with apps like Band-in-a-Box, or playing along to Spotify tracks, but find none of these really satisfactory -- too rigid, no feedback. Well, I've mentioned before in this column my project to create human-sounding music by wholly programmatic means. The latest version, which I've named 'Algorhythmics', is written in Python and is getting pretty powerful. I wonder, could I use it to write myself a robot trio?

Algorhythmics starts out using native MIDI format, by treating pitch, time, duration and volume data as four separate streams, each represented by a list of ASCII characters. In raw form this data just sounds like a hellish digital musical-box, and the challenge is to devise algorithms that inject structure, texture, variation and expression. I've had to employ five levels of quasi-random variation to achieve something that sounds remotely human. The first level composes the data lists themselves by manipulating, duplicating, reversing, reflecting and seeding with randomness. The second level employs two variables I call 'arp' (for arpeggio) and 'exp' (for expression) that alter the way notes from different MIDI tracks overlap to control legato and staccato. A third level produces tune structure by writing functions called 'motifs' to encapsulate short tune fragments, which can then be assembled like Lego blocks into bigger tunes with noticeably repeating themes. Motifs alone aren't enough though: if you stare at wallpaper with a seemingly random pattern, you'll invariably notice where it starts to repeat, and the ear has this same ability to spot (and become bored by) literal repetition. Level four has a function called 'vary' that subtly alters the motifs inside a loop at each pass, and applies tables of chord/scale relations (gleaned from online jazz tutorials and a book on Bartok's composing methods) to harmonise the fragments. Level five is the outer loop that generates the MIDI output, in which blocks of motifs are switched on and off under algorithmic control, like genes being expressed in a string of DNA.
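To give a flavour of the motif-and-vary idea, here is my own minimal sketch (not the actual Algorhythmics code), which assumes the midiutil package for writing MIDI files:

import random
from midiutil import MIDIFile

C_MINOR = [60, 62, 63, 65, 67, 68, 70, 72]    # one octave of C natural minor

def motif(scale, length=4):
    # Level three: a short tune fragment as (pitch, duration) pairs.
    return [(random.choice(scale), random.choice([0.5, 1.0])) for _ in range(length)]

def vary(fragment, scale):
    # Level four: subtly alter the motif on each pass so repeats aren't literal.
    return [(random.choice(scale) if random.random() < 0.25 else p, d)
            for p, d in fragment]

midi = MIDIFile(1)
midi.addTempo(track=0, time=0, tempo=110)

theme, t = motif(C_MINOR), 0.0
for bar in range(8):                           # Level five: the outer loop
    for pitch, duration in vary(theme, C_MINOR):
        midi.addNote(track=0, channel=0, pitch=pitch, time=t, duration=duration, volume=90)
        t += duration

with open("sketch.mid", "wb") as f:
    midi.writeFile(f)

Even this toy shows why level four matters: take out the vary() call and the result immediately sounds like the wallpaper repeating.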

So my robot jazz trio is a Python program called TriBot that generates improvised MIDI accompaniments -- for Acoustic Bass and General MIDI drum kit -- and plays them into my Marshall amplifier. The third player is of course me, plugged in on guitar. The General MIDI drum kit feels a bit too sparse, so I introduced an extra drum track using ethnic instruments like Woodblock, Taiko Drum and Melodic Tom. Tribot lets me choose tempo, key, and scale (major, minor, bop, blues, chromatic, various modes) through an Android menu interface, and my two robot colleagues will improvise away until halted. QPython lets me save new Tribot versions as clickable Android apps, so I can fiddle with its internal works as ongoing research.
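Again purely as a sketch (midiutil once more, and nothing like the real TriBot internals): a bass line and a General MIDI drum part can share one file, because GM drums always live on channel 9 while Acoustic Bass is program 32.

from midiutil import MIDIFile

midi = MIDIFile(2)
midi.addTempo(0, 0, 140)

midi.addProgramChange(0, 0, 0, 32)        # track 0, channel 0: GM Acoustic Bass
walking = [36, 43, 41, 38]                # a crude walking figure, one note per beat
for bar in range(4):
    for beat, pitch in enumerate(walking):
        midi.addNote(0, 0, pitch, bar * 4 + beat, 1, 80)

KICK, SNARE, HAT = 36, 38, 42             # track 1, channel 9: GM drum kit
for bar in range(4):
    for beat in range(4):
        midi.addNote(1, 9, HAT, bar * 4 + beat, 0.5, 70)
        midi.addNote(1, 9, KICK if beat % 2 == 0 else SNARE, bar * 4 + beat, 1, 90)

with open("tribot_sketch.mid", "wb") as f:
    midi.writeFile(f)

Piping a file like that into an amp is the easy half; the hard half, as the next paragraph explains, is making the players 'hear' each other.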

It's still only a partial solution, because although drummer and bass player 'listen' to one another -- they have access to the same pitch and rhythm data -- they can't 'hear' me and I can only follow them. In one sense this is fair enough, as it's what I'd experience playing alongside much better live musicians. At brisk tempos Tribot sounds like a Weather Report tribute band on crystal meth, which makes for a good workout. But my ideal would be what Bill Frisell described in this 1996 interview with a Japanese magazine (https://youtu.be/tKn5VeLAz4Y, at 47:27), a trio that improvise all together, leaving 'space' for each other. That's possible in theory, using a MIDI guitar like a Parker or a MIDI pickup for my Artcore. I'd need to make Tribot work in real-time -- it currently saves MIDI to an intermediate file -- then merge in my guitar's output translated back into Algorhythmic data format, so drummer and bass could 'hear' me too and adjust their playing to fit. A final magnificent fantasy would be to extend TriBot so it controlled an animated video of cartoon musicians. I won't have sufficient steam left to do either; maybe I'll learn more just trying to keep up with my robots... 

[ Dick Pountain recommends you watch this 5 minute video, https://youtu.be/t-ReVx3QttA, before reading this column ]

 

   


   

FIRST CATCH YOUR GOAT

Dick Pountain/ Idealog 305/Dec 5th 2019

In a previous column I’ve confessed my addiction to exotic food videos on YouTube, and one I watched recently sparked off thoughts that went way beyond the culinary. Set in the steppes of Outer Mongolia, it follows a charming-but-innocent young Canadian couple who travel to witness an ancient, now rare, cooking event they call 'Boodog' - though it doesn't contain any dog and is probably a poor transliteration. The Mongolian chef invited the young man to catch a goat, by hand, on foot, which he then dispatched quickly and efficiently (to the visible discomfort of the young woman), skinned, taking care to keep all four legs intact, sewed up all the orifices including (to evident amusement of same young woman) the 'butthole', and finally inflated it like a balloon.

A very small fire of twigs and brushwood was lit in which to heat smooth river pebbles; the goat carcase was chopped up on the bone then stuffed back into the skin, along with the hot pebbles. The chef then produced a very untraditional propane torch, burned off all the fur and crisped the skin, and the end result, looking like some sinister black modernist sculpture, was carried on a litter into his yurt, where they poured out several litres of goat soup and ate the grey, unappetising meat.

Puzzled by the complication of boodogging I was hardly surprised it's become rare - but then a lightbulb popped on in my head. This wasn’t to do with gastronomy but with energetics. Mongolian steppe soil is only a few inches deep, supporting grass to feed goats and yaks but no trees, hence the tiny fire and hot stones to maximise storage of its heat, and the anachronistic propane torch (which could hardly have roasted the goat). Hot stone cooking is common enough around the Pacific but always in a covered pit, which is impossible to dig in the steppe. These Mongolians had ingeniously adapted to the severe energetic limitations of their environment, sensibly submitting to the Second Law of Thermodynamics by making maximum use of every calorie.

Having just written a column about quantum computing, I found that this little anthropological lesson sparked a most unexpected connection. We all live in a world ruled by the Second Law, and are ourselves, looked at from one viewpoint, simply heat engines. Our planet is roughly in thermal equilibrium, receiving energy as white light and UV from the sun and reradiating it back out into space in the infrared: our current climate panic shows just how delicate this equilibrium is. Incoming sunlight has lower entropy than the outgoing infrared, and on this difference all life depends: plants exploit the entropy gradient to build complex carbohydrates out of CO₂ and water, animals eat plants to further exploit the difference by building proteins, we eat animals and plants and use the difference to make art and science, movies and space shuttles. And when we die the Second Law cheerfully turns us back into CO₂ and ammonia (sometimes accelerated by a crematorium).   

The difficulty of building quantum computers arises from this fact, that quantum computation takes place in a different world that’s governed by the notoriously different rules of quantum mechanics. The tyranny of the Second Law continually tries to disrupt it, in the guise of thermal noise, because any actual quantum computing device must unavoidably be of our world, made of steel and copper and glass, and all the liquid helium in the world can’t entirely hide that fact. 

What also occurred to me is that if you work with quantum systems, it must become terribly attractive to fantasise about living in the quantum world, free from the tyranny of thermodynamics. Is that perhaps why the Multiverse interpretation of quantum mechanics is so unaccountably popular? Indeed, to go much further, is an unconscious awareness of this dichotomy actually rather ancient? People have always chafed at the restrictions imposed by gravity and thermodynamics, have invented imaginary worlds in which they could fly, or live forever, or grow new limbs, or shape-shift, or travel instantaneously, or become invisible at will. Magic, religion, science fiction, in a sense are all reactions against our physical limits that exist because of scale: we’re made of matter that’s made of atoms that obey Pauli’s Exclusion Principle, which prevents us from walking through walls or actually creating rabbits out of hats. Those atoms are themselves made from particles subject to different, looser rules, but we’re stuck up here, only capable of imagining such freedom. 

And that perhaps is why, alongside their impressively pragmatic adaptability, those Mongolian nomads - who move their flocks with the seasons as they’ve done for centuries, but send their children to university in Ulan Bator and enthusiastically adopt mobile phones and wi-fi - also retain an animistic, shamanistic religion with a belief in guardian spirits. 



     



A QUANTUM OF SOLACE?

 Dick Pountain/ Idealog304/ 3rd Nov 2019

When Google announced, on Oct 24th, that it has achieved 'quantum supremacy' -- that is, has performed a calculation on a quantum computer faster than any conventional computer could ever do -- I was forcefully reminded that quantum computing is a subject I've been avoiding in this column for 25 years. That prompted a further realisation that it's because I'm sceptical of the claims that have been made. I should hasten to add that I'm not sceptical about quantum mechanics per se (though I do veer closer to Einstein than to Bohr, am more impressed by Carver Mead's Collective Electrodynamics  than by Copenhagen, and find 'many worlds' frankly ludicrous). Nor am I sceptical of the theory of quantum computation itself, though the last time I wrote about it was in Byte in 1997.  No, what I'm sceptical of are the pragmatic engineering prospects for its timely implementation. 

The last 60 years saw our world transformed by a new industrial revolution in electronics, gifting us the internet, the smartphone, Google searches and Wikipedia, Alexa and Oyster cards. The pace of that revolution was never uniform but accelerated to a fantastic extent from the early 1960s thanks to the invention of CMOS, the Complementary Metal-Oxide-Semiconductor fabrication process. CMOS had a property shared by few other technologies, namely that it became much, much cheaper and faster the smaller you made it, resulting in 'Moore's Law', that doubling of power and halving of cost every two years that's only now showing any sign of levelling off.  That's how you got a smartphone as powerful as a '90s supercomputer in your pocket.  CMOS is a solid-state process where electrons whizz around metal tracks deposited on treated silicon, which makes it amenable to easy duplication by what amounts to a form of printing. 

You'll have seen pictures of Google's Sycamore quantum computer that may have achieved 'supremacy' (though IBM is disputing it). It looks more like a microbrewery than a computer. Its 53 working quantum bits are indeed solid state, but they're superconductors that work at microwave frequencies and near absolute zero, immersed in liquid helium. The quantum superpositions upon which computation depends collapse at higher temperatures and in the presence of radio noise, and there's no prospect that such an implementation could ever achieve the benign scaling properties of CMOS. Admittedly a single qubit can in theory do the work of millions of CMOS bits, but the algorithms that need to be devised to exploit that advantage are non-intuitive and opaque, and the results of computation are difficult to extract correctly and will require novel error-correction techniques that are as yet unknown and may not exist. It's not years but decades, or more, from practicality.

Given this enormous difficulty, why is so much investment going into quantum computing right now? Thanks to two classes of problem that are believed to be intractable on conventional computers, but of great interest to extremely wealthy sponsors. The first is the cracking of public-key encryption, a high priority for the world's intelligence agencies, which therefore receives defence funds. The second is the protein-folding problem in biochemistry. Chains of hundreds of amino acids that constitute enzymes can fold and link to themselves in myriad different ways, only one of which will produce the proper behaviour of that enzyme, and that behaviour is the target for synthetic drugs. Big Pharma would love a quantum computer that could simulate such folding in real time, like a CAD/CAM system for designing monoclonal antibodies. 

What worries me is that the hype surrounding quantum computing is of just the sort that's guaranteed to bewitch technologically illiterate politicians, and it may be resulting in poor allocation of computer science funding. The protein-folding problem is an extreme example of the class of optimisation problems -- others are involved in banking, transport routing, storage allocation, product pricing and so on -- all of which are of enormous commercial importance and have been subject to much research effort. For example, twenty years ago constraint solving was one very promising line of study: when faced with an intractably large number of possibilities, you apply and propagate constraints to severely prune the tree of possibilities rather than trying to traverse it all. The promise of quantum computers is precisely that, assuming you could assemble enough qubits, they could indeed just test all the branches, thanks to superposition. In recent years the flow of constraint satisfaction papers seems to have dwindled: is this because the field has struck an actual impasse, or because the chimera of imminent quantum computers is diverting effort? Perhaps a hybrid approach to these sorts of problem might be more productive: say hardware assistance for constraint solving, plus deep learning, plus analog architectures, with shared quantum servers anticipated as one, fairly distant, prospect rather than the only bet.    
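For anyone who hasn't met constraint propagation, here is a tiny illustration in Python (my own toy example, a four-region map-colouring problem, not anything from the research literature): each time a value is chosen it's removed from the domains of unassigned neighbours, so whole branches of the search tree are never visited.

def solve(variables, domains, neighbours, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in list(domains[var]):
        pruned, ok = [], True
        for n in neighbours[var]:
            if n in assignment:
                if assignment[n] == value:      # conflicts with a committed choice
                    ok = False
                    break
            elif value in domains[n]:
                domains[n].remove(value)        # propagate: prune the neighbour's domain
                pruned.append(n)
                if not domains[n]:              # a neighbour has no options left
                    ok = False
                    break
        if ok:
            result = solve(variables, domains, neighbours, {**assignment, var: value})
            if result:
                return result
        for n in pruned:                        # undo the pruning before backtracking
            domains[n].add(value)
    return None

regions = ["A", "B", "C", "D"]
domains = {r: {"red", "green", "blue"} for r in regions}
adjacent = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "D"], "D": ["B", "C"]}
print(solve(regions, domains, adjacent))        # e.g. {'A': 'red', 'B': 'green', ...}

Scale the same idea up to thousands of variables and you can see both why it looked promising and why it's hard: the pruning is only as good as the constraints you can state.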

