Saturday 23 July 2022

MORE MEMORABLES

Dick Pountain /Idealog 329/ 04 Dec 2021 02:59

Last year I wrote here about three technical breakthroughs that were announced during November 2020 – an mRNA vaccine against Covid-19, Alphafold 2 solving the ‘protein folding problem’, and Apple’s M1 CPU. I’d like to repeat this exercise for November 2021, and though my picks for this year’s breakthroughs are equally profound – one in neuroscience, one in space exploration and one in quantum computing – they’re perhaps a little more difficult to explain.

The neuroscience discovery is simple enough to state: the human neuron appears to be several orders of magnitude more computationally complex than the silicon ones used to emulate it. Given the number of connections each neuron makes, this implies the whole brain is vastly more complex than any current AI. A team at the Hebrew University of Jerusalem trained an artificial deep neural network to mimic the computations of a simulated biological neuron, and the result suggests that the AI net requires between five and eight ‘hidden layers’ to represent the complexity of a single biological neuron.

Deep-learning networks have created the current explosion in AI abilities like driverless cars, Alexa, Google Translate, Alphafold and so on. They all employ several layers – each containing a large number of silicon ‘neurons’ – between their input and output layers, which when trained on millions of examples configure themselves into a ‘black box’ that maps inputs onto outputs. How many hidden layers are appropriate remains a keenly researched question: more isn’t always better, and the right number tends to be in single figures. Now it turns out that the classificatory power of a single human neuron is equivalent to a many-layered AI net. Our brain runs these mostly on coffee and chocolate digestives, while training a large deep-learning AI net consumes so much electrical power it’s becoming a threat to the climate.
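
For readers who like to see such things concretely, here’s a toy sketch in Python/PyTorch of a network with seven hidden layers. The layer widths and sizes are mine, purely for illustration – the Jerusalem team’s actual model was considerably more elaborate.

import torch
import torch.nn as nn

# Build a stack of seven hidden layers, each of 128 silicon 'neurons',
# between a 1000-wide input layer and a single output.
in_size, hidden, out_size, n_hidden = 1000, 128, 1, 7
layers = []
for i in range(n_hidden):
    layers += [nn.Linear(in_size if i == 0 else hidden, hidden), nn.ReLU()]
layers.append(nn.Linear(hidden, out_size))
net = nn.Sequential(*layers)

# Feed a batch of 32 made-up input patterns through the untrained net
x = torch.randn(32, in_size)
print(net(x).shape)   # torch.Size([32, 1])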

At the other end of the distance scale, we’ve all thrilled to those colour photos of galaxies and nebulae captured by the Hubble Telescope. It’s so sensitive that it can collect light from far-off objects which set out not long after the Big Bang and has been travelling for billions of years since. Not close enough to the Big Bang, however, to answer some of the most important questions in cosmology: thanks to an expanding universe the very oldest objects get red-shifted into the infrared, while the Hubble works with visible light. As I write we’re just two weeks away from the launch of the James Webb Space Telescope, which works in the IR and, if successful, will push that vision back far enough to perhaps rewrite our story about the origins of the universe.

I say ‘if’ because the Webb incorporates the most ambitious space engineering ever. A house-sized mirror, made from hexagonal tiles of toxic beryllium, was polished to the greatest smoothness ever achieved, then folded up to fit into the rocket. In space it has to unfold correctly and each tile must be constantly realigned by computers. And it needs to work at close to absolute zero, so it must travel to the second Lagrange point, where the combined pull of earth and sun keeps it orbiting in step with the earth, then unfurl a vast, flimsy sunshade made of five layers of metallised film. Superb engineering, but the best of luck is still required...

Richard Feynman once prophesied that one use for quantum computers would be simulating reality at the quantum level, and in this year’s breakthrough a quantum simulator has solved a real physics problem. A ‘quantum spin liquid’ is a novel state of matter in which a magnetisable substance can’t take on a fixed polarity, because its unusual threefold crystal symmetry frustrates the pairing-up of its spins. There’s a mineral, marvellously named Herbertsmithite, which may exhibit this state, giving it interesting and perhaps useful optical and magnetic properties, but directly studying quantum entanglement inside a sample is impossible.

Simulating its lattice on a conventional computer takes months, but teams at Harvard and MIT recently simulated it in reasonable time using an unfashionable kind of quantum computer that employs neutral atoms rather than the superconducting circuits used by Google and IBM. This isn’t a general-purpose computer: it builds a lattice of neutral atoms with the right symmetry, using laser beams both to place the atoms and to switch their so-called Rydberg states so that they act as qubits. Neutral-atom qubits are longer-lived and more robust against environmental decoherence than superconducting ones, and this success may refocus attention on the technology.

This last breakthrough tickles me for a different, less exalted reason though: “Herbertsmithite, named after the mineralogist Herbert Smith (1872–1953), is generally found in and around Anarak, Iran, hence its other name, anarakite”. We all know that kryptonite can counteract the powers of Superman, so perhaps anarakite might have a similar protective property against online computer super-nerds.

[Dick Pountain would like an anarakite amulet for Christmas]

MOLTEN METAPHOR

Dick Pountain /Idealog 328/ 05 Nov 2021 09:03

The little-known corollary to Moore’s Law is that computer columnists have to mention it in a column at least once a decade, and I am no exception. The miracle of digital search tells me that I’ve covered it here in 1996, 2004, 2005, 2009 and 2019 (the numerate among you will deduce that it’s a pretty rough sort of law). That first date is significant because it was in PC Pro issue 18, two years before the most popular piece I ever wrote for Byte, ‘The Limits Of Computing’ in 1998, about how and when Moore’s Law would end. In that piece I talked about the difficulty of getting down to 0.1 micron feature sizes in silicon CPUs: it would require short-wavelength UV light from exotic krypton fluoride lasers to illuminate the masks whose reduced images create the circuit patterns via the lithographic fabrication process. Getting much smaller would require X-rays, which are intractable for many reasons.

Skip forward a quarter of a century and Moore’s Law is still at work: Intel’s and Apple’s latest chip families are pushing feature sizes down toward 10 nanometres (0.01 microns in old money) and below. They do that by using EUV (Extreme UV) illumination, the last stop before those dreaded X-rays, and this is very, very difficult indeed to generate. I found myself getting interested in fab tech again, and decided to read up on EUV.
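
For anyone curious why a shorter wavelength is needed at all, the usual back-of-envelope estimate is the Rayleigh criterion; the numerical values below are representative textbook figures, not ones from ASML.

% Smallest printable feature (CD) as a function of wavelength and optics
\[
  \mathrm{CD} \;=\; k_{1}\,\frac{\lambda}{\mathrm{NA}}
\]
\[
  \text{deep UV: } 0.35 \times \frac{193\,\mathrm{nm}}{1.35} \approx 50\,\mathrm{nm},
  \qquad
  \text{EUV: } 0.35 \times \frac{13.5\,\mathrm{nm}}{0.33} \approx 14\,\mathrm{nm}
\]

Here λ is the wavelength, NA the numerical aperture of the optics and k₁ a process-dependent fudge factor of roughly 0.3 to 0.4 – which is why dropping from 193nm light to 13.5nm EUV matters so much.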

There’s only one firm in the world that makes the EUV machines for this chip generation. It’s a Dutch firm called ASML (nothing at all to do with ASMR) and you can have one of their machines for somewhere around $180,000,000, depending on what bells and whistles you need. You’ll need to rent a new garage, around the size of a football field, and a couple of cranes to house it, and your electricity bill may rise. The reason for this price and size is that 13.5nm EUV radiation is totally absorbed by both glass and air, so everything has to be done in the highest possible vacuum and with mirrors rather than lenses. The mirrors, by Zeiss, are the flattest ever made, flatter even than the ones in space telescopes. A whole array of vacuum pumps employs little rotors spinning at 30,000rpm that bat air molecules away one at a time like ping-pong balls. Oh, and you can’t use any old incandescent lamp to generate these rays. In fact the only way they found to generate 13.5nm EUV is by blasting a tiny drop of molten tin with two hits from a powerful laser, microseconds apart, which explodes it into a tiny ball of glowing plasma. Over and over again, like a sort of hot, tinny ink-jet printer. Phew.

I’ve always been fascinated by molten metal. I love the stuff, perhaps because my father worked in the steel industry. As a small boy I watched Bessemer Converters blowing, the best firework display in the world, from just far enough away that I didn’t catch fire (health-and-safety was loose back then). At home I melted down all my lead soldiers in a pan on the gas stove and poured them into two small alloy jelly moulds in the shape of tortoises, which became our doorstops for years (I did spill a bit on the kitchen floor, which remained a burned-in silver splatter for years too).

My favourite movie was ‘The Hunchback of Notre Dame’, in which Charles Laughton as Quasimodo stands on the roof of a Gothic cathedral chanting “Molten Metal! Molten Metal!” while he pours boiling lead onto the soldiers below. Studying chemistry in London I was frequently faced with molten metal, often sodium, not always on purpose. When later I became a magazine publisher I used to visit marvellous old printing plants in the East End that still employed Linotype machines, in which molten lead from a small vat gets cast into lines of type.

You get the picture. I just find the image of some sort of expensive ink-jet printer that fires drops of molten tin to be blasted into plasma very, very powerful indeed. It’s a bit of a cliché to compare the silicon chip industry to the building of Gothic cathedrals, in both effort and ambition, but ASML’s EUV machine makes it a very tempting metaphor. It’s certainly as costly, and getting on for a similar-sized floor-plan. Instead I’ll deflate the rhetorical atmosphere slightly with a humbler metaphor: predicting the end of Moore’s Law is very like making a long car journey with a small child who keeps asking “Are we there yet? Are we there yet?”. And I can now tell you that the answer is a resounding “Not yet!... Soon!”.

Sunday 1 May 2022

A TRIP DOWN MEMORY STICK LANE


Dick Pountain /Idealog 327/ 04 Oct 2021 01:24


Sunday was a beautiful sunny October day so I took a walk on Hampstead Heath. The sky was covered in small white clouds, like a Simpsons intro, so I took a picture of it on my Moto g8 Power Lite phone, then set off through the thick woods between the Viaduct Pond and Kenwood. The dense undergrowth put me in mind of a favourite piece of music, so I fired up Spotify and my Bluetooth headphones and listened to Janáček’s ‘On An Overgrown Path’ (what, ‘metropolitan elitist’, moi?). On my way back, near Highgate Pond, I glimpsed a movement in the grass and found a pair of bright red crayfish disporting themselves on dry(ish) land. My acid-head days are far in the past, and a passing dog-walker confirmed that they were real, so I took another picture.

On the 88 bus going home I Googled ‘heath crayfish’, to discover that they are another invasive species from the USA, the Red Swamp Crayfish, and that they live in holes in damp ground. When I got home I looked at Google Photos on my Chromebook and the pictures were already there: I downloaded them, fettled them a little in Snapseed and posted them to Facebook. The whole operation, from Heath to FB, had taken half-a-dozen button taps and involved no cables at all.

How very different from a decade ago, when I was writing columns moaning about having to carry a pocket camera, a mobile phone and a PDA every day, and dreaming of a day when one device would do it all.

That thought sent me off to the drawer where my old pocket devices go to die, from which I pulled out all my Palm and Sony PDAs (leaving the Psion Organisers, 3s and 5s, in there). There were two Palm Pilots: an original 5000 – given to me by the company at CeBIT 1996 when they first came out – and a cheapo m100 that I used for years after giving away my Palm V. There was also the extraordinary folding portable keyboard. In a fit of nostalgia I popped in a couple of rechargeable AAAs and both sprang back into life after 20 years of disuse.

I say life, but that’s an exaggeration: their sync cradles were nowhere to be seen, probably buried deep in my cable midden, and I no longer have anything on which to run the Palm Desktop software or drivers, which won’t run on ChromeOS, Android or 64-bit Windows. And both were wholly empty of data, having no non-volatile internal memory. The cute little Sony Clié SL10 did have a slot for a Memory Stick, but that was empty too, as I’d transferred its card to the Clié TJ27 that was later stolen from me.

I searched online for signs of a revival cult of the sort you can find for, say, the Sinclair Spectrum or Commodore 64, but there was absolutely nothing. That’s probably because the considerable success that Palm OS enjoyed for barely a decade was based, like Apple’s, on a ‘walled garden’ of proprietary hardware and matching software – writing drivers for today’s hardware, and working around the lack of wireless protocols, would make reviving it a labour of lunacy rather than love. Unlike Apple, Palm never quite managed to breach the garden wall and run free onto the internet. It was very late in integrating a camera and later still, fatally late, in integrating a mobile phone. I still have my Palm Treo 680, which I used happily for several years and which does still work, but the sheer weight of Android eventually attracted me away, like everyone else.

This nostalgia fest has made me realise how much I’ve forgotten about the Palm and Sony PDAs, but I swotted it all up from my online collection of Idealog back-issues on Google’s Blogger, by searching for ‘Clie’ and ‘Palm’, which pulled up a dozen columns with titles like ‘On The Road’, ‘In The Pocket’, ‘Twilight Of The PDA’ and ‘Mobile Mayhem’. They reminded me of the adventurous stuff I did on those primitive devices, like backing up my laptop to the Clié’s Memory Stick or writing whole columns in Documents To Go on a Pilot.

All gone, up into the cloud, replaced by a new phone and Google’s marvellous ecosystem (completely defanged of techie nonsense and so easy that any child can use it). But it’s also a rather brutal reminder of the extent to which Google’s ecosystem is now a support for, and extension of, my own gently dwindling memory powers. Incidentally, I started this column with that mention of the Simpsons who, whenever they construct a cheap episode from repeated excerpts, offer a tongue-in-cheek apology for doing ‘a clip show’. Well, I’m not going to, so there...

[You can admire Dick Pountain’s clouds and crawdads on Flickr at https://www.flickr.com/photos/dick_pountain/ ]

Thursday 3 March 2022

STIRLING WORK

Dick Pountain /Idealog 326/ 06 Sep 2021 11:11


If there’s an underlying theme to this column (which may be doubted) then it’s the difference between the physical and the digital worlds, summed up in an aphorism I’ve employed far too many times: “You can order a pizza online but you can’t eat it online”. I’ve been living in this gap between worlds for 40 years now: my first toe in the digital water was via a Commodore PET in 1981, at the very start of the personal computer revolution, though it wasn’t until the coming of the WWW that we all got properly connected together.

Of course I was born into the physical world, and inhabited it with increasing curiosity throughout a childhood filled with Meccano (I built the travelling gantry!) and model aeroplanes with glow-plug engines (I was in a control-line combat team). At school I excelled in science and my college lab days were pretty physical: hot, smelly and toxic in organic chemistry; warm, wet, salty and mildly radioactive in biochemistry. I dropped out of science and stumbled into the digital world by accident when Dennis Publishing acquired Personal Computer World magazine, merely because I was the most numerate person present, though my previous computing experience had been handing a sheaf of printout to a man in a brown lab-coat and getting the results back on Tuesday.

The physical world is filled with palpable, even edible, objects made of atoms and molecules that whizz around subject to rules we were taught in physics and chemistry, while the digital world consists of bits with which we construct representations of those physical objects, to calculate and simulate their relationships. More precisely, bits live between the physical world and yet another, the quantum world, which follows different, weirder rules that allow ‘entanglement’ to eradicate distance and separation. Bits are electrical charges stored in silicon capacitors or similar, and at today’s tiny feature sizes they begin to feel that weirdness. My scepticism toward the current hype over quantum computers grows from a suspicion that many quantum enthusiasts believe ‘quantum’ is going to free us from the confines of boring old physicality, which it isn’t.

We in the physical world are ruled by the laws of thermodynamics. For me the most readable account of those laws is Professor Peter Atkins’s 1984 book The Second Law, in which he explains that many things in the world are engines that take in energy, turn some (never all) of it into work, and expel the ‘waste’ as heat. For example, we take in glucose and run around thinking, while chips steer electrons to perform Boolean operations.

I was first introduced to Atkins’s book by my brother-in-law Pip Hills, who like me was fascinated from childhood by motors and machines – we once drove to Prague together in his 1937 Lagonda, fitted with a large Gardner diesel engine (http://www.dickpountain.co.uk/home/journalism/the-classic-motoring-review). Pip studied philosophy rather than chemistry, but we share a faith in thermodynamics as a way to understand the limits of the physical world. A few years ago, while visiting us in London, he bought a rather fine old working model of an odd-looking engine on Portobello Road market: we took it home, plonked it on a lit gas ring and it just started running.

Once back home in Scotland Pip began to study the history of this unusual type of heat engine, becoming so intrigued that he’s written his own book about it, called The Star Drive (Birlinn 2021). He explains how the engine was invented by a Scottish parson, Robert Stirling, in 1816, and how it runs by expanding hot air rather than steam. It differs from internal-combustion (petrol or diesel) engines in using an external heat source. Simpler and with fewer moving parts than steam engines, Stirling engines nevertheless lost out during the industrial revolution, because the high temperatures they worked at broke the available materials, like cast iron and leather, and because their speed isn’t easy to control. But more recently their ability to operate completely sealed from the outside world, except for a heat source, has opened up several interesting niche applications.
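
For the record, the ceiling on any such heat engine’s efficiency – Stirling, steam or petrol alike – comes straight from Atkins’s Second Law and depends only on the hot and cold temperatures. The numbers below are merely illustrative, not figures from Pip’s book.

% Carnot limit on heat-engine efficiency, with an illustrative pair of temperatures
\[
  \eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}, \qquad
  T_{\text{hot}} = 900\,\mathrm{K},\; T_{\text{cold}} = 300\,\mathrm{K}
  \;\Rightarrow\; \eta_{\max} = 1 - \tfrac{300}{900} \approx 67\%
\]

An ideal Stirling cycle with a perfect regenerator can in principle reach this Carnot limit; real ones, of course, fall well short of it.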

The Swedish navy operates small non-nuclear submarines powered by Stirling engines burning liquid oxygen and diesel fuel catalytically to charge batteries for long, silent underwater periods. One of these humiliated the US navy’s aircraft carrier USS Ronald Reagan during 2005 war games in San Diego, by penetrating its sophisticated defences to ‘paintball’ it. NASA employs closed-cycle Stirling engines heated by small nuclear reactors to generate electricity in spacecraft travelling to the outer solar system where they must operate for years without sunlight, lubrication or maintenance. And miniature Stirling engines driven backward by external electromagnets make highly efficient heat pumps for cryogenic cooling, perhaps to keep some future quantum computer within its own chilly world.    

[Dick Pountain nowadays expels most waste heat via the top of his head]

SPEECHLESS

Dick Pountain /Idealog 325/ 05 Aug 2021 02:1


I was casually watching a Hawaiian volcano erupt on YouTube, as you do, when I felt something slightly creepy in the narration that I couldn’t quite identify. It sounded like an adult American male, but with something very subtly wrong about its rhythm. I started noticing the same in other US videos, and posted on Facebook to ask whether anyone else thought synthetic digital voices were being used: the consensus was probably not. Then last month the MIT Technology Review published an article about AI voice actors (www.technologyreview.com/2021/07/09/1028140/ai-voice-actors-sound-human ) which said that “deepfake voices had something of a lousy reputation for their use in scam calls and internet trickery. But their improving quality has since piqued the interest of a growing number of companies. Recent breakthroughs in deep learning have made it possible to replicate many of the subtleties of human speech.” It’s now possible to sample the voice of a human actor, or of someone in your firm, then have a company build and rent to you a synthesiser that speaks your PR materials so well as to be undetectable.

I’ve always had an inexplicable interest in voice synthesis. Most people nowadays regard computers as visual devices, but to me making them speak is just as interesting as drawing pictures on them. The first halfway decent text-to-speech (TTS) program I got was back in Windows 3.1 days – called Monologue, it came bundled with my first Soundblaster card. Monologue had a raw, Stephen Hawking-like delivery, but it did support a simple syntax for marking up texts to add some degree of expression, and I amused myself getting it to read poetry, including this poem (https://soundcloud.com/dick-pountain/the-primal-proof ) that Felix Dennis had dedicated to me. Over the next few years I kept in touch with the state of the text-to-speech and voice-recognition art, particularly via the ground-breaking work of the Belgian researchers Lernout and Hauspie, whom I mentioned in this column in 1999. During the 13 years I spent living part-time in Italy I keenly followed the progress being made by Google with its voice and translation engines, and by the twenty-teens it had become possible for me to use an Android phone like Star Trek’s universal translator when I needed to extend my feeble vocabulary: I’d type what I wanted to say into Google Translate, have it spoken to me in Italian and practise it before going into, say, a police station or hardware store. I didn’t quite have the nerve to hold up the phone to speak for me...

By this time the field was splitting between cloud-based and local ‘edge’-based software: cloud voice services were becoming convincing enough to be used in those scams that MIT Tech mentioned. I’m not so much interested in those as in the cruder TTS programs that one can get for free to run on a phone or Chromebook. One in particular, called Vocality, tickled my fancy. A simple interface onto Google Speech services, it offers control over speed and pitch plus a large selection of national voices. For example it lets me create comical action-movie-villain dialogues by choosing, say, a Russian or Albanian voice and setting the pitch ridiculously low. I also discovered that by typing in strings of random characters and setting the speed high I could generate something resembling ‘mouth-music’, as in this catchy little ditty (https://soundcloud.com/dick-pountain/tuvan-gruv). Politically incorrect perhaps, but fun.

Before writing this column I checked out the current state of local TTS apps and found dozens of free ones that are massively improved: for example Balabolka, Natural Reader, Panopreter, TTSReader and Wordtalk offer good quality speech and even customisable voices. 
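
To show just how accessible the local route has become, here’s a minimal Python sketch using the pyttsx3 library – not one of the apps named above, just a free offline engine that drives whatever synthesiser the operating system already provides (SAPI5 on Windows, NSSpeechSynthesizer on macOS, eSpeak on Linux).

import pyttsx3

engine = pyttsx3.init()
engine.setProperty('rate', 150)      # speaking speed, in words per minute
engine.setProperty('volume', 0.9)    # 0.0 to 1.0

# Pick the first voice the OS offers; fancier apps let you choose accent and gender
voices = engine.getProperty('voices')
if voices:
    engine.setProperty('voice', voices[0].id)

engine.say("You can order a pizza online, but you can't eat it online.")
engine.runAndWait()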

But it’s in the cloud-based arena that things get scary. Nuance is a typical company, offering to “deliver a human‑like, engaging, and personalized user experience. Enhance any customer self‑service application with high‑quality audio tailored to your brand.” Or maybe Amazon, which offers developers the Polly API to add Alexa-like abilities to their products. For movie professionals there’s Respeecher, as used by Lucasfilm, which offers to “create speech that's indistinguishable from the original speaker. Perfect for dubbing an actor's voice in post production, bringing back the voice of an actor who passed away, and other content creators' problems.” But it’s Amai (https://amai.io/ ) who really spell it out: “Sorry, voice actors, we will replace you soon […] this text is painted with the Love emotion. You can highlight any text, choose any emotion and listen to how it sounds, for example this phrase is pronounced with the Happiness.” Go to their site to hear the perky result.
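
And here, for contrast, is roughly what the cloud route looks like through Amazon’s Polly API via the boto3 SDK. The voice name is just one of Polly’s stock voices and the AWS credentials are assumed to be already configured; a custom brand voice of the Nuance kind is a far bigger undertaking.

import boto3

# Create a Polly client (assumes AWS credentials are set up on this machine)
polly = boto3.client('polly', region_name='eu-west-1')

response = polly.synthesize_speech(
    Text="Sorry, voice actors, we will replace you soon.",
    VoiceId='Amy',            # one of Polly's stock British English voices
    OutputFormat='mp3',
    Engine='neural',          # the newer, more natural-sounding engine
)

# Save the returned audio stream to a local file
with open('speech.mp3', 'wb') as f:
    f.write(response['AudioStream'].read())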

CHIPS WITH THAT?

Dick Pountain /Idealog 324/ 05 Jul 2021 09:56


During lockdown I’ve bought three new electronic gadgets: a new mobile phone, a new tablet and a new multi-effect guitar pedal. This wasn’t out of boredom but because the old ones had become unusably slow (phone), noisy (pedal) or had packed up altogether (tablet). I wonder how many VLSI chips that means I bought? I’d guess at least twenty, what with phone and tablet SoCs, signal processors in the pedal and heaps of memory. Most importantly, none of these were premium-priced items, all costing below £150, which means they likely contain not the latest chips but rather the cheapest, fabricated using older processes. And there you have one cause of the drastic shortage of chips that the whole world is currently experiencing.

Moore’s Law is turning around to bite us on the arse: the cost of building fabs for new processes, like the 5-nanometre one used by Apple’s M1 CPU, keeps rising exponentially, meaning that chips cost more and must be sold in higher-end kit to repay the investment. Taiwan’s TSMC, the world’s largest contract chip fabricator, earned over half of its 2020 revenues from top-end chips with feature sizes below 16nm, including the M1. At the same time putting chips into everything – cars, washing machines (pencil sharpeners soon?) – means demand for the very cheapest chips has exploded. These cheaper chips, made on older fabs, now sell for so little that there’s almost no margin left in making them, hence no one builds fabs for the old processes any longer and demand is massively outstripping supply. Market forces are biting us on the other cheek.

(As an aside, this pattern is not at all unfamiliar to economists. Think of the airline industry. Concorde was state-of-the-art, able to whisk 100 passengers to New York in three hours for several thousand pounds a head. It was ultimately killed off not by its purchase cost, noise regulations or US regulatory machinations but by Boeing’s 747 Jumbo, which carried four times as many passengers at less than half the speed for a tenth of the fare.)

The economics of new fab isn’t the only cause of the shortage: ‘Acts of God’ like fires destroying fabs and the COVID-19 pandemic take their toll too. And the shortage doesn’t look like slackening any time soon, because no one is building new fab for old processes yet. The MIT Technology Review reports that: “Automakers have been shutting down assembly lines and laying off workers because they can’t get enough $1 chips. Manufacturers have resorted to building vehicles without the chips necessary for navigation systems, digital rear-view mirrors, display touch screens, and fuel management systems. Overall, the global automotive industry could lose more than $110 billion to the shortage in 2021”.

China, though it lags behind the US design edge, possesses quite a lot of old fab, which might have interesting geopolitical consequences – President Biden is already getting antsy, signing executive orders to approve a $50 billion boost for strategic semiconductor manufacturing, research and supply-chain protection. That won’t relieve the shortage in the short or medium term, given the time it takes to build fab and the fact that the US has exported almost all its capability to the Far East.

A couple of weeks ago at the CogX 2021 AI conference in King’s Cross I heard Nvidia CEO Jensen Huang talk about his company’s takeover of ARM. I was appalled when the UK government allowed the firm to be sold abroad, first to SoftBank and now to Nvidia, but Huang explained their commitment to working with the EU (irony alert) to build a state-of-the-art supercomputer called Destination Earth for climate simulation, and another called Cambridge-1 in the UK. Far from wanting to move ARM out of Cambridge, he wants to invest and expand it there. Wearing my cynic's hat I might have thought “he would say that, wouldn't he”; wearing my historian's hat I might have thought “Frank Whittle and the jet engine all over again”; but wearing my realist's hat I actually thought “ARM’s probably safer with this guy than the clueless shower currently running the country”. ARM decided decades ago that fabrication was a mug’s game, preferring to license the IP of its low-power cores that drive many, if not most, of the cheap chips that run mobile phones and IoT smart devices.

Where does this all leave Intel, I hear you ask? Are its days as a mass-market CPU vendor numbered? Will it need to slog it out with Nvidia and AMD at the supercomputing end, using its Rocket Lake and Ice Lake CPUs and its GPUs? Climate-change simulation could be the last happy hunting ground for fat, government-assisted margins.

[Dick Pountain pretends to have known that gadget prices would soon be rising]

Q’S GADGETS


Dick Pountain /Idealog 323/ 06 Jun 2021 09:10

Random House Business recently sent me a copy of WIRED magazine’s little guide book ‘Quantum Computing: how it works and why it could change the world’ by Amit Katwala, and coming from such a source I felt it deserved a mention here. The good news is that it’s an excellent introduction to the current state of, and prospects for, quantum computing – not too technical, wasting little space on the basics everyone now knows (superposition, entanglement, decoherence) while avoiding the hieroglyphics of quantum algorithms. At barely 6,000 words it’s very short, and that’s because it’s surprisingly honest about the fact that quantum computers barely exist today and that their prospects remain rather dim.

In chapter 2 Katwala offers a whirlwind summary of the three major current research directions – laser ion-traps (Amazon/IonQ), cryogenic Josephson junctions (Google and IBM) and ‘topological qubits’ (Microsoft) – which he explains in a clear, readable style. As he goes he continually points to their weaknesses: ion-traps need too many lasers to be scalable; cooling to 0.01 K requires monstrous cryostats that consume lots of energy; and Microsoft’s trick for dodging decoherence depends on the existence of a still-undiscovered fundamental particle! (That’s rather hard on Microsoft, since a recently discovered cerium/ruthenium/tin ‘Weyl-Kondo’ semi-metal alloy might render this phantom particle unnecessary.)

Katwala is admirably candid about the problem that all three strands of research share: environmental noise causes qubits to decohere rapidly, leaving barely microseconds in which to perform useful computation. This decoherence also renders the results unreliable, so an enormous degree of error-correction is required: each working qubit must be surrounded by dozens of error-correction qubits (and whenever an error is detected those error qubits must themselves be error-corrected). Google’s Sycamore chip, which is claimed to have achieved ‘quantum supremacy’, contains just 53 physical qubits – far too few to spare any for serious error-correction. This is why the physicist John Preskill, one of the leading researchers, dubbed this the NISQ (noisy intermediate-scale quantum) era, in which quantum computers exist but aren’t yet robust enough to fulfil their promise. Harsher critics suspect that the 2nd Law of Thermodynamics may be gnawing away at the whole enterprise, making quantum computing inherently unfeasible.
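
For a sense of the overhead involved – a textbook estimate for the much-studied ‘surface code’, not a figure from Katwala’s book – a single error-corrected logical qubit of code distance d needs roughly

\[
  N_{\text{physical}} \approx 2d^{2} - 1, \qquad
  d = 11 \;\Rightarrow\; 2 \times 121 - 1 = 241
\]

physical qubits, which is why a 53-qubit chip can’t yet protect even one logical qubit properly.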

That hardly matters though, because the quantum computing bandwagon has become unstoppable now that politicians and soldiers are involved. The strong promise of quantum computing, namely an exponential speed increase over classical computers, threatens to make public-key encryption – as used by the military, the banks, even WhatsApp – crackable. This makes it a matter of national security, unlocking unlimited funding and starting a new Cold War-style arms race between China and the West. However Katwala is equally candid that this strong promise is itself quite dubious: the problem classes for which quantum algorithms are known to give exponential speed-up are surprisingly few. It turns out the best quantum algorithms known for commercially important optimisation problems like the Travelling Salesman offer only a quadratic, not exponential, advantage over classical computers. Which isn’t peanuts – reducing a one-million-step calculation to a thousand steps may be the difference between overnight and almost real-time, which City traders would happily pay for – but it won’t satisfy the crypto crowd nor justify such huge research budgets.
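
To put that quadratic advantage in numbers (a generic illustration, not a worked example from the book): Grover-style search replaces roughly N classical steps with roughly the square root of N quantum ones,

\[
  O(N) \;\longrightarrow\; O(\sqrt{N}), \qquad
  N = 1\,000\,000 \;\Rightarrow\; \sqrt{N} = 1\,000
\]

whereas a true exponential speed-up, of the Shor-factoring kind, would shrink the exponent itself.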

I’d certainly recommend Katwala’s book as a quick read to bring you up to speed with mainstream quantum thinking, but it doesn’t cover any radically different ‘long-shot’ directions. I’m personally convinced that if quantum computing happens it will only be through room-temperature, solid-state technologies that are barely here yet, and I’m also an enthusiast for neuromorphic computing architectures that mimic the nervous systems of animals, using electronic components that might employ hybrid digital-and-analogue computations.

Neuromorphic engineering was first pursued by one of my heroes, Carver Mead, in the late 1980s, for designing vision systems, auditory processors and autonomous robots. The convolutional neural networks driving the recent explosion in commercial AI and machine learning are only one aspect of a far wider domain of neuromorphic computing models, and they obviously employ classical computing components. Deep-learning networks are becoming a source of concern over the colossal amount of power they consume when training on enormous data sets, so this is an area where even a ‘mere’ quadratic speed advantage would be very welcome indeed.

US researchers at Purdue have shown how ‘spintronic’ quantum measurements might be used to implement neuromorphic processors, and many groups are investigating spin switching in ‘nitrogen vacancies’ within synthetic diamond lattices – diamond-based qubits might resist decoherence for milliseconds rather than microseconds, and at room temperature. Were I writing a science-fiction screenplay, my future quantum computers would be alternating sandwiched layers (like a Tunnock’s Caramel Wafer) of epitaxially deposited diamond and twisted graphene, read and written by a flickering laser inside a dynamic, holographic magnetic field.

[Dick Pountain is well aware of the addiction risk posed by Tunnock’s Caramel Wafers]
