Thursday 3 March 2022

STIRLING WORK

Dick Pountain /Idealog 326/ 06 Sep 2021 11:11


If there’s an underlying theme to this column (which may be doubted) then it’s the difference between the physical and the digital worlds, summed up in an aphorism I’ve employed far too many times: “You can order a pizza online but you can’t eat it online”. I’ve been living in this gap between worlds for 40 years now: my first toe in the digital water was via a Commodore PET in 1981 at the very start of the personal computer revolution, though it wasn’t until the coming of the Web that we all got properly connected together. 

Of course I was born into the physical world, and inhabited it with increasing curiosity throughout a childhood filled with Meccano (I built the travelling gantry!) and model aeroplanes with glow-plug engines (I was in a control-line combat team). At school I excelled in science and my college lab days were pretty physical: hot, smelly and toxic in organic chemistry; warm, wet, salty and mildly radioactive in biochemistry. I dropped out of science and stumbled into the digital world by accident when Dennis Publishing acquired Personal Computer World magazine, merely because I was the most numerate person present, though my previous computing experience had amounted to handing a sheaf of printout to a man in a brown lab-coat and getting the results back on Tuesday.

The physical world is filled with palpable, even edible, objects made of atoms and molecules that whizz around subject to rules we were taught in physics and chemistry, while the digital world consists of bits with which we construct representations of those physical objects to calculate and simulate their relationships. More precisely, bits live between the physical and yet another world, the quantum world, which follows different, weirder rules that allow ‘entanglement’ to eradicate distance and separation. Bits are electrical charges stored in silicon capacitors or similar, and at today’s tiny feature sizes they begin to feel that weirdness. My scepticism toward the current hype over quantum computers grows from a suspicion that many quantum enthusiasts believe ‘quantum’ is going to free us from the confines of boring old physicality, which it isn’t. 

We in the physical world are ruled by the laws of thermodynamics. For me the most readable account of those laws is Professor Peter Atkins’ 1984 book The Second Law, in which he explains that many things in the world are engines that take in energy, turn some (never all) of it into work, and expel the ‘waste’ as heat. For example, we take in glucose and run around thinking, while chips steer electrons to perform Boolean operations. 
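That ‘some (never all)’ has a precise upper bound, the Carnot limit, which follows directly from the Second Law. A toy sketch in Python – the temperatures are illustrative assumptions of mine, not figures from Atkins’ book:

```python
# Illustrative sketch of the Second Law's limit on heat engines:
# no engine working between a hot reservoir at t_hot_k and a cold
# reservoir at t_cold_k (temperatures in kelvin) can convert more
# than the Carnot fraction of its heat intake into work.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of input heat convertible to work."""
    return 1.0 - t_cold_k / t_hot_k

# A model engine sitting on a gas ring: flame-heated end at roughly
# 500 K, room-temperature end at roughly 300 K (assumed figures).
eta = carnot_efficiency(500.0, 300.0)
print(f"Carnot limit: {eta:.0%}")
```

Even an ideal engine between those temperatures turns at most 40% of its heat intake into work; the rest must leave as waste heat.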

I was first introduced to Atkins’ book by my brother-in-law Pip Hills, who, like me, was fascinated from childhood by motors and machines – we once drove to Prague together in his 1937 Lagonda, fitted with a large Gardner diesel engine (http://www.dickpountain.co.uk/home/journalism/the-classic-motoring-review). Pip studied philosophy rather than chemistry, but we share that faith in thermodynamics as a way to understand the limits of the physical world. A few years ago, while visiting us in London, he bought a rather fine old working model of an odd-looking engine on Portobello Road market: we took it home, plonked it on a lit gas ring and it just started running. 

Once back home in Scotland Pip began to study the history of this unusual type of heat engine, becoming so intrigued that he’s written his own book about it, called The Star Drive (Birlinn 2021). He explains how the engine was invented by a Scottish parson, Robert Stirling, in 1816, and how it runs by expanding hot air rather than steam. It differs from internal-combustion (petrol or diesel) engines in using an external heat source. Simpler and with fewer moving parts than steam engines, Stirling engines nevertheless lost out during the industrial revolution because the high temperatures they worked at broke the available materials, like cast iron and leather, and because their speed isn’t easy to control. But more recently, their ability to operate completely sealed from the outside world, except for a heat source, has opened up several interesting niche applications. 

The Swedish navy operates small non-nuclear submarines powered by Stirling engines burning liquid oxygen and diesel fuel catalytically to charge batteries for long, silent underwater periods. One of these humiliated the US navy’s aircraft carrier USS Ronald Reagan during 2005 war games in San Diego, by penetrating its sophisticated defences to ‘paintball’ it. NASA employs closed-cycle Stirling engines heated by small nuclear reactors to generate electricity in spacecraft travelling to the outer solar system where they must operate for years without sunlight, lubrication or maintenance. And miniature Stirling engines driven backward by external electromagnets make highly efficient heat pumps for cryogenic cooling, perhaps to keep some future quantum computer within its own chilly world.    

[Dick Pountain nowadays expels most waste heat via the top of his head]



  


SPEECHLESS

Dick Pountain /Idealog 325/ 05 Aug 2021 02:1


I was casually watching a Hawaiian volcano erupt on YouTube, as you do, when I felt something slightly creepy in the narration that I couldn’t quite identify. It sounded like an adult American male, but with something very subtly wrong about its rhythm. I started noticing the same in other US videos, and posted on Facebook to ask whether anyone else thought synthetic digital voices were being used: consensus was probably not. Then last month MIT Technology Review published an article about AI voice actors (www.technologyreview.com/2021/07/09/1028140/ai-voice-actors-sound-human) which observed that deepfake voices “had something of a lousy reputation for their use in scam calls and internet trickery. But their improving quality has since piqued the interest of a growing number of companies. Recent breakthroughs in deep learning have made it possible to replicate many of the subtleties of human speech.” It’s now possible to sample the voice of a human actor, or someone in your firm, then have a company build and rent you a synthesiser that speaks your PR materials so well as to be undetectable.  

I’ve always had an inexplicable interest in voice synthesis. Most people nowadays regard computers as visual devices, but to me making them speak is just as interesting as drawing pictures on them. The first halfway decent text-to-speech (TTS) program I got was back in Windows 3.1 days – called Monologue, it came bundled with my first Soundblaster card. Monologue had a raw, Stephen Hawking-like delivery, but it did support a simple syntax for marking up texts to add some degree of expression, and I amused myself getting it to read poetry, including this poem (https://soundcloud.com/dick-pountain/the-primal-proof) that Felix Dennis had dedicated to me. Over the next few years I kept in touch with the state of the text-to-speech and voice-recognition art, particularly via the ground-breaking work of the Belgian researchers Lernout & Hauspie, whom I mentioned in this column in 1999. During the 13 years I spent living part-time in Italy I keenly followed the progress being made by Google with its voice and translation engines, and by the 2010s it was becoming possible for me to use an Android phone like Star Trek’s universal translator when I needed to extend my feeble vocabulary: I’d type what I wanted to say into Google Translate, have it spoken to me in Italian and practise it before going into, say, a police station or hardware store. I didn’t quite have the nerve to hold up the phone to speak for me...

By this time the field was splitting between cloud-based and local ‘edge’-based software: cloud voice services were becoming convincing enough to be used in those scams that MIT Tech mentioned. I’m not so much interested in those as in the cruder TTS programs that one can get for free to run on a phone or Chromebook. One in particular, called Vocality, tickled my fancy. A simple interface onto Google Speech services, it offers control over speed and pitch plus a large selection of national voices. For example it lets me create comical action-movie-villain dialogues by choosing, say, a Russian or Albanian voice and setting pitch ridiculously low. I also discovered that by typing in strings of random characters and setting speed high I could generate something resembling ‘mouth-music’, as in this catchy little ditty (https://soundcloud.com/dick-pountain/tuvan-gruv). Politically-incorrect perhaps, but fun. 

Before writing this column I checked out the current state of local TTS apps and found dozens of free ones that are massively improved: for example Balabolka, Natural Reader, Panopreter, TTSReader and Wordtalk offer good quality speech and even customisable voices. 

But it’s in the cloud-based arena that things get scary. Nuance is a typical company, offering to “deliver a human‑like, engaging, and personalized user experience. Enhance any customer self‑service application with high‑quality audio tailored to your brand.” Or maybe Amazon, which offers developers the Polly API to add Alexa-like abilities to their products. For movie professionals LucasFilm offers ReSpeecher to “create speech that's indistinguishable from the original speaker. Perfect for dubbing an actor's voice in post production, bringing back the voice of an actor who passed away, and other content creators' problems.” But it’s Amai (https://amai.io/) who really spell it out: “Sorry, voice actors, we will replace you soon […] this text is painted with the Love emotion. You can highlight any text, choose any emotion and listen to how it sounds, for example this phrase is pronounced with the Happiness.” Go to their site to hear the perky result. 



  







CHIPS WITH THAT?

Dick Pountain /Idealog 324/ 05 Jul 2021 09:56


During lockdown I’ve bought three new electronic gadgets: a new mobile phone, a new tablet and a new multi-effect guitar pedal. This wasn’t out of boredom but because the old ones had become unusably slow (phone) or noisy (pedal) or packed up altogether (tablet). I wonder how many VLSI chips that means I bought? I’d guess at least twenty, what with phone and tablet SoCs, signal processors in the pedal and heaps of memory. Most importantly, none of these were premium-priced items, all costing below £150, which means they likely contain not the latest chips but rather the cheapest, fabricated using older processes. And there you have one cause of the drastic shortage of chips that the whole world is currently experiencing. 

Moore’s Law is turning around to bite us on the arse: the cost of building fabs for new processes, like the 5 nanometre one used by Apple’s M1 CPU, keeps rising exponentially, meaning that chips cost more and must be sold in higher-end kit to repay the investment. Taiwan’s TSMC, the world’s largest contract chip fabricator, earned over half of its 2020 revenues from top-end chips with feature sizes below 16nm, including the M1. At the same time, putting chips into everything – cars, washing machines (pencil sharpeners soon?) – means demand for the very cheapest chips has exploded. These cheaper chips, made on older fabs, now sell for so little that there’s almost no margin left in making them, hence no one builds fab for the old processes any longer and demand is massively outstripping supply. Market forces are biting us on the other cheek. 
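That exponential rise in fab cost is sometimes dubbed Rock’s Law: the cost of a leading-edge fab doubles roughly every four years. A toy Python sketch of the trend – the base year and base cost here are illustrative assumptions, not industry data:

```python
# Rock's Law sketch: leading-edge fab cost doubling roughly every
# four years. Base figures below are illustrative assumptions only.

def fab_cost_billion(year: int, base_year: int = 2000,
                     base_cost: float = 2.0) -> float:
    """Estimated leading-edge fab cost in $bn under a 4-year doubling."""
    return base_cost * 2 ** ((year - base_year) / 4)

for y in (2000, 2010, 2020):
    print(y, round(fab_cost_billion(y), 1))
```

Under these assumptions a $2bn fab in 2000 becomes a $64bn fab by 2020 – which is why only the highest-margin chips can repay the investment.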

(As an aside, this pattern is not at all unheard of among economists. Think of the airline industry. Concorde was state-of-the-art, able to whisk 100 passengers to New York in three hours for several thousand pounds a head. It was ultimately killed off not by cost of purchase, noise regulations or US regulatory machinations but by Boeing’s 747 Jumbo, which carried four times as many passengers at less than half the speed for a tenth of the fare.) 

The economics of new fab isn’t the only cause of the shortage: ‘Acts of God’ like fires destroying Taiwan fabs and the COVID-19 pandemic all take their toll too. And the shortage doesn’t look like slackening any time soon because no-one is building new fab for old processes yet. The MIT Technology Review reports that: “Automakers have been shutting down assembly lines and laying off workers because they can’t get enough $1 chips. Manufacturers have resorted to building vehicles without the chips necessary for navigation systems, digital rear-view mirrors, display touch screens, and fuel management systems. Overall, the global automotive industry could lose more than $110 billion to the shortage in 2021”. 

China, though it lags behind the US at the design edge, possesses quite a lot of old fab, which might have interesting geo-political consequences – President Biden is already getting antsy, signing executive orders to approve a $50 billion boost for strategic semiconductor manufacturing, research and supply-chain protection. That won’t relieve the shortage in the short or medium term, given the time it takes to build fab and the fact that the US has exported almost all its capability to the Far East.

A couple of weeks ago at the CogX 2021 AI conference in King’s Cross I heard Nvidia CEO Jensen Huang talk about his company’s takeover of ARM. I was appalled when the UK government allowed the firm to be sold abroad, first to SoftBank and now to Nvidia, but Huang explained their commitment to working with the EU (irony alert) to build a state-of-the-art supercomputer called Destination Earth for climate simulation, and another called Cambridge-1 in the UK. Far from wanting to move ARM out of Cambridge, he wants to invest in and expand it there. Wearing my cynic's hat I might have thought “he would say that, wouldn't he”; wearing my historian's hat, “Frank Whittle and the jet engine all over again”; but wearing my realist's hat I actually thought “ARM’s probably safer with this guy than the clueless shower currently running the country”. ARM decided decades ago that fabrication was a mug’s game, preferring to license the IP of its low-power cores that drive many, if not most, of the cheap chips that run mobile phones and IoT smart devices.

Where does all this leave Intel, I hear you ask? Are its days as a mass-market CPU vendor numbered? Will it need to slog it out with Nvidia and AMD at the supercomputing end, using its Rocket Lake and Ice Lake CPUs and GPUs? Climate-change simulation could be the last happy hunting ground for fat, government-assisted margins.

[Dick Pountain pretends to have known that gadget prices will soon be rising]  



Q’S GADGETS


Dick Pountain /Idealog 323/ 06 Jun 2021 09:10

Random House Business recently sent me a copy of WIRED magazine’s little guide book ‘Quantum Computing: how it works and why it could change the world’ by Amit Katwala, and coming from such a source I felt it deserved mention here. The good news is that it’s an excellent introduction to the current state of, and prospects for, quantum computing – not too technical, wasting little space on the basics everyone already knows (superposition, entanglement, decoherence) but avoiding the hieroglyphics of quantum algorithms. At barely 6,000 words it’s very short, and that’s because it’s surprisingly honest about the fact that quantum computers barely exist today and that their prospects remain rather dim.

In chapter 2 Katwala offers a whirlwind summary of the three major current research directions – laser ion-traps (Amazon/IonQ), cryogenic Josephson junctions (Google and IBM), and ‘topological qubits’ (Microsoft) – which he explains in a clear, readable style. As he goes he continually points to their weaknesses: ion-traps need too many lasers to be scalable; cooling to 0.01 K requires monstrous cryostats that consume lots of energy; and Microsoft’s trick for dodging decoherence would depend on the existence of a still-undiscovered fundamental particle! (That’s rather hard on Microsoft, since a recently discovered cerium/ruthenium/tin ‘Weyl-Kondo’ semi-metal alloy might render this phantom particle unnecessary).

Katwala is admirably candid about the problem that all three strands of research share: environmental noise causes qubits to rapidly untangle, leaving barely microseconds in which to perform useful computation. This decoherence also renders the results unreliable, so an enormous degree of error-correction is required: each working qubit must be surrounded by dozens of error-correction qubits (and whenever an error is detected these error qubits themselves must be error-corrected). Google’s Sycamore chip, which is claimed to have achieved ‘quantum supremacy’, contains 53 qubits, but most of those would have been doing error-correction. This is why the physicist John Preskill, one of the field’s leading researchers, dubbed this the NISQ (noisy intermediate-scale quantum) era: quantum computers exist but aren’t yet robust enough to fulfil their promise. Harsher critics suspect that the Second Law of Thermodynamics may be gnawing away underneath, and that quantum computing may be inherently unfeasible.
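The arithmetic of that overhead is brutal. A back-of-envelope Python sketch, assuming a surface-code-style scheme (my assumption, not Katwala’s) in which a logical qubit at code distance d consumes about 2d² − 1 physical qubits:

```python
# Back-of-envelope sketch of quantum error-correction overhead,
# assuming a surface-code-style scheme: one logical qubit at code
# distance d uses d*d data qubits plus d*d - 1 measurement qubits.

def physical_per_logical(distance: int) -> int:
    """Physical qubits consumed by one error-corrected logical qubit."""
    return 2 * distance * distance - 1

for d in (3, 5, 11):
    print(f"distance {d}: {physical_per_logical(d)} physical qubits")
```

Even the smallest useful distance swallows 17 physical qubits per logical one, which is why a 53-qubit chip like Sycamore can support only a handful of error-corrected qubits.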

That hardly matters though, because the quantum computing bandwagon has become unstoppable now that politicians and soldiers are involved. The strong promise of quantum computing, namely an exponential speed increase over classical computers, threatens to make public-key encryption – as used by the military, the banks, even by WhatsApp – crackable. This makes it a matter of national security, unlocking unlimited funding and starting a new Cold War-style arms race between China and the West. However Katwala is equally candid that this strong promise is itself quite dubious: the problem classes for which quantum algorithms are known to give exponential speed-up are surprisingly few. It turns out the best quantum algorithms known for commercially important optimisation problems like the Travelling Salesman offer only a quadratic, not exponential, advantage over classical computers. Which isn’t peanuts – reducing a one-million-step calculation to a thousand steps may be the difference between overnight and almost real-time, which City traders would happily pay for – but it won’t satisfy the crypto crowd nor justify such huge research budgets.
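That million-to-a-thousand reduction is the square-root scaling of a quadratic advantage, as in Grover-style quantum search. A toy Python sketch (an order-of-magnitude comparison, ignoring constant factors):

```python
# The 'quadratic, not exponential' advantage in numbers: a Grover-style
# quantum search over n items needs on the order of sqrt(n) queries,
# where an unstructured classical search needs on the order of n.
import math

def classical_queries(n: int) -> int:
    """Worst-case queries for unstructured classical search."""
    return n

def grover_queries(n: int) -> int:
    """Order-of-magnitude quantum query count, constants ignored."""
    return math.isqrt(n)

n = 1_000_000
print(classical_queries(n), "->", grover_queries(n))  # 1000000 -> 1000
```

Compare that with Shor’s factoring algorithm, whose exponential advantage is what actually threatens public-key encryption: the quadratic case merely shortens searches.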

I’d certainly recommend Katwala’s book as a quick read to bring you up to speed with mainstream quantum thinking, but it doesn’t cover any radically different ‘long-shot’ directions. I’m personally convinced that if quantum computing happens it will only be through room-temperature, solid-state technologies that are barely here yet, and I’m also an enthusiast for neuromorphic computing architectures that mimic the nervous systems of animals, using electronic components that might employ hybrid digital-and-analog computations.

Neuromorphic engineering was first pursued by one of my heroes, Carver Mead, in the late 1980s, for designing vision systems, auditory processors and autonomous robots. The convolutional neural networks that are driving the recent explosion in commercial AI and machine learning are only one aspect of a far wider domain of neuromorphic computing models, and they obviously employ classical computing components. Deep-learning networks are becoming a source of concern over the colossal amount of power they consume when training on enormous data sets, so this is an area where even a ‘mere’ quadratic speed advantage would be very welcome indeed.

US researchers at Purdue have shown how ‘spintronic’ quantum measurements might be used to implement neuromorphic processors, and many groups are investigating spin switching in ‘nitrogen vacancies’ within synthetic diamond lattices – diamond-based qubits might resist decoherence for milliseconds rather than microseconds, and at room temperature. Were I writing a science-fiction screenplay, my future quantum computers would be alternating sandwiched layers (like a Tunnock’s Caramel Wafer) of epitaxially-deposited diamond and twisted graphene, read and written by a flickering laser inside a dynamic, holographic magnetic field.

[Dick Pountain is well aware of the addiction risk posed by Tunnock’s Caramel Wafers]








SOCIAL UNEASE

Dick Pountain /Idealog 350/ 07 Sep 2023 10:58 Ten years ago this column might have listed a handful of online apps that assist my everyday...