Sunday, 30 November 2025

COLOR ME OLO

Dick Pountain /Idealog 371/ 10 Jun 2025 12:15

I’ve expressed my feelings about science fiction before many, many, probably too many, times in this column. A big fan in my 1960s teens, I used a bout of illness in the ‘70s to binge all the greats – Vonnegut, Ballard, Le Guin, Dick, Pohl, Bester etc – overdosing so badly that I never wanted to read sci-fi again. Looking back now, as emotionally-retarded pseudo-intellectual sci-fi fans appear to be taking over the world, I think perhaps my immune system was telling me something. However this allergic reaction doesn’t apply to the closely related genre of fantasy (or gothic, or cosmic) horror. I can still cringe a little at M.P. Shiel’s ‘The Purple Cloud’, William Hope Hodgson’s ‘The House on the Borderland’, or the entire oeuvre of H.P. Lovecraft. 

One of Lovecraft’s stories, ‘The Colour Out Of Space’, struck me particularly hard. A meteorite containing globules of a weird colour that isn’t in the solar spectrum lands in a tiny community in the New England woods: it has unpleasant effects that consume all living animals, plants and humans and turn them into grey ash. Several unsuccessful attempts have been made to film this story, hard work given that it’s not in the technicolor spectrum either: perhaps the nearest anyone has come is Alex Garland’s 2018 ‘Annihilation’, which clearly shows Lovecraftian influence and employs a digitally-produced shimmer in place of a new colour. I suppose Lovecraft’s story is a sort of parable about environmental destruction, but that’s not what explains its hold on me – I’ve always been fascinated by colour, studying its chemistry, reading up on all the various systems, appreciating great paintings and creating my own digital art as a favourite hobby. 

All of which explains my enormous excitement on reading in the Science Adviser newsletter about new research at the University of California, Berkeley, which ‘creates’ a new colour that lies outside the gamut humans can perceive, but can be seen by shining a laser onto one’s retina (don’t try this at home, kids). First some background is required. You already know that our eyes, and hence also digital display devices, perceive or present colours as mixtures of the three components red (R), green (G) and blue (B). That’s because the retinas of our eyes contain three types of colour-sensitive cell called ‘cones’, sensitive to different wavelengths: long (L), medium (M) and short (S). Objects illuminated by natural sunlight stimulate L, M and S cones to different extents, which gives us the experience of different colours. Red light primarily stimulates L cones and blue light S cones, but since M cones respond to the middle of the range, overlapping with both L and S, there’s no component of sunlight that stimulates M alone. 
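
For the code-minded, here’s a toy illustration of that overlap – crude Gaussian curves of my own devising, not real cone-sensitivity data – which shows that any wavelength exciting the M cones also excites their neighbours:

import numpy as np

def cone_response(wavelength_nm, peak_nm, width_nm=40.0):
    # Idealised bell-curve sensitivity; real cone spectra are broader and skewed
    return np.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

wavelengths = np.arange(400, 701)        # visible range in nanometres
L = cone_response(wavelengths, 560)      # long-wavelength cones
M = cone_response(wavelengths, 530)      # medium
S = cone_response(wavelengths, 420)      # short

# At the wavelength where M responds most strongly, L and S are still listening
best = np.argmax(M)
print(wavelengths[best], L[best], S[best])   # L is still firing at roughly 75% here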

The authors of the Berkeley paper (https://www.science.org/doi/10.1126/sciadv.adu1052) set out to investigate a new system for describing colour perception, by using a laser to illuminate retinal cells one at a time: 

“We introduce a principle, Oz, for displaying color imagery: directly controlling the human eye’s photoreceptor activity via cell-by-cell light delivery. Theoretically, novel colors are possible through bypassing the constraints set by the cone spectral sensitivities and activating M cone cells exclusively. In practice, we confirm a partial expansion of colorspace toward that theoretical ideal. Attempting to activate M cones exclusively is shown to elicit a color beyond the natural human gamut.”

Calling their new system Oz was of course a trigger for me, having cut my journalistic teeth on that notorious hippie journal where my articles were printed in every colour of the rainbow on a background of every other colour of the rainbow. But I digress. Their new colour, produced by stimulating M cells alone with a laser, they named ‘olo’. It can’t be reproduced in paint, ink or on screen, so you’ll only ever see it sitting in a dentist’s chair with a laser strapped to your head: 

“Subjects report that olo in our prototype system appears blue-green of unprecedented saturation, when viewed relative to a neutral gray background. Subjects find that they must desaturate olo by adding white light before they can achieve a color match with the closest monochromatic light.”

Thankfully olo has not so far shown any inclination to suck the life force out of living beings and reduce them to grey ash. The best approximation is a light turquoise, a colour you might glimpse fleetingly when watching a big wave break on a rocky headland in bright sunshine. Astronomers for a while believed that the whole universe, with all its light mixed together, would average out to a light turquoise, but that turned out to be a bug in their software, and the accepted answer is now ‘cosmic latte’, a light beige (hex triplet value #FFF8E7 in standard sRGB). 


[Dick Pountain wonders whether Olo could become ‘the new Pistachio’]


GOING NEURO

Dick Pountain /Idealog 370/ 05 May 2025 01:31

I’ve written many, many sceptical words about AI in this column over the years, railing against overconfidence and hype, hubristic pursuit of AGI, deepfakery and content pillage, but nevertheless I do believe AI – once we’ve civilised it – is going to be hugely important to science, economics, robotics, control systems, transport and everyday life itself. Given the political will, the public’s concerns about misinformation, invasion of privacy and theft of artistic data could be addressed by regulation, but there would remain one colossal stumbling block, namely energy consumption. 

When AI corporations consider purchasing mothballed nuclear reactors to power their compute-servers, the absurdity of AI’s current direction ought to be visible to everyone. The current generation of GPT-based AI systems depends on supercomputers that can execute quintillions of simple tensor arithmetic operations per second to compare and combine multiple layers of vast matrices holding encoded parameters. Currently all this grunt is supplied using the same CMOS semiconductor process technologies that gave us the personal computer, the smartphone and especially the computer game – the Nvidia chips that drive most AI servers are descendants of ones originally developed for rendering real-time 3D games. The latest state-of-the-art GPUs have a watts/cm² power density around the same as an electric cooking hob, and the power consumption of AI server farms scales not linearly but as the square of the number of units employed (order O(N²) in the jargon of complexity theory). 

In their 1978 bible of the CMOS revolution ‘Introduction to VLSI Systems’, Mead and Conway devoted a final chapter to the thermodynamics of computation: we’ve long known that logic operations and memory accesses always consume energy, whether in silicon or in the protein-and-salt-water of the human brain. However, the human brain has far, far more neurons and synapses than even the largest current AI server farms have GPUs, yet it consumes around 20 Watts as opposed to AI’s 50+ Megawatts. Understanding what’s responsible for this immense efficiency gap is crucial for creating a more sustainable next generation of AI, and the answer may lie in new architectures called ‘neuromorphic’ because they mimic biological neurons.

Individual metal-oxide-semiconductor transistors aren’t six orders of magnitude more power-hungry than biological neurons, so other factors must be responsible for the huge difference. One factor is that biological neurons are analog rather than digital, and another is that they act upon data in the same place that they store it. In contrast, the CMOS GPUs in AI servers are examples of von Neumann architecture, with processing logic separated from memory, and program code from data. But the MOSFET transistors they’re made from are inherently analog, operated by varying voltages and currents, so the digital data they manipulate gets continuously converted back and forth between the two domains, at great energy cost. 

Neuromorphic AI hardware designers try to bring data and processing closer together. Intel’s Loihi 2 research chip, with 128 neuromorphic cores and 33MB of on-chip SRAM, communicates via trains of asynchronous voltage ‘spikes’ like those in biological neurons. Steve Furber (of ARM fame) works at Manchester University on a neuromorphic system called SpiNNaker, which has tens of thousands of nodes, each with 18 ARM cores and local memory, also using spike-based communication. These schemes do reduce data-access overhead, but they remain digital devices, and to approach biological levels of energy economy will require a still more radical step into purely analog computation that exploits the physics of the chip material itself. 
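
To give a flavour of what such chips emulate, here’s a minimal sketch of a ‘leaky integrate-and-fire’ neuron, the textbook abstraction behind spiking hardware; the parameter values are purely illustrative, not anything Intel or Manchester actually use:

import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_threshold=1.0, v_reset=0.0):
    # Return the membrane-voltage trace and spike times for a given input current
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest while being driven by input
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:       # threshold crossed: emit a spike...
            spikes.append(t)
            v = v_reset            # ...and reset, as a biological neuron does
        voltages.append(v)
    return np.array(voltages), spikes

# A steady drive produces a regular spike train; the information lives in the timing
_, spike_times = lif_neuron(np.full(200, 1.5))
print(spike_times)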

The US firm Mythic’s AMP (Analog Matrix Processor) chip employs a 2D grid of tunable resistors whose values encode the weights of an AI model, then relies on Kirchhoff’s Laws to, in effect, multiply-and-add the analog input voltages and perform convolutions. However AMP is still fabricated in CMOS. A more radical next step would be to implement this resistive analog computation using low-power ‘spintronic’ memristors – devices in which the orientation of magnetic spins represents bits, as in modern hard disks. One way to implement non-volatile memristors is via FTJs (Ferroelectric Tunneling Junctions), formed by sandwiching nano-thin magnet/insulator/magnet layers, which can be fabricated using existing semiconductor processing. These devices can be written to and switched cumulatively like real neurons, and read out non-destructively using very little power. 
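
The principle is easy to see in a few lines of code. What follows is only a digital emulation of the idea, with toy numbers of my own rather than anything from Mythic’s design, but it shows how storing weights as conductances turns Ohm’s and Kirchhoff’s laws into a free multiply-and-add:

import numpy as np

# Each weight is stored as a conductance G[i][j]; applying voltages V[j] to the
# columns makes each row wire sum the currents I[i] = sum_j G[i][j] * V[j],
# which is exactly the multiply-accumulate at the heart of a neural-network layer.
conductances = np.array([[0.2, 0.5, 0.1],     # weights held in tunable resistors
                         [0.7, 0.3, 0.9]])
voltages = np.array([1.0, 0.5, 0.25])         # input activations applied as voltages

# In the chip the physics does this summation in one step; here we emulate it
currents = conductances @ voltages
print(currents)                               # row currents = the layer's outputs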

The Dutch physicist Johan Mentink used a recent Royal Institution lecture (https://youtu.be/VTKcsNrqdqA?si=ZRdxeyP4B-hfUw3X) to announce neuromorphic computing experiments in Holland that employ two-dimensional cross-bar grids of memristors, organised into a network of ‘Stochastic Ising Machines’ that propagates waves of asynchronous random noise whose interference yields the spike trains that transmit information. The Dutch researchers claim such devices could scale linearly with the number of synaptic connections, reducing power consumption by factors of thousands. I love the idea of working with rather than against noise, which certainly feels like what our brains might be doing… 
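
For anyone who wants to watch noise doing useful work, here’s a generic stochastic Ising update – not the Dutch group’s memristor design, just the textbook principle such machines exploit, in which random kicks let a network of coupled spins settle into a low-energy state that encodes an answer:

import numpy as np

rng = np.random.default_rng(0)
n = 16
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                      # symmetric couplings between spins
np.fill_diagonal(J, 0)
spins = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ J @ s

temperature = 2.0
for step in range(2000):
    i = rng.integers(n)
    delta = 2 * spins[i] * (J[i] @ spins)   # energy change if spin i were flipped
    # The noise does the work: accept uphill moves with Boltzmann probability
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        spins[i] *= -1
    temperature *= 0.999                    # slowly anneal the noise away

print(energy(spins))                        # a (near-)minimal energy configuration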


[Dick Pountain’s neurons are more spiky than most]


Tuesday, 14 October 2025

INTERESTING TIMES?

Dick Pountain /Idealog 369/ 07 Apr 2025 12:06 

Some 26 years ago (May 1999/ Idealog 58) I opened this column thus: “It's said there's an ancient Chinese curse, "May you live in interesting times!" Actually it's been said by me at least twice before….”  

Well, it looks like time to trot it out for the fourth time, though nowadays I can check its truth using my old friend ChatGPT: “No record of this phrase (or anything close to it) exists in classical Chinese Literature: the earliest known attribution is from British sources in the 20th Century, a 1936 speech by Sir Austen Chamberlain (brother of Neville) who claimed to have heard it as a Chinese curse but it seems more like a Western invention made to sound exotic or wise.” So, I’ve been peddling fake news for a quarter of a century, but I don’t feel too guilty as everyone seems to be doing it. 

The financial turmoil created by President Trump’s tariff barrage purports to be against a whole world ripping off the USA through unfair trade, but we all know it’s really about China. Many of us believe Trump has made a monumental blunder that will ultimately help China to economic dominance: The Wall Street Journal expressed it thus “What a fabulous change in fortunes for the Chinese leader. Mr. Trump has taken an ax to the economic cords that were binding the rest of the world into an economic and strategic bloc to rival Beijing – and at precisely the moment many countries finally were starting to re-evaluate their economic relationships with China.” 

I’ve covered semiconductor fab, Taiwan invasions and DeepSeek in recent columns so let’s not go there again, except to guess that Trump’s shenanigans could cause interest rate, futures and bond market chaos that may bring down the intricate house-of-cards finance of the AI bubble corporations (already under siege from IP lawyers). Instead I’d rather talk about how the ‘interesting’ times are affecting my own everyday activities. 

For many years my online presence, apart from my own website, consisted of Flickr for posting photos and Facebook for chatting/arguing/posing with friends. No longer, as my online self has been shattered into a dozen fragments, none of which have quite the same scope or satisfaction as before. Facebook started to deteriorate for me a couple of years ago as old friends left and un-asked-for content increased, but since Zuck did a Musk on it by removing moderation it’s becoming intolerable, and as a result I’ve begun to work on building a following on both BlueSky and SubStack. 

BlueSky is full of left-leaning refugees from the steaming pit that X has become: lots of excellent, sympa content, in fact too much to read it all, and unanimous enough to risk boredom. I joined SubStack years ago hoping to get paid for some of my stuff but that didn’t work out, so I forgot it until now, when it appears to be changing into something different. It’s becoming a social platform to rival Facebook, an alternative refuge for X-iters, and I actually find it more interesting than BlueSky, but with one huge reservation – its structure and user interface remain totally baffling. Is it a mailing list or a website, a forum or… what? Do I add posts or notes, and where will the comments arrive? My efforts in computer-generated music are now scattered among a host of platforms including SoundCloud, YouTube, NoisePit, BandCamp and GrungeDump (I may have invented one of those) and it remains stubbornly antiviral on all of them. YouTube is still my main source of entertainment, from genial luthiers to hilarious espresso gurus, Rick Beato’s music interviews to Jon Stewart’s Weekly Show. I even watch some movies there more cheaply than on the paid platforms (recently found ‘Lonesome Dove’ for free). 

The ‘interestingness’ seems to be spreading from online matters to offline. In recent months my Chromebook finally ran out of support (bought a new Asus CX340, cheap, way faster and nicer). BT announced that it was killing off my analog landline early, meaning a new hub, and that my mobile account should be moved to EE. While trying to surf such unwelcome disruptions several websites started playing up – I became adept at navigating poorly-implemented two-factor authentication schemes that trap you into endless loops of passcode tennis, and discovered a new game called ‘hunt the human’ while traversing the maze of AI chatbots that firms now erect in the name of Help…

Shall I end on a cheerful note, that things can only get better? It’s getting ever harder to believe that. Once the DOGE-days are over, assuming some kind of sanity is restored, then the craven way the big Silicon Valley corporations crowded onto Trump’s rattling gravy-train will haunt and taint them for years to come. 

[ Dick Pountain pronounces DOGE as ‘doggie’, like that creepy Shiba Inu dog meme] 


KIND OF BLUE(TOOTH)

Dick Pountain /Idealog 368/ 05 Mar 2025 04:38

I’m literally a one-man-band, by which I mean that I make music using an assembly of electro-acoustic gadgets that permit me to do without the collaboration of other human musicians. (I hasten to add that those gadgets do not include drums or cymbals strapped to my legs, like the blokes who used to entertain cinema and theatre queues). 

I love music from a wide range of genres, and I’m picky about quality reproduction of other people’s music that I enjoy. I’m far from being a hi-fi nut (and indeed quite sceptical of the excesses they indulge in) but I do run a nice-sounding system based around a Fosi Class D amplifier connected to vintage British speakers, which sucks in music via wire from a vinyl turntable or CD player, and via Bluetooth from my Chromebook, Samsung tablet and smartphone. (I can also listen to those via Bluetooth headphones and earbuds). It would be handy to incorporate the sound from my Panasonic smart TV via Bluetooth too, rather than SPDIF, but the brute refused to pair with my amp, so I bought a tiny cheap Bluetooth sender/receiver from Amazon and plugged that into the set’s headphone socket. It worked fine, but the cursed TV then hogged the Bluetooth and had to be unpaired before any other source could use it, so I’ve learned to live with its (actually quite acceptable) Dolby sound – which left me with a Bluetooth dongle to spare.

I tried plugging this into my various electric guitars via an adapter, to play them wirelessly: it works, but is totally unusable due to latency. Meanwhile in another corner of my music room stands the very analog, one-man-band conglomeration through which I play those guitars: two acoustic, two electric and one bass, switched into a small Marshall amplifier that’s encrusted with five effects pedals and a small Zoom drum machine. This is all connected via standard ¼” jack cables, and it’s taken barely 15 years to arrive at a satisfactory topology, the brainstem of which is a 3-channel passive audio mixer, barely bigger than a Yorkie Bar, which the late-lamented Maplin emporia used to flog for £20 (I bought two). One channel is taken up by the drum machine, a second by the multi-effects box which the other instruments go through, so one channel remained empty – and it was simply irresistible to plug in the Bluetooth dongle. Paired with my Samsung tablet, a whole new world opened up... 

Among the pedals is an Akai Headrush echo/looper I bought 20 years ago after seeing KT Tunstall play one on the telly – this lets me store and replay short clips of music and overdub them with more layers. By logging on to Spotify on the Samsung I can play any tune and store a chunk of it into the Headrush to play over and add to – great fun with slices of Bill Frisell or Julian Lage, or Ron Carter bass riffs. I also have Volcanic Mobile’s MIDI Sequencer for creating backing clips using its piano or brass sounds, and also my own Algorhythmics programmatic system (which I’ve described here before). I can compose parts using BandLab, a popular free Android DAW, with sampled instruments other than guitars. In fact any software that can make a noise on my tablet, phone or Chromebook can now be routed into my guitar amp and chunks saved and looped to play over – which even includes my various voice synths and text-to-speech readers, or mic recordings.

However what I really wanted was to create original soundscapes for use as backdrops to improvise over, an ability that I discovered quite by chance in Pegboard (https://semitune.com/pegboard/). Its publishers Semitune describe this app as “an advanced mobile polyphonic wavetable synthesizer with a virtual analog filter, 12 standard modules and 6 effects modules.” It’s driven by two separate wavetable oscillators that you can customise via a graphic interface in which you drag envelope shapes and make them evolve over time in complex ways for really rich sounds. You play them back via an onscreen keyboard, either piano-style or an array of accordion-style touch buttons whose layout you can alter to represent a chosen scale or mode. Pegboard isn’t a sequencer so you can’t save whole tunes, just the sounds themselves, but that’s perfect for my purpose, which is to play short phrases into my looper (it can also work as a MIDI controller, a feature I’ve yet to master). Pegboard is free to play with but you must upgrade to Pro for £18.99 to save sounds, which I very quickly did. Think of my guitar as a pen and Pegboard sounds as washes I draw over to make sound pictures for my one-man band’s audience of one…
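
For the curious, wavetable synthesis itself is simple enough to sketch in a few lines – this toy loop (nothing to do with Pegboard’s actual code) just steps through a stored single-cycle waveform at a rate set by the desired pitch:

import numpy as np

sample_rate = 44100
table = np.sin(2 * np.pi * np.arange(2048) / 2048)   # one stored cycle: the "wavetable"

def render(frequency_hz, seconds=1.0):
    # Step through the table at a rate proportional to pitch, wrapping around the end
    n = int(sample_rate * seconds)
    phase = (np.arange(n) * frequency_hz * len(table) / sample_rate) % len(table)
    return table[phase.astype(int)]                   # crude nearest-neighbour lookup

note = render(440.0)     # an A at 440 Hz; swapping the table for another shape changes the timbre
print(note[:5])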

[You can see Dick Pountain’s one-man-band rig at https://www.facebook.com/share/v/15xD1dS1UD/ ]


DEEPCHEEK?

Dick Pountain /Idealog 367/ 06 Feb 2025 09:12

I used to think that a monthly column was a fairly relaxed schedule compared to, say, a daily newspaper, but no longer. I’d decided to do this one about how China upset the USA by doing AI on the cheap, but now every ten minutes I feel a need to check online for whatever new geopolitical atrocity has just overshadowed that. Nevertheless I’ll start with a nod to the original plan, how China pulled down the knickers of the US AI bubbleheads.  

I won’t dive deep into the tech details of how DeepSeek succeeded in doing what ChatGPT does for a fraction of the price, or how it rocketed to the top of Apple’s mobile-app store hit parade, nor how it did so by parasitising the US AI bros’ own data in just the same morally and legally unsavoury way they got it from us in the first place. No, instead I’d prefer to harp on about something I’ve been harping on about for at least 20 years, namely how the whole AI industry deludes itself because, being led by sci-fi-addled nerds (one of whom now appears to be the de facto POTUS), it has a severely limited grasp of biology and philosophy.

Two columns ago I forcefully expressed my opinion of OpenAI’s plans for continued expansion in order to achieve AGI (Artificial General Intelligence), which they claim would confer human-level reasoning. One objection was its colossal, antisocial power requirements, but my real objection is that I don’t believe AGI is even achievable by simply crunching more data. That’s for reasons of biology I’ve explored here many times, namely that though human intelligence expresses itself through language – by manipulating symbols, which is all any computer can do – that’s neither its only nor its most important source. 

We’re animals who have been equipped by evolution to succeed at living in a physical world, achieved with the help of many (more than five) senses to sample what’s going on around us. We build, continuously update and maintain a mental model of that world. We have needs – including to eat, drink, reproduce and avoid predators – which are intimately entwined into that model. We’re born with some built-in knowledge about gravity, upness and downness, light and shade, convexity and concavity, that control the model in ways of which we’re not conscious, but which deeply affect our symbolic processing of that world. We’re by no means just ‘rational’. AI has learned how to pretend to be intelligent only by plundering our symbolic representations of the world, texts and pictures, but knows nothing, and can know nothing, of our embodied experience. Sure, it could build imitations of emotions and needs, but they’d just be more static representations, not extracted from the real world by living in it.  

Historically computers arrived thanks to advances first in mathematics, and then electronic engineering, so it’s hardly surprising that the intellectual atmosphere in which they’re embedded is more influenced by science fiction than by philosophy, anthropology or cognitive psychology. It may well be too late now for AI practitioners to go back and do the necessary reading, since they’ve reached a level of megalomania that convinces them they already know it all, and have just achieved power over the world’s most powerful nation to prove it. 

Were I to be asked to set the tech bros some homework I’d recommend first of all my favourite philosopher George Santayana and his theory of ‘animal faith’, which enables us to navigate life’s uncertainties by deploying our intrinsic knowledge and not ‘overthinking’. That leads directly into the more modern version by Nobel Laureate Daniel Kahneman and his discovery of two modes of thought, the fast, imprecise-but-often-good-enough one, and the slow symbolic one which is all that computers can mimic. Then I’d suggest perhaps George Lakoff and Mark Johnson’s ‘Metaphors We Live By’ which explores the actual content of our intrinsic embodied knowledge and how it modulates our language. Oddly enough, smartphones are already more embodied than GPTs because they have senses (hearing, sight, touch, spatial orientation) and hunger (for charging). Fortunately they can’t yet reproduce.

Having absorbed all that, then perhaps they might dip into Chris Frith’s ‘Making Up The Mind’, the best explanation I’ve read of how the brain creates and updates the mind using fundamentally Bayesian rather than Cartesian mechanisms. That ought to convince them they don’t (or most likely don’t want to) know it all, but the final step would be by far the most difficult, to get them to take democratic politics seriously and divert their megalomaniac schemes toward improving life for the majority of the population rather than a feckless techno-elitist minority. Of course they may prefer to go to Mars, which would provide a rigorous education in embodiment…
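
If ‘Bayesian’ sounds forbidding, a few lines of toy code make the idea concrete – these are made-up numbers of my own, not anything from Frith’s book, but they show how a prior expectation combined with noisy evidence yields an updated belief:

prior = {"cat": 0.7, "fox": 0.3}         # what I expect to glimpse in the garden
likelihood = {"cat": 0.2, "fox": 0.6}    # how well a half-seen shape fits each guess

# Bayes' rule: posterior belief is proportional to prior times likelihood
unnormalised = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: round(p / total, 2) for h, p in unnormalised.items()}
print(posterior)                         # belief shifts toward 'fox': {'cat': 0.44, 'fox': 0.56}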

[Dick Pountain hopes that Elon doesn’t read his column]


Wednesday, 13 August 2025

POD PEOPLE

Dick Pountain /Idealog 366/ 05 Jan 2025 03:05

It’s January, when columnists feel obliged to reflect on the past year, and who am I to refuse, though I’ll try to be different by not saying that 2024 was the year of AI. Instead I’m going to say that for me it was the year of the podcast. That’s partly because I got exposed to AI rather early, via Stable Diffusion in 2022, and was bored stiff by the end of 2023. But it’s also because online services that had kept me amused for years, like Facebook and YouTube, started sliding down a sloppily slippery slope into irrelevance during 2024. Feeds filled up with unwanted sponsored guff and AI-generated fluff, real friends abandoned platforms to be replaced by reels and clickbait that spread like digital cockroaches. In response I began to view more podcasts. 

Just as reels were shrinking down to 30 seconds of inane pointlessness, podcasts started expanding into 3-hour epics. Of course our own excellent PC Pro podcasts, crafted by Barry, Tim, Jon, Lee and Rois, adopt a manageable one-hour format, probably the optimum length for normal attention spans, but several other podcasts I consume started at that length then got carried away. Back in 2023 an old friend recommended a piece about The Velvet Underground, of special interest to me as the first thing I ever had published was about my experience of working at Max’s in New York in 1970 while they were the house band. The piece turned out to be an episode of a podcast called “A History Of Rock Music In 500 Songs” by Andrew Hickey, and it was three hours long… 

Rather to my surprise I listened to all of it and was riveted: Hickey’s taste, depth of research, even his bluff Mancunian accent kept me enthralled. This episode on “White Light/White Heat” was only number 164 of the 500 he plans, in chronological order, but I was hooked and started listening from the beginning – number 1 was on Benny Goodman Sextet’s 1939 “Flying Home”, the first record with electric guitar, played by Charlie Christian. Andrew’s early episodes ran around 30 minutes, soon zoomed past the hour and now are regularly split into two or more parts – as for example The Beatles and Rolling Stones – reaching three hours plus. Thanks to his immense research efforts they remain quite engrossing. He’s now at episode 177 and intends to finish with a song from 1999 (which may take another 25 years at his current delivery rate). 

Another mega-podcast I’ve listened to all through is Paul Cooper’s superb “Fall Of Civilisations”, about the rise and fall of empires throughout human history. He has an advantage over Andrew Hickey in that there are fewer of them, mostly long in the past, and he’s covered most of them in 19 episodes. While not an academic historian, Cooper, like Hickey, has invested huge research effort and is an excellent presenter, making every episode informative and exciting without resorting to sensationalism. Some online niggling about historical accuracy is only to be expected, but his interpretations are largely convincing and not grossly ideologically biased, and the video version of the podcast (free on YouTube) illustrates his arguments with a well-curated montage of photographic, film and literary evidence on a par with the work of Adam Curtis. Turns out that my favourite dead empires were the Nabataean and the Pagan.

Cooper’s series, available in both audio and video, raises the question of when a podcast is actually a vlog, but I don’t much care. Among my favourites is a series of 80+ YouTube interviews with living musicians by the veteran jazz guitarist and producer Rick Beato, which is probably neither, or both, but his interview with Rick Rubin is priceless. 

Have I ever podcasted myself? Only once, because I don’t much like the sound of my own voice. It happened this way: in 1990 my brother-in-law Pip Hills and I took a road trip to Prague in his 1937 Lagonda saloon to witness Václav Havel’s inauguration as president of Czechoslovakia. Following this trip another friend, Mark Williams, commissioned us to write about it for his magazine The Classic Motoring Review, and subsequently the Scotch Malt Whisky Society, which Pip had founded in 1983, asked to reprint our article in their magazine and accompany it with a podcast. I charily agreed, and since I don’t possess a professional-grade microphone, let alone a studio, performed my part over my Chromebook’s mic, using an audio editor called Lexis (my Android replacement for the wonderful Audacity, with which I had 20 years of experience). I managed a usable take after two attempts, even including a snatch of music by Smetana at a pivotal point. Judge for yourself from the link below whether a career in voice-overs beckons…

[Dick Pountain’s Prague trip podcast is at https://unfiltered.smws.com/unfiltered-01-2024/smws-adventures-prague]


TRUMP OF DOOM?

Dick Pountain /Idealog 365/ 09 Dec 2024 10:48

I’ve been writing this column for over 30 years, during most of which I’ve deliberately tried to keep my political opinions out of it, apart from the occasional nod and wink about my lack of faith in free-market dogmas. However there are, very occasionally, world-historic events of such importance that to avoid mentioning them would be a sign of ignorance and cowardice. The last such event was the destruction of the World Trade Centre on the 11th of September 2001, and I did permit myself a column on that. Well, to me the re-election of Donald Trump on 6th November 2024 is another such event. 

I have of course been commenting on the rising power of the Silicon Valley moguls – corporations like Microsoft, Apple, Amazon, Facebook, eBay, HP, Twitter and more – who built the industry whose products we document in this magazine. And during those whole 30 years I was writing under an unspoken assumption that these moguls, having emerged from the post-1960s counterculture, were fundamentally inclined toward ‘liberal’ (in the American sense) values. The two Steves, Jobs and Wozniak, were once ‘blue box’ phone phreaks; Google was started by two Stanford students in a friend’s garage under the motto ‘Don’t Be Evil’. And in the interest of full disclosure, PC Pro itself was created by a company founded by Felix Dennis, once editor of Oz magazine on which I too worked. 

In order to remain neutral, over the last couple of years I’ve refrained from expressing alarm as it became clearer that my assumption was being overturned. It started to look really shaky when Elon Musk, owner of Tesla, bought Twitter and proceeded to corrupt it from a vital news conduit for journalists of all persuasions into X, a conduit for previously-banned hate speech and pro-Trump propaganda. Then a week or so before the November election Musk came out for Trump and appeared prancing on platforms with him. Meanwhile Jeff Bezos, owner of the Washington Post as well as Amazon, forbade its editors to endorse any candidate, while Mark Zuckerberg announced he’d made a “20-year mistake” and “political miscalculation” (coded language for dumping the Dems). 

What has induced such a hand-brake turn in these billionaires’ opinions? A stock price rally following Trump’s victory increased their collective fortunes by $64 billion overnight but that’s merely chump change: Musk spent $250 million to finance Trump’s election campaign, a sum he earns every 15 minutes. Trump is promising to oppose internet regulation and prosecute journalists who investigate or criticise too much, but I think even those aren’t sufficient bait. These moguls already had everything except power to rule, which is now on offer.

The other promise Trump makes is to dump the Democrats’ (already feeble) policies toward climate change mitigation, turning the USA away from the Paris Agreement and Net Zero. This very well suits a second generation of moguls – the AI barons. My own attitude to AI has changed somewhat over the last few years. I’ve been sceptical of earlier claims that silicon tech will soon produce intelligence equivalent to or greater than humans’, a goal now renamed AGI, but I’m enormously impressed by the strides made in language and perceptual processing (I did after all let ChatGPT write a guest column for me). 

Three AI problems are rapidly becoming visible. The first is that those who really know (as opposed to simply hyping a stock-price bubble) are as sceptical as I am about whether merely adding more GPU and training data will push GPTs across into AGI: there are already signs of plateauing or even degeneration through data pollution. A second problem is the absurd, even obscene, amount of electrical power consumed by the huge processing arrays that support the current generative AI models. Pronouncements from OpenAI about their future energy needs are beginning to sound frankly deranged – restart old nuclear power stations to marginally improve AI services which are, let’s face it, really only souped-up search engines rather than solutions to any physical-world problems. Building a new clean energy infrastructure to mangle words and bitmaps rather than provide clean transport, heating and air-conditioning is actually psychotic. 

The third problem is that if Trump humours his new silicon buddies by employing their current, flawed, AI products to displace huge numbers of human jobs, he’ll likely trigger an economic crisis that leads to social unrest or even breakdown. This magazine is called PC Pro, the first P standing for Personal. We grew out of a 1980s technical revolution that put computing power into the hands of individuals and decentralised power away from the mainframes of state bureaucracies. The ambitions of the AI brigade concentrate processing back into gargantuan data centres that threaten data democracy itself.  


[Dick Pountain is busy gathering followers on BlueSky (@dick-pountain) as an act of Xtermination]
