Friday, 6 February 2026

COLLECTABLES

Dick Pountain /Idealog 373/ 08 Aug 2025 10:52  

I don’t really have the collectors’ instinct. When I was a kid my father was a serious stamp collector and I briefly made a feeble effort to be one too. I was slightly more interested in my album of labels from exotic canned goods, but that petered out pretty soon too. I find that in adulthood I’ve accumulated nine guitars but each of those was bought to play, then superseded but not sold, so it doesn’t really count as a collection. I have owned ten motorcycles over sixty or so years, but only ever one at a time. Books don’t count: I started accumulating those as a student and continued as a book reviewer, but all were obtained to read and never sold (there are around a thousand of them, none rare).

When I’m not writing about computers here I review books for a political journal, mainly ones about political economy and sociology. An author who had a big influence on me was the French sociologist Pierre Bourdieu, whose best-known book ‘Distinction: A Social Critique of the Judgment of Taste’ examined taste as an act of social status-building, drawing on huge amounts of data gleaned from quantitative surveys, photographs and interviews. Two former associates of his, Luc Boltanski and Arnaud Esquerre, published a paper in 2014 called ‘The Economic Life of Things: Commodities, Collectibles, Assets’ which is the best, most interesting account of collectability I’ve seen. B&E describe the way that all the things we manufacture, purchase and use pass through three phases of ownership, which they label the ‘standard form’, the ‘collectible form’ and the ‘asset form’. Consider for example a lemonade bottle from before WW1 with one of those stoppers that’s a caged glass marble. A mass-produced item, made as cheaply as possible, reusable and returnable for a penny, it served its purpose of containing and dispensing lemonade, then eventually got thrown in the bin (standard form). Years later someone found it on a tip and it ended up in an antique shop, sold for £15 to a middle-class couple as a kitchen ornament (collectible form). Their friend was a famous film director, it became a key prop in a very successful movie, and it ended up sold for £5,000 in a sale of memorabilia at Christie’s (asset form). An ancient Roman olive-oil jar might follow a comparable trajectory, but with prices several orders of magnitude higher; a drawing done on a napkin by a famous painter to pay for supper ditto, but its asset form might run to millions. In the asset form, things are no longer used, and may often not even be displayed but stored in a vault, a hedge against inflation or financial crisis, a store of value.

What has this to do with computers you may be wondering. Well nothing actually, and that’s the point. Computers, along with much of the rest of the merchandise of the digital world, seem to defy B&E’s classification scheme by being stuck forever in the standard form: they end up in a skip, then get ripped apart to recycle a few chips and some gold-plating. The aesthetic appearance and quality of workmanship of such goods is so low that very, very few people want to collect them, and what’s more those few who do face insurmountable problems in keeping them working, due to the rapid and haphazard evolution of firmware, software, ports and cables, storage media, the lack of effective documentation, and the rapid disappearance of smaller manufacturers prior to the monopoly era we now inhabit. 

If you detect a faint tinge of animus in that last paragraph you’re correct: it’s because I have a room upstairs full of digital junk accumulated over my 40+ years of computer journalism that I can’t get rid of (and which those nearest and dearest to me would love to see dumped in a skip to reclaim the room). I can’t bring myself to do that. Along with some quite notable historic hardware – first-gen IBM PC and Macintosh, Acorn Archimedes, NewBrain, Epson HX20 – there are shelves full of software both famous and obscure that I have a hunch I may be the only person still to possess, given my privileged status as a recipient of review materials. The early history of the UK personal computer scene is sitting up there, and no-one appears to want it. I’ve tried all the various computer museums people have recommended, and none are interested in collecting the lot (I no longer have a car). All the reasons I mentioned above make it enormously hard for them to get this stuff working, and once they do it’s hardly entertaining. I do feel though that someone ought to document this history before the skips claim it all. 

[Dick Pountain will hold onto the drawer containing every Psion Organiser]

 

TOO DARNED HOT

Dick Pountain /Idealog 372/ 07 Jul 2025 01:15

I’ve been watching the rebellious mood that’s growing among Microsoft Windows 11 users with a degree of (not very nice) complacent amusement, as someone who dumped Windows in favour of a Chromebook more than eight years ago. Actually my defection was as much an accident as an example of prescient wisdom. When Dennis Publishing moved to new offices and then-CEO James Tye took the quixotic decision to deploy Chromebooks to all, I took the opportunistic decision to borrow one and, being totally brassed-off with Windows 8.1, immediately became hooked. The Asus I bought for myself still works well and has been a source of great pleasure but for one problem – Google stopped supporting its version of ChromeOS with automatic updates about a year ago, and I started encountering apps that demanded an OS update I couldn’t procure. So I splashed out £229.99 on a new Asus CX3402 Chromebook Plus, which has a better screen, twice the memory, an 8-core Intel CPU and 10 years of guaranteed updates. Migrating to the new machine was as arduous as usual: charge the battery, switch it on and wait 10 minutes for my online life to come down from the cloud. There were two extra chores though. Because in all matters digital I’m very far from being a trusting person, I also keep data I consider crucial on a local 128GB memory stick that lives permanently in one USB port, so I had to unplug that and plug it into the new machine. Another quirk of mine is that I don’t really like touchpad cursor control and so use a Logitech wireless mouse whose dongle lives in another USB port.

I periodically back up the contents of the USB stick, a tiny metal-cased one from Integral, for which purpose I swap the Logitech dongle for another backup memory stick. A few weeks later I was doing such a backup when I noticed that the sticks had become very hot. Not warm but hot, hot enough to make me flinch on touching them, hot enough to worry. So I went online to the source of all wisdom which is Reddit, where I discovered that hundreds of people were reporting the same experience, not only with Chromebooks but with a variety of brands of stick, and in all cases when using them in USB-C ports. The consensus was that it’s OK, it’s because of the huge capacities of the current generation of sticks and the poor ventilation of their smaller cases. I pretended to believe that and lived with it for a few more months, until I started to get unreliable behaviour from the toasting stick. First it started unmounting at random times, though it always came back after unplugging and replugging. Then during a backup session came ‘copy failed’ messages and directories going missing from listings, at which point I panicked. I dug out my original old Asus, where I confirmed that the contents of the stick were in fact intact and that it didn’t get hot, and I carried out the backup there on the cool older USB ports. 

Something clearly had to be done, because it’s become part of my work practice to keep this tiny, unknockoutable USB stick permanently in place, and changing to some huge protruding one wasn’t acceptable. My first recourse was a heat-sink, cobbled together by wrapping the Integral stick tightly in aluminium cooking foil held in place with a 15mm binder clip with the handles detached. This worked, dissipating enough heat to reduce it to just warm to the touch, but it was too inelegant for me to live with. I couldn’t find any hard info online about which brands were most liable to overheat, but my own collection accumulated over the years revealed that SanDisk, Patriot and Tab sticks all got just as hot. As I was dolefully scrolling down the endless Amazon list of sticks, one from Samsung caught my eye because it looked nicer: the same shade of grey as my computer, short, fat and shiny. I ordered one and discovered that it’s just as fast, and barely gets warm… 

What moral to draw from this story I’m not really sure. I’m not a semiconductor engineer and can’t find an adequate explanation online from anyone who is, as to why such a huge discrepancy in performance exists between brands. Does the difference lie in the chips themselves, the design of the cases, the electrical interfaces or a problem in USB-C sockets? Such silence is deafening and disturbing when data loss is a distinct possibility. But of course the computer is becoming very much the poor cousin to the smartphone, for which such sticks are not relevant and for which professional standards of data hygiene are barely relevant either.    

[Dick Pountain still occasionally dreams that he’s trapped inside the Windows Registry]

Sunday, 30 November 2025

COLOR ME OLO

Dick Pountain /Idealog 371/ 10 Jun 2025 12:15

I’ve expressed my feelings about science fiction before many, many, probably too many, times in this column. A big fan in my 1960s teens, I used a bout of illness in the ‘70s to binge all the greats – Vonnegut, Ballard, Le Guin, Dick, Pohl, Bester etc – overdosing so badly that I never wanted to read sci-fi again. Looking back now, as emotionally-retarded pseudo-intellectual sci-fi fans appear to be taking over the world, I think perhaps my immune system was telling me something. However this allergic reaction doesn’t apply to the closely related genre of fantasy (or gothic, or cosmic) horror. I can still cringe a little at M.P. Shiel’s ‘The Purple Cloud’, William Hope Hodgson’s ‘The House on the Borderland’, or the entire oeuvre of H.P. Lovecraft. 

One of Lovecraft’s stories, ‘The Colour Out Of Space’, struck me particularly hard. A meteorite lands in a tiny community in the New England woods, containing globules of a weird colour that isn’t in the solar spectrum: its unpleasant effects consume all living things – animals, plants and humans – and turn them to grey ash. Several unsuccessful attempts have been made to film the story, hard work given that the colour isn’t in the Technicolor spectrum either: perhaps the nearest anyone has come is Alex Garland’s 2018 ‘Annihilation’, which clearly shows Lovecraftian influence and employs a digitally-produced shimmer in place of a new colour. I suppose Lovecraft’s story is a sort of parable about environmental destruction, but that’s not what explains its hold on me – I’ve always been fascinated by colour, studying its chemistry, reading up on all the various systems, appreciating great paintings and creating my own digital art as a favourite hobby.   

All of which explains my enormous excitement on reading in the Science Adviser newsletter about new research at the University of California, Berkeley, which ‘creates’ a new colour lying outside the gamut humans can normally perceive, but which can be seen by shining a laser onto one’s retina (don’t try this at home, kids). First some background is required. You already know that our eyes, and hence also digital display devices, perceive or present colours as mixtures of the three components red (R), green (G) and blue (B). That’s because the retinas of our eyes contain three types of colour-sensitive cell called ‘cones’, sensitive to different wavelengths: long (L), medium (M) and short (S). Objects illuminated by natural sunlight stimulate L, M and S cones to different extents, which gives us the experience of different colours. Red light primarily stimulates L cones and blue light S cones, but since M cones respond to the middle of the range, overlapping with both L and S, there’s no component of sunlight that stimulates M alone. 
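
If you fancy seeing the arithmetic, here’s a back-of-envelope sketch in Python. The two matrices are textbook approximations (linear sRGB to CIE XYZ, then the Hunt–Pointer–Estévez conversion to cone responses), not numbers taken from the Berkeley paper, but they’re enough to show that even a display’s purest green still tickles the L and S cones:

```python
# Rough illustration: no light a display can emit stimulates the M cones alone.
# Matrices are textbook approximations (linear sRGB -> CIE XYZ, then the
# Hunt-Pointer-Estevez XYZ -> LMS conversion), not data from the Berkeley paper.
import numpy as np

RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

XYZ_TO_LMS = np.array([[ 0.4002, 0.7076, -0.0808],
                       [-0.2263, 1.1653,  0.0457],
                       [ 0.0000, 0.0000,  0.9182]])

def cone_response(rgb_linear):
    """Approximate L, M, S cone stimulation for a linear-RGB colour."""
    return XYZ_TO_LMS @ RGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float)

for name, rgb in [("red", (1, 0, 0)), ("green", (0, 1, 0)), ("blue", (0, 0, 1))]:
    L, M, S = cone_response(rgb)
    print(f"{name:5s} L={L:.3f}  M={M:.3f}  S={S:.3f}")
# Even the 'green' primary excites L almost as strongly as M, which is why
# isolating the M cones needs a laser aimed at individual cells.
```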

The authors of the Berkeley paper (https://www.science.org/doi/10.1126/sciadv.adu1052) set out to investigate a new system for describing colour perception, by using a laser to illuminate retinal cells one at a time: 

“We introduce a principle, Oz, for displaying color imagery: directly controlling the human eye’s photoreceptor activity via cell-by-cell light delivery. Theoretically, novel colors are possible through bypassing the constraints set by the cone spectral sensitivities and activating M cone cells exclusively. In practice, we confirm a partial expansion of colorspace toward that theoretical ideal. Attempting to activate M cones exclusively is shown to elicit a color beyond the natural human gamut.”

Calling their new system Oz was of course a trigger for me, having cut my journalistic teeth on that notorious hippie journal where my articles were printed in every colour of the rainbow on a background of every other colour of the rainbow. But I digress. Their new colour, produced by stimulating M cells alone with a laser, they named ‘olo’. It can’t be reproduced in paint, ink or on screen, so you’ll only ever see it while sitting in a dentist’s chair with a laser strapped to your head: 

“Subjects report that olo in our prototype system appears blue-green of unprecedented saturation, when viewed relative to a neutral gray background. Subjects find that they must desaturate olo by adding white light before they can achieve a color match with the closest monochromatic light.”

Thankfully olo has not so far shown any inclination to suck the life force out of living beings and reduce them to grey ash. The best approximation is a light turquoise, a colour you might glimpse fleetingly when watching a big wave break on a rocky headland in bright sunshine. Astronomers for a while believed that the whole universe, mixing all its light together, averaged out to a light turquoise, but that turned out to be a bug in their software: the accepted answer is now ‘cosmic latte’, a light beige (hex triplet #FFF8E7 in standard sRGB). 


[Dick Pountain wonders whether Olo could become ‘the new Pistachio’]


GOING NEURO

Dick Pountain /Idealog 370/ 05 May 2025 01:31

I’ve written many, many sceptical words about AI in this column over the years, railing against overconfidence and hype, hubristic pursuit of AGI, deepfakery and content pillage, but nevertheless I do believe AI – once we’ve civilised it – is going to be hugely important to science, economics, robotics, control systems, transport and everyday life itself. Given the political will, the public’s concerns – misinformation, invasion of privacy, theft of artistic data – could be regulated away, but there would remain one colossal stumbling block, namely energy consumption. 

When AI corporations consider purchasing mothballed nuclear reactors to power their compute-servers, the absurdity of AI’s current direction ought to be visible to everyone. The current generation of GPT-based AI systems depends on supercomputers that can execute quintillions of simple tensor arithmetic operations per second to compare and combine multiple layers of vast matrices holding encoded parameters. Currently all this grunt is supplied using the same CMOS semiconductor process technologies that gave us the personal computer, the smartphone and especially the computer game – the Nvidia chips that drive most AI servers are descendants of ones originally developed for rendering real-time 3D games. The latest state-of-the-art GPUs have a watts/cm² power density around the same as an electric cooking hob, and the power consumption of AI server farms scales steeply, as the square of the number of GPUs employed (O(N²) in the jargon of complexity theory). 

In their 1980 bible of the VLSI revolution, ‘Introduction to VLSI Systems’, Mead and Conway devoted a final chapter to the thermodynamics of computation: we’ve long known that logic operations and memory accesses always consume energy, whether in silicon or in the protein-and-salt-water of the human brain. However, the human brain has far, far more neurons and synapses than even the largest current AI server farms have GPUs, yet it consumes around 20 watts as opposed to AI’s 50+ megawatts. Understanding what’s responsible for this immense efficiency gap is crucial for creating a more sustainable next generation of AI, and the answer may lie in new architectures called ‘neuromorphic’ because they mimic biological neurons.

Individual metal-oxide-semiconductor transistors aren’t six orders of magnitude more power-hungry than biological neurons, so other factors must be responsible for the huge difference. One factor is that biological neurons are analog rather than digital, and another is that they act upon data in the same place that they store it. In contrast, the CMOS GPUs in AI servers are examples of von Neumann architecture, with processing logic separated from memory, and program code from data. But the MOSFET transistors they’re made from are inherently analog, operated by varying voltages and currents, so the digital data they manipulate gets continuously converted back and forth between the two domains, at great energy cost. 

Neuromorphic AI hardware designers try to bring data and processing closer together. Intel’s Loihi 2 research chip, with 128 neuromorphic cores and 33MB of on-chip SRAM, communicates via trains of asynchronous voltage ‘spikes’ like those in biological neurons. Steve Furber (of ARM fame) works at Manchester University on a neuromorphic system called SpiNNaker, which has tens of thousands of nodes, each with 18 ARM cores and its own memory, also using spike-based communication. These schemes do reduce data-access overhead, but they remain digital devices, and approaching biological levels of energy economy will require a still more radical step into purely analog computation that exploits the physics of the chip material itself. 
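
To make ‘spike-based communication’ a little more concrete, here’s a toy leaky integrate-and-fire neuron in a few lines of Python – a generic textbook model of the kind of unit Loihi and SpiNNaker simulate by the thousand, not code for either chip, and with the time constant and threshold picked purely for illustration:

```python
# Toy leaky integrate-and-fire neuron: the membrane potential leaks towards rest,
# accumulates incoming current, and emits a discrete 'spike' on crossing a
# threshold. Illustrative only -- not Loihi or SpiNNaker code.
def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # leak towards rest, then integrate this timestep's input
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_threshold:          # fire and reset
            spikes.append(t)
            v = v_reset
    return spikes

# A steady small drive produces a sparse, asynchronous spike train; the
# information lives in the spike timing rather than in a clocked bus transfer.
print(lif_neuron([0.06] * 200))
```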

The US firm Mythic’s AMP (Analog Matrix Processor) chip employs a 2D grid of tunable resistors whose values encode the weights of an AI model, and relies on Kirchhoff’s laws to, in effect, multiply-and-add the analog input voltages and so perform convolutions. However AMP is still fabricated in CMOS. A more radical next step would be to implement this resistive analog computation using low-power ‘spintronic’ memristors – devices in which the orientation of magnetic spins represents bits, as in modern hard disks. One way to implement non-volatile memristors is with FTJs (Ferroelectric Tunneling Junctions), formed by sandwiching nano-thin magnet/insulator/magnet layers, which can be fabricated using existing semiconductor processing. These devices can be written to and switched cumulatively like real neurons, and read out non-destructively using very little power. 
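
Going back to the crossbar idea for a moment, the multiply-and-add trick is easy to mock up digitally. In the sketch below – a generic illustration with an arbitrary conductance scale, not Mythic’s design – Ohm’s law does the multiplications and Kirchhoff’s current law does the additions down each column wire:

```python
# Sketch of analog in-memory multiply-accumulate on a resistive crossbar.
# Each cell's conductance G[i][j] encodes a weight; applying voltages V[i] to
# the rows makes each column wire carry I[j] = sum_i G[i][j] * V[i]:
# Ohm's law does the multiplies, Kirchhoff's current law does the adds.
# Purely illustrative -- not Mythic's actual AMP design.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0, 1, size=(4, 3))    # non-negative for simplicity; real
                                            # chips pair columns for signed weights
G = weights * 1e-6                          # map each weight to a conductance (siemens)
V = np.array([0.3, 0.1, 0.8, 0.5])          # input activations applied as row voltages

I = V @ G                                   # column currents = matrix-vector product
print("column currents (A):", I)
print("digital check      :", (V @ weights) * 1e-6)
```

The point is that the arithmetic happens in the physics of the array itself rather than in shuttling data between separate memory and logic, which is where the claimed energy saving comes from.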

The Dutch physicist Johan Mentink used a recent Royal Institution lecture (https://youtu.be/VTKcsNrqdqA?si=ZRdxeyP4B-hfUw3X) to announce neuromorphic computing experiments in Holland that employ two-dimensional cross-bar grids of memristors, organised into a network of ‘Stochastic Ising Machines’ that propagates waves of asynchronous random noise whose interference yields the spike trains that transmit information. The Dutch researchers claim such devices can potentially scale linearly with the number of synaptic connections, reducing power consumption by factors of thousands. I love the idea of working with rather than against noise, which certainly feels like what our brains might be doing… 
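
For the curious, the stochastic Ising idea can be caricatured in a few lines too: each spin flips at random, biased by the field its neighbours exert on it, so the noise itself does the annealing. This is a toy Gibbs-sampling simulation with made-up random couplings, not the Dutch group’s memristor hardware:

```python
# Toy stochastic Ising machine: spins flip randomly, biased by their neighbours,
# so the network settles towards low-energy states by exploiting noise rather
# than fighting it. A caricature of the idea, not the memristor hardware.
import numpy as np

rng = np.random.default_rng(1)
n = 16
J = rng.normal(0, 1, (n, n))
J = (J + J.T) / 2                  # symmetric random couplings
np.fill_diagonal(J, 0)
s = rng.choice([-1, 1], size=n)    # random initial spins

def energy(spins):
    return -0.5 * spins @ J @ spins

beta = 0.1                         # inverse 'temperature': low beta = lots of noise
for step in range(2000):
    i = rng.integers(n)
    field = J[i] @ s               # local field exerted by the other spins
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    s[i] = 1 if rng.random() < p_up else -1
    beta *= 1.002                  # slowly cool: the fading noise anneals the network
print("final energy:", energy(s))
```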


[Dick Pountain’s neurons are more spiky than most]


Tuesday, 14 October 2025

INTERESTING TIMES?

Dick Pountain /Idealog 369/ 07 Apr 2025 12:06 

Some 26 years ago (May 1999/ Idealog 58) I opened this column thus: “It's said there's an ancient Chinese curse, "May you live in interesting times!" Actually it's been said by me at least twice before….”  

Well, it looks like time to trot it out for the fourth time, though nowadays I can check its truth using my old friend ChatGPT: “No record of this phrase (or anything close to it) exists in classical Chinese Literature: the earliest known attribution is from British sources in the 20th Century, a 1936 speech by Sir Austen Chamberlain (brother of Neville) who claimed to have heard it as a Chinese curse but it seems more like a Western invention made to sound exotic or wise.”  So, I’ve been peddling fake news for a quarter of a century, but I don’t feel too guilty as everyone seems to be doing it. 

The financial turmoil created by President Trump’s tariff barrage purports to be against a whole world ripping off the USA through unfair trade, but we all know it’s really about China. Many of us believe Trump has made a monumental blunder that will ultimately help China to economic dominance: The Wall Street Journal expressed it thus “What a fabulous change in fortunes for the Chinese leader. Mr. Trump has taken an ax to the economic cords that were binding the rest of the world into an economic and strategic bloc to rival Beijing – and at precisely the moment many countries finally were starting to re-evaluate their economic relationships with China.” 

I’ve covered semiconductor fab, Taiwan invasions and DeepSeek in recent columns so let’s not go there again, except to guess that Trump’s shenanigans could cause interest rate, futures and bond market chaos that may bring down the intricate house-of-cards finance of the AI bubble corporations (already under siege from IP lawyers). Instead I’d rather talk about how the ‘interesting’ times are affecting my own everyday activities. 

For many years my online presence, apart from my own website, consisted of Flickr for posting photos and Facebook for chatting/arguing/posing with friends. No longer, as my online self has been shattered into a dozen fragments, none of which have quite the same scope or satisfaction as before. Facebook started to deteriorate for me a couple of years ago as old friends left and un-asked-for content increased, but since Zuck did a Musk on it by removing moderation it’s becoming intolerable, and as a result I’ve begun to work on building a following on both BlueSky and SubStack. 

BlueSky is full of left-leaning refugees from the steaming pit that X has become: lots of excellent, sympa content, in fact too much to read it all, and unanimous enough to risk boredom. I joined SubStack years ago hoping to get paid for some of my stuff but that didn’t work out, so I forgot about it until now, when it appears to be changing into something different. It’s becoming a social platform to rival Facebook, an alternative refuge for X-iters, and I actually find it more interesting than BlueSky, but with one huge reservation – its structure and user interface remain totally baffling. Is it a mailing list or a website, a forum or… what? Do I add posts or notes, and where will the comments arrive? My efforts in computer-generated music are now scattered among a host of platforms including SoundCloud, YouTube, NoisePit, BandCamp and GrungeDump (I may have invented one of those), and they remain stubbornly antiviral on all of them. YouTube is still my main source of entertainment, from genial luthiers to hilarious espresso gurus, Rick Beato’s music interviews to Jon Stewart’s Weekly Show. I even watch some movies there more cheaply than on the paid platforms (recently found ‘Lonesome Dove’ for free). 

The ‘interestingness’ seems to be spreading from online matters to offline. In recent months my Chromebook finally ran out of support (bought a new Asus CX340, cheap, way faster and nicer). BT announced that it was killing off my analog landline early, meaning a new hub, and that my mobile account should be moved to EE. While trying to surf such unwelcome disruptions several websites started playing up – I became adept at navigating poorly-implemented two-factor authentication schemes that trap you into endless loops of passcode tennis, and discovered a new game called ‘hunt the human’ while traversing the maze of AI chatbots that firms now erect in the name of Help…

Shall I end on a cheerful note, that things can only get better? It’s getting ever harder to believe that. Once the DOGE-days are over, assuming some kind of sanity is restored, the craven way the big Silicon Valley corporations crowded onto Trump’s rattling gravy-train will haunt and taint them for years to come. 

[ Dick Pountain pronounces DOGE as ‘doggie’, like that creepy Shiba Inu dog meme] 

KIND OF BLUE(TOOTH)

Dick Pountain /Idealog 368/ 05 Mar 2025 04:38

I’m literally a one-man-band, by which I mean that I make music using an assembly of electro-acoustic gadgets that permit me to do without the collaboration of other human musicians. (I hasten to add that those gadgets do not include drums or cymbals strapped to my legs, like the blokes who used to entertain cinema and theatre queues). 

I love music from a wide range of genres, and I’m picky about the quality of reproduction when listening to other people’s music. I’m far from being a hi-fi nut (and indeed quite sceptical of the excesses they indulge in) but I do run a nice-sounding system based around a Fosi Class D amplifier connected to vintage British speakers, which sucks in music via wire from a vinyl turntable or CD player, and via Bluetooth from my Chromebook, Samsung tablet and smartphone. (I can also listen to those via Bluetooth headphones and earbuds.) It would be handy to incorporate the sound from my Panasonic smart TV via Bluetooth too, rather than SPDIF, but the brute refused to pair with my amp, so I bought a tiny cheap Bluetooth sender/receiver from Amazon and plugged that into the set’s headphone socket. It worked fine, but the cursed TV then hogged the Bluetooth and had to be unpaired before any other source could use it, so I’ve learned to live with its (actually quite acceptable) Dolby sound – which left me with a Bluetooth dongle to spare.

I tried plugging this into my various electric guitars via an adapter, to play them wirelessly: it works but is totally unusable due to latency. Meanwhile in another corner of my music room stands the very analog, one-man-band conglomeration through which I play those guitars: two acoustic, two electric and one bass, switched into a small Marshall amplifier that’s encrusted with five effects pedals and a small Zoom drum machine. This is all connected via standard ¼” jack cables, and it’s taken barely 15 years to arrive at a satisfactory topology, the brainstem of which is a 3-channel passive audio mixer, barely bigger than a Yorkie Bar, which the late-lamented Maplin emporia used to flog for £20 (I bought two). One channel is taken up by the drum machine and a second by the multi-effects box the other instruments go through, so the third remained empty and it was simply irresistible to plug in the Bluetooth dongle. Paired with my Samsung tablet, a whole new world opened up... 

Among the pedals is an Akai Headrush echo/looper I bought 20 years ago after seeing KT Tunstall play one on the telly – this lets me store and replay short clips of music and overdub them with more layers. By logging on to Spotify on the Samsung I can play any tune and store a chunk of it in the Headrush to play over and add to – great fun with slices of Bill Frisell or Julian Lage, or Ron Carter bass riffs. I also have Volcanic Mobile’s MIDI Sequencer for creating backing clips using its piano or brass sounds, as well as my own Algorhythmics programmatic system (which I’ve described here before). I can compose parts using BandLab, a popular free Android DAW, with sampled instruments other than guitars. In fact any software that can make a noise on my tablet, phone or Chromebook can now be routed into my guitar amp, and chunks saved and looped to play over – which even includes my various voice synths and text-to-speech readers, or mic recordings.

However what I really wanted was to create original soundscapes for use as backdrops to improvise over, an ability that I discovered quite by chance in Pegboard (https://semitune.com/pegboard/). Its publisher, Semitune, describes the app as “an advanced mobile polyphonic wavetable synthesizer with a virtual analog filter, 12 standard modules and 6 effects modules.” It’s driven by two separate wavetable oscillators that you can customise via a graphic interface in which you drag envelope shapes and make them evolve over time in complex ways for really rich sounds. You play them back via an onscreen keyboard, either piano-style or an array of accordion-style touch buttons whose layout you can alter to represent a chosen scale or mode. Pegboard isn’t a sequencer so you can’t save whole tunes, just the sounds themselves, but that’s perfect for my purpose, which is to play short phrases into my looper (it can also work as a MIDI controller, a feature I’ve yet to master). Pegboard is free to play with but you must upgrade to Pro for £18.99 to save sounds, which I very quickly did. Think of my guitar as a pen and Pegboard sounds as washes I draw over to make sound pictures for my one-man band’s audience of one…
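
For anyone wondering what a wavetable oscillator actually does under the hood, the core really is just a phase accumulator stepping through a stored single-cycle waveform, with the timbre evolving as you blend between tables. Here’s a generic toy in Python – nothing to do with Pegboard’s own code, and the tables and morph are invented purely for illustration:

```python
# Toy wavetable oscillator: a phase accumulator steps through stored single-cycle
# waveforms, and morphing between two tables makes the timbre evolve over time.
# Generic illustration only -- not Pegboard's implementation.
import numpy as np

SR, TABLE_SIZE = 44100, 2048
t = np.linspace(0, 1, TABLE_SIZE, endpoint=False)
table_a = np.sin(2 * np.pi * t)                      # single cycle of a pure sine
table_b = np.sign(np.sin(2 * np.pi * t)) * 0.5       # quieter square wave

def render(freq, seconds, morph_start=0.0, morph_end=1.0):
    n = int(SR * seconds)
    phase = (np.arange(n) * freq / SR) % 1.0          # phase accumulator, wraps at 1
    idx = (phase * TABLE_SIZE).astype(int)
    morph = np.linspace(morph_start, morph_end, n)    # timbre evolves over the note
    return (1 - morph) * table_a[idx] + morph * table_b[idx]

samples = render(220.0, 2.0)       # two seconds of A3, sine morphing into square
print(len(samples), samples.min(), samples.max())
```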

[You can see Dick Pountain’s one-man-band rig at https://www.facebook.com/share/v/15xD1dS1UD/ ]

DEEPCHEEK?

Dick Pountain /Idealog 367/ 06 Feb 2025 09:12

I used to think that a monthly column was a fairly relaxed schedule compared to, say, a daily newspaper, but no longer. I’d decided to do this one about how China upset the USA by doing AI on the cheap, but now every ten minutes I feel a need to check online for whatever new geopolitical atrocity has just overshadowed that. Nevertheless I’ll start with a nod to the original plan, how China pulled down the knickers of the US AI bubbleheads.  

I won’t dive deep into the tech details of how DeepSeek succeeded in doing what ChatGPT does for a fraction of the price, or how it rocketed to the top of Apple’s mobile-app store hit parade, nor how it did so by parasitising the US AI bros’ own data in just the same morally and legally unsavoury way they got it from us in the first place. No, instead I’d prefer to harp on about something I’ve been harping on about for at least 20 years, namely how the whole AI industry deludes itself because, being led by sci-fi-addled nerds (one of whom now appears to be the de facto POTUS), it has a severely limited grasp of biology and philosophy.

Two columns ago I forcefully expressed my opinion of OpenAI’s plans for continued expansion in order to achieve AGI (Artificial General Intelligence), which they claim would confer human-level reasoning. One objection was its colossal, antisocial power requirements, but my real objection is that I don’t believe AGI is even achievable by simply crunching more data. That’s for reasons of biology I’ve explored here many times, namely that though human intelligence expresses itself through language – by manipulating symbols, which is all any computer can do – language is neither its only nor its most important source. 

We’re animals who have been equipped by evolution to succeed at living in a physical world, achieved with the help of many (more than five) senses to sample what’s going on around us. We build, continuously update and maintain a mental model of that world. We have needs – including to eat, drink, reproduce and avoid predators – which are intimately entwined into that model. We’re born with some built-in knowledge about gravity, upness and downness, light and shade, convexity and concavity, which controls the model in ways of which we’re not conscious but which deeply affect our symbolic processing of that world. We’re by no means just ‘rational’. AI has learned how to pretend to be intelligent only by plundering our symbolic representations of the world, texts and pictures, but knows nothing, and can know nothing, of our embodied experience. Sure, it could build imitations of emotions and needs, but they’d just be more static representations, not extracted from the real world by living in it.  

Historically computers arrived thanks to advances first in mathematics, and then electronic engineering, so it’s hardly surprising that the intellectual atmosphere in which they’re embedded is more influenced by science fiction than by philosophy, anthropology or cognitive psychology. It may well be too late now for AI practitioners to go back and do the necessary reading, since they’ve reached a level of megalomania that convinces them they already know it all, and have just achieved power over the world’s most powerful nation to prove it. 

Were I to be asked to set the tech bros some homework I’d recommend first of all my favourite philosopher George Santayana and his theory of ‘animal faith’, which enables us to navigate life’s uncertainties by deploying our intrinsic knowledge and not ‘overthinking’. That leads directly into the more modern version by Nobel Laureate Daniel Kahneman and his discovery of two modes of thought, the fast, imprecise-but-often-good-enough one, and the slow symbolic one which is all that computers can mimic. Then I’d suggest perhaps George Lakoff and Mark Johnson’s ‘Metaphors We Live By’ which explores the actual content of our intrinsic embodied knowledge and how it modulates our language. Oddly enough, smartphones are already more embodied than GPTs because they have senses (hearing, sight, touch, spatial orientation) and hunger (for charging). Fortunately they can’t yet reproduce.

Having absorbed all that, then perhaps they might dip into Chris Frith’s ‘Making Up The Mind’, the best explanation I’ve read of how the brain creates and updates the mind using fundamentally Bayesian rather than Cartesian mechanisms. That ought to convince them they don’t (or most likely don’t want to) know it all, but the final step would be by far the most difficult, to get them to take democratic politics seriously and divert their megalomaniac schemes toward improving life for the majority of the population rather than a feckless techno-elitist minority. Of course they may prefer to go to Mars, which would provide a rigorous education in embodiment…

[Dick Pountain hopes that Elon doesn’t read his column]



