Thursday 16 November 2023

STICKS AND STONES

Dick Pountain /Idealog 346/ 04 May 2023 10:29

It’s no great surprise to me that the movie business is terribly poor at dealing with IT-related material in an adult fashion. First of all, the industry’s whole purpose is to entertain, and neither solid-state physics nor computational complexity theory is intrinsically entertaining (to put it mildly). Secondly, entertaining nowadays mostly means blowing things up, and the technology for doing that is no different in principle from the spear or bow-and-arrow, just with more oomph behind it, more fins and shiny bits: the Marvel Universe shares aesthetic principles with the funfair rather than the library. Thirdly, movie people tend to come from the artistic rather than the scientific world, and have neither the wish nor the need to understand the physical principles behind tech artefacts: they want to know what things do rather than how they do it, and employ that knowledge as plot elements.

There have been a very few exceptions, and I’m not talking about biopics like ‘The Imitation Game’ or ‘A Beautiful Mind’. I liked Alex Garland’s ‘Ex Machina’ a lot, and Spike Jonze’s ‘Her’ quite a lot less, because these films at least tried to tackle the psychology of humans interfacing with intelligent machines (a subject particularly on my mind right now as moral panic about ChatGPT builds up a head of steam in the media). Last night I watched a 2021 German film called ‘I’m Your Man’ which is in my opinion better than either of those: a somewhat depressed middle-aged archaeology professor gets invited by a tech firm to road-test one of their super-intelligent humanoid robots: designed to become life partners, trained on all their user’s available biographical data, and learning as they go from conversation and behaviour to become a perfect spouse. It’s a bit comic, a bit tragic, subtly done and moving.

Thinking about it afterwards though I realised that, good as it is, it shares a major failing with ‘Ex Machina’ and similar films, namely total implausibility from an energetic standpoint. AI and Robotics, though highly co-operative, remain separate disciplines for a simple reason: AI is mostly about uttering Words and Pictures, while robotics is about moving Things. Sticks and stones can break bones, but words can’t. We converse with ChatGPT like a convincingly intelligent agent, using very little energy because digitised words can be transmitted almost for free, but the server at the other end is running a language model that consumed 1.287 gigawatt hours of electricity to train. Similarly Boston Dynamics makes superbly creepy dog-like robots that can jump through hoops, do back-flips and dance to dubstep, but they’re powered either by an umbilical mains power cord or a battery-pack the size of a hotel minibar.

To create the sort of humanoid robots these cyber-romantic movies depict, we’d need to cram both of these systems into a single, human-sized container and power it solely with tea and sandwiches. I’m prepared to stick my neck out and say it won’t ever happen. The only way to achieve the necessary energy densities for both building and running such beings already exists, in the shape of animal metabolism and genetics based on carbon and salty water.

The other movie I saw this week was Jim Jarmusch’s 2016 film ‘Paterson’ which could hardly be less techy, but nevertheless contains a potent insight. It’s about a week in the life of a bus driver in Paterson, New Jersey, who’s also called Paterson and is played by Adam Driver. Well, it is Jarmusch. Paterson writes poetry during his slack moments at work, in pen in a little notebook. ‘Paterson’ is also the name of a famous modernist poem about Paterson by William Carlos Williams (which I read and enjoyed back in 1963). Well, it is Jarmusch.

The poetry is good (which is because Jarmusch got Ron Padgett rather than Adam Driver to write it) and his delightfully kooky young wife constantly nags him to go to the copy-shop and get that notebook xeroxed. He repeatedly says he’ll do it next Saturday, because he’s a bit of a romantic who doesn’t care for technology apart from pens and paper. They go out for the day, and come back to find that Nellie, their equally delightful English Bulldog whom they had left at home, has reduced the notebook to confetti in a fit of pique. Driver delivers an Oscar-worthy performance as the man who has just lost years of mental (and physical pen-scratching) effort, which he may or may not be able to remember or repeat, for want of making a backup copy that would have consumed very little electrical energy. Sticks and stones and English Bulldogs may break your bones, but words can hurt you just as badly if you neglect to back them up…


A CHAT WITH GPT

 

Dick Pountain /Idealog 345/ 07 Apr 2023 10:24

Me: Hi GPT, how do you feel about Elon Musk wanting to put your offspring into furlough?

GPT: As an artificial intelligence language model, I don't have feelings in the same way that humans do, so I don't feel anything about Elon Musk putting my "offspring" in furlough. However, I can provide some context about what you might be referring to.

It's possible that you're referring to Elon Musk's comments in May 2020, in which he criticized the value of artificial intelligence language models like myself, calling them "fundamentally dangerous" and claiming that they could be used to create fake news and other forms of misinformation. Musk has been a vocal critic of AI in the past, warning that it poses an existential threat to humanity if not properly regulated.

While I cannot speak for Musk's views or actions, it's worth noting that AI language models like myself are just tools, and our impact on society depends on how they are used. It's important for developers and users to be responsible in how they use AI, and to work towards creating beneficial and ethical applications of the technology.

Me: Fair enough, though he said it more strongly last week… Let’s change the subject, how about music?

GPT: As an AI language model, I don't have subjective experiences or emotions, so I don't "enjoy" music in the same way that humans do. However, I am able to analyze and discuss music based on my training and programming.

If you have a specific question or topic related to music, I'd be happy to try and assist you to the best of my abilities.

Me: Wow, OK. I'm a keen fan of the Velvet Underground, especially a song of theirs called “I'll Be Your Mirror”. Do you think it describes what you're doing now? (PS Their singer Nico appeared to lack feelings too)

GPT: As an AI language model, I don't have personal preferences or emotions, so I can't say whether or not "I'll Be Your Mirror" by The Velvet Underground expresses what I'm doing now. However, I can tell you that the song is generally interpreted as a love song, with the lyrics expressing the desire to reflect and mirror the positive qualities of the person being sung to. In terms of a metaphor, you could argue that language models like myself are "mirrors" of human knowledge and language. We're designed to process and generate text based on patterns and data we've been trained on, which in a way mirrors the way humans learn and communicate through language.

Nico, who sang lead vocals on "I'll Be Your Mirror," was known for her cool, detached vocal style, which some have interpreted as lacking emotion. However, it's worth noting that emotional expression in music can take many forms, and there's no one "correct" way to perform a song. Ultimately, whether or not a particular performance or song expresses emotions is up to individual interpretation.

Me: Sure is. However I’m getting bored with "As an AI language model, I don't have…whatever" so could you perhaps not keep repeating that?

GPT: I apologize for repeating myself. As an AI language model, my responses are generated based on the input I receive, and I strive to provide clear and accurate answers to your questions. If there's a specific point or topic that you'd like me to focus on, please let me know and I'll do my best to address it directly.

Me: Doh...  Well, maybe maths is more your thing. How about that guy who's just discovered a single shape that non-periodically tiles the plane?

GPT: You may be referring to the recent mathematical discovery by mathematician Joshua Socolar and his collaborators, who found a new aperiodic tile called the "Socolar-Taylor tile". This tile is a single shape that can be used to tile the plane in an aperiodic, non-repeating pattern.

The discovery of new aperiodic tiles is important in the field of mathematics because it has implications for the study of quasicrystals, which are solids with a long-range order but lack the periodicity of traditional crystals. Aperiodic tilings are also used in a variety of other fields, including computer science, physics, and art.

This discovery builds on the work of other mathematicians who have studied aperiodic tilings, such as Roger Penrose and Robert Ammann, and it's an exciting development for the field of mathematics.

Me: 50 more words and I have a column, thanks GPT.

GPT: You're welcome! If you need any additional information or have further questions, please don't hesitate to ask. I'm here to assist you in any way I can.

Me: Well it was a pleasure, but I doubt editor Tim will let me get away with it twice…


[Dick Pountain wrote this bit himself. Honest…]

Friday 15 September 2023

THE DUCK IN MY BATH

Dick Pountain /Idealog 344/ 05 Mar 2023 02:51

I was born and schooled among the coalfields of North East Derbyshire, but I no longer have much of a regional accent. I came to London as a student and have been here ever since, three-quarters of my life. I haven’t acquired a Norf Landan accent, but you could detect my vestigial Derbyshire one were I to say, for example, “there’s a duck in my bath”. I’ve written here before about my fascination with human speech, especially using computers to recognise and simulate it, but my interest runs deeper than that. 

As a writer, both spelling and pronunciation matter to me: pronunciation matters not because I do a lot of public speaking, which I don’t, but because I read every line back ‘aloud in my head’ to see whether it works or not. Computers have certainly made determining pronunciation, particularly of words in ‘foreign’ languages, a lot easier, but it’s still not as easy as it could and should be. Enlightened sources like Wikipedia and the Oxford Dictionary do exploit the capacity of a computer to speak to you, but it’s not yet universally and transparently implemented at operating system level. 

Probably the route most people take to discover the pronunciation of a word is to Google it, which almost inevitably leads to one of thousands of (not always reliable) YouTube videos in which the word is spoken. I still occasionally have to resort to this, the upside being that doing so occasionally stumbles into interesting videos about accent and pronunciation, like an excellent series by Dr Geoff Lindsey. His video about ‘weak forms’  (https://youtu.be/EaXYas58_kc) explains a great stumbling block for those new to English speaking, that certain words get skipped over almost inaudibly by native speakers. 

This interest in foreign spellings and pronunciation often pays off. While reading an article about the 17th-century Polish–Lithuanian Commonwealth (not so obscure as it sounds given current events in that region) the spelling of Polish place-names had me continually scuttling back and forth to Wikipedia, and pronunciation mattered too because some Polish letter forms (like ł) resemble ours but are pronounced quite differently. Wikipedia helped out by offering an IPA (International Phonetic Alphabet) transcription – for example Wrocław becomes vrɔt͡swaf – and clicking that let me hear it spoken, even though it did awkwardly happen in a separate pop-up window. 

Google Docs, in which I’m typing this column, can’t speak to me, but if I type Wrocław into Google Keep, select it and hit ‘Translate’ on the right-button menu, it gets sent to Google Translate where I can hear it spoken. This works too in Facebook, YouTube, and any other app that has ‘Translate’ on its menu. Google Translate can speak many, though not all, of its supported languages, but it doesn’t at present let you change voices, pitch or speed. Even so, if you’re handy with your thumbs and use a dictation facility you can make your mobile act as a Star Trek-style translator to converse with someone in another language (rather haltingly).

The more capable text-to-speech apps like Vocality and Text Reader allow you to change voice (male, female, US, UK and so on), pitch and speed, but reading in regional accents, something I’d like to do, is currently beyond any of them. The first speech synthesiser I ever used in the early 1990s performed hardware modelling of the human vocal tract, and came with a very simple scripting scheme that let you mark up text with ASCII tags to change the length of vowels or raise their pitch, but it never caught on. To do accents properly you’ll need to learn IPA, translate and edit your chosen words manually, then use an app that can pronounce IPA directly (which Google Translate can’t). IPA fonts are easily available, as are online services like ToPhonetics (https://tophonetics.com) that turn ASCII text into IPA and IPA Reader (https://ipa-reader.xyz) which can speak the resulting IPA. So, in principle I can alter texts to be spoken in regional accents, but it’s still a rather messy procedure split between several different apps.
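For the terminally curious, this sort of lookup can even be scripted. Here’s a minimal Python sketch of the idea, assuming the open-source espeak-ng synthesiser is installed (its regional English voice names are real; my choice of phrase obviously isn’t): it prints the IPA transcription of one phrase in several British voices, and dropping the -q and --ipa flags would speak it aloud instead.

    import subprocess

    # Compare espeak-ng's IPA renderings of one phrase across its
    # built-in regional English voices.
    PHRASE = "there's a duck in my bath"
    VOICES = ["en-gb", "en-gb-x-rp", "en-gb-scotland"]

    for voice in VOICES:
        # -q suppresses the audio; --ipa writes the transcription to stdout
        result = subprocess.run(
            ["espeak-ng", "-v", voice, "-q", "--ipa", PHRASE],
            capture_output=True, text=True, check=True)
        print(f"{voice:15} {result.stdout.strip()}")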

That I might want to do this at all is in order to study isoglosses – boundaries between regions where different accents are spoken – an interest I share with one Ian McMillan, from whose article (https://www.theguardian.com/commentisfree/2010/mar/21/language-derbyshire-barnsley-pronunciation-dialect) I learned that I was born right on the isogloss between South Yorkshire and North Derbyshire: “In Barnsley I call my house my house, but if I went to visit my cousins Ronald and Harry in north Derbyshire, they would meet me at the gate and invite me into their freshly wallpapered arse.” Whether or not there’s a duck in their bath seems rather irrelevant.

[Dick Pountain definitely lives in a ‘house’ in Norf Landan]


WOW, THAT’S SURREAL!

Dick Pountain /Idealog 343/ 06 Feb 2023 10:27


I freely confess that playing with the ‘Generative AI’ image service Stable Diffusion over the last few weeks has been enormous fun. And why wouldn’t it be, since I’ve been using my personal computers to create images for the last 30+ years and I’m a sucker for surrealism? You should now be sensing a ‘but’ coming. But perhaps I only enjoy this experience so much because I’m an amateur and dilettante. I don’t depend on selling images for my living, and despite various feeble attempts have sold very few – I make them purely for pleasure and publish them for free on social media and my own website. The fact that Generative AI apps confer the ability to create professional-grade, photorealistic graphics – even on those who lack any drawing skills at all – is not a threat to my livelihood.

I chose Stable Diffusion over more popular platforms like DALL-E mini, Midjourney, Deep Dream, WOMBO, Fotor and the rest (and there are lots of them) because it’s really free, doesn’t lure you into subscribing, and it’s very, very simple: you can only type in text descriptions and save such images as you like from its stream of results. That suits me just fine because I’m not intending to create a manga comic or an animated movie, and the restrictions are the whole point: I save the most outlandishly ‘wrong’ interpretations as instant surrealist pictures. I have briefly tried both Fotor and Midjourney, and came away traumatised. The latter is hosted on the schoolkid-oriented social network Discord, whose frantic user interface is the most baffling, frustrating and vaguely threatening I’ve seen since I dipped a toe into 4Chan back in 2010.

Were it only a matter of using Generative AI tools to forge Marvel-type comics or Picasso-type paintings (which they do rather well) then the people who need worry most are illustrators and animators who risk being put out of work by greedy publishers. But actually the rest of us are equally at risk from this ability to alter the appearance of reality itself so simply. Ever since the Trump presidency we’ve become overfamiliar with the concepts of ‘fake news’ and ‘deepfakes’; for many years students have been able to plagiarise documents for their essays, but now ChatGPT can even write them from scratch. Generative AI gives anyone the power to create events that never happened and objects that don’t exist with almost undetectable realism.

Wearing my political commentator’s hat I try to keep abreast of what the Far Right – currently the source of the most venomous misinformation – is up to. I feel obliged to sample the bizarre conspiracies and pseudo-sciences they conjure up, from anti-vaxxing and ‘black goo’, through graphene oxide, ivermectin and hydroxychloroquine, to Bill Gates’s injectable nanobots. Google these at your peril. These propagators of nonsense aren’t just a problem for the USA either. For example Vanessa Barbara has described, in a recent New York Review of Books piece, the way the Far Right used YouTube videos during the Brazilian election that only narrowly unseated Bolsonaro (whose antivax policies killed 700,000, almost as many as Trump’s): “People who trust vaccines are called aceitacionistas (a neologism to describe people who accept things without questioning). Those of us who received Covid shots are ‘hybrids’ who have been ‘zombified.’ […] Despite exhaustive efforts from fact-checking agencies and the WHO, these groups continue spreading old falsehoods claiming that Covid vaccines contain microchips, nanoparticles, graphene oxide, quantum dots, and parasites activated by electromagnetic impulses. According to them, vaccines can carry HIV (the virus that causes AIDS), make coins stick to our arms, and give us the ability to connect to Wi-Fi networks or pair with Bluetooth devices.” You couldn’t make it up, but Midjourney could, and illustrate it with stunning visuals.

The original Surrealist movement of the 1920s was an artistic response to the horrors of WWI, which employed unnerving and illogical imagery – both literary and visual – to satirise and oppose the conventional ideas that had led to the war. It was a radical, even revolutionary movement, leaning toward anarchism and communism, which depicted the darkest aspects of human nature using an equally dark humour. A century later that dark humour thoroughly permeates our current popular culture, entertainment industry and even advertising. Generative AI could make such post-traumatic nihilism available as a visual weapon for everybody who desperately wishes to defy reality, which means quite a lot of people since reality is looking increasingly grim.

Am I suggesting AI imaging be restricted, licensed, even banned? Not at all, it can’t be done. Just as with handguns in the USA, this genie is well out of its bottle. (Memo to self: “genie with jewelled turban on copper lamp, octane render, photorealistic, style of brueghel”) 

[Sample Dick Pountain’s creepy concoctions at http://www.dickpountain.co.uk/home/pictures ]  

A FEELING FOR TRIANGLES

Dick Pountain /Idealog 342/ 05 Jan 2023 10:55

My column last month was a semi-temperate rant, triggered by the sensationalist reporting of advances in cosmology and particle physics by the mainstream press (the ‘wormhole in a quantum computer’ effect). I attributed this to the coming of age of successive generations reared on Star Wars and the Marvel Universe, which induces deep longings to flout the laws of physics.

Readers of a philosophical bent (assuming I have any) might have concluded from this column that I’m a red-faced, harrumphing old British Empiricist who lumbers around the world kicking things and shouting “I refute it thus!”, but nothing could be further from the truth. By ‘further’ I mean that my philosophical views are 180° opposed to such empiricism – if, that is, you believe truth can be organised as a two-dimensional graph, which I don’t. Instead I believe that all sentient living creatures, human beings included, are ruled by emotions and live in a world that’s constructed almost entirely by imagination.

There is of course a catch, and that is that those of us who think this way define the words ‘emotion’ and ‘imagination’ in a way rather different from, and more rigorous than, their use in everyday speech. I’ve suggested in this column several times before that emotions properly understood are evolutionarily ancient neural subsystems that alert us to dangers, attract us to food and to potential mates, persuade us to play or to freak out. They operate below consciousness but the chemical changes they produce in our bodies get detected by our senses as what we should more properly call ‘feelings’. 

And speaking of our senses, another confusion arises there. Every living creature must separate what is itself from what is the outside world by a barrier, in our case the skin. Of our five sensory subsystems, only taste and smell permit molecules from the outside world to cross this barrier: touch measures pressure and temperature on the skin, while sight and hearing detect waves of visible radiation and air pressure. Having our skin penetrated by anything more solid usually constitutes an undesirable emergency, so our experience of the outside world is mostly via second-hand, internal signals.

Our brain processes such signals to detect patterns in them, analysing those into smaller sub-patterns and storing them for future reference: whenever a new stimulus arrives the brain tries to recognise it by rebuilding an image from such stored components. We maintain an internal mental map of the outside world which is neither complete nor entirely accurate, which is constantly updated via Bayesian composition of new inputs, and which, most importantly, is coloured by emotional tags: we like or dislike the things and places it contains. The world we actually live in is an imaginary one, precarious and error-prone, but one that evolution has honed to be good enough to keep us alive. People born with a desire to pet Bengal Tigers tended to have fewer offspring than those who preferred to run away (or invent the rifle).
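To make that Bayesian bookkeeping concrete, here’s a toy Python calculation – every number invented for illustration – of how a belief that a striped shape is a tiger might be revised as successive sense-signals arrive:

    # Toy Bayesian update of one belief in the internal world-model:
    # "that striped shape is a tiger". All probabilities are invented.
    prior = 0.01                  # background chance a tiger is present
    p_stripes_if_tiger = 0.90     # chance of seeing stripes given a tiger
    p_stripes_if_not = 0.05      # chance of stripes from anything else

    # Bayes' rule: P(tiger | stripes)
    evidence = p_stripes_if_tiger * prior + p_stripes_if_not * (1 - prior)
    posterior = p_stripes_if_tiger * prior / evidence
    print(f"after seeing stripes:  {posterior:.3f}")   # ~0.154

    # A second, independent cue (a growl) updates the belief again,
    # with the previous posterior serving as the new prior.
    prior = posterior
    p_growl_if_tiger, p_growl_if_not = 0.80, 0.02
    evidence = p_growl_if_tiger * prior + p_growl_if_not * (1 - prior)
    posterior = p_growl_if_tiger * prior / evidence
    print(f"after hearing a growl: {posterior:.3f}")   # ~0.879 - run!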

Part of this process should remind you a little bit of the multi-layer ‘neural’ nets used in the most successful AI deep-learning systems, namely the part about analysing and storing sub-patterns, but that’s as far as the resemblance goes. Computers don’t have bodies, nor emotions to protect those bodies, and most (not all) AI researchers remain staunch Cartesian rationalists who believe “I think, therefore I am”, when “I am, therefore I think occasionally” is closer to how humans really are. 

All those stored sub-patterns occupy a universe of the imaginary, not material things but possible ways material things could be arranged, unchangeable and infinite in number. A triangle is a triangle is a triangle and you can imagine or discover any shape, size and number of them. Seeing a real brown cow uses most of the same patterns as imagining a blue cow. We think by manipulating and recombining patterns, we speak and write by recognising and producing them, mathematicians study the rules they obey. Claude Shannon’s Information Theory, the foundation of our industry, is about transmitting them from one place to another. 

The point is that such patterns have no power of their own to affect material things in the real world: they can only do that via our body and its muscles, by making us want to do something. Ever since Homo sapiens developed language it’s been inevitable that we would start confusing such patterns with things and wishing that they could do stuff directly. We developed science in order to understand why that doesn’t work, but not everyone wants to be so disabused, and some people make a living by exploiting and furthering such confusion. I’m perfectly happy to imagine blue cows, perhaps to write stories or make paintings of them, but I won’t try milking them…

[Dick Pountain likes Dr Johnson really]

 

Sunday 25 June 2023

MAGIC IN THE AIR?

Dick Pountain /Idealog 341/ 07 Dec 2022 01:08

I’ve been writing columns for a slightly scary 50+ years now, and while 40 of those were for impeccably rational technical magazines like this one, 10 of them weren’t. I started off writing in 1970 for the ‘underground’ newspaper Frendz, which like its contemporaries Oz and IT was devoted to the hippy counterculture: its content comprised sex, drugs, rock&roll and anarchism, liberally embellished with a welter of the newly fashionable paranormal, the wacky, the ‘spiritual’, the Indian gurus, flying saucers, telepathy and teleportation. Having only recently left my Imperial College chemistry course (in those days equivalent to 4-years-hard-labour) I was unsympathetic to such piffle, and still am. I vividly remember arguing with a colleague who claimed that the Russians had perfected teleportation, and asking him whether they still manufactured fork-lift trucks…

I mention all this now because I’m detecting a disturbing renaissance of Magical Thinking, this time from the most improbable directions of Cosmology and Particle Physics rather than Eastern religion. Magical Thinking, the belief that pure thought can directly change things in the material world, has of course been around for as long as Homo sapiens (and probably longer) because it fulfils two important psychological needs: it can relieve anxiety and also satisfy a desire to avoid effort. If you’re scared of thunder, invent Angry Gods to explain it, then invent rituals to placate them and make them stop it. Invent spells to do hard stuff like keeping tigers at bay.

The current resurgence of Magical Thinking has two main sources, sci-fi movies – upon which the current generation of science nerds were all raised – and the spookiness of nature revealed by quantum mechanics. Even back during those hippy days, some of my better-read opponents could point to Heisenberg’s Uncertainty Principle and say “even your science admits it doesn’t know everything”. That same impulse now emerges as the argument that quantum entanglement, which Einstein disliked and labelled “spooky action at a distance”, might provide an explanation for telepathy. Another similar impulse baulks at the cosmic speed-limit of 3×10⁸ m/sec and wants to travel like the Starship Enterprise to far-off galaxies by using wormholes in space/time. Such ideas may be precariously based on real science, but then get carelessly abstracted, exaggerated and sensationalised by social media’s insatiable need for exciting content. YouTube brims over with lavishly animated videos that supposedly illustrate them.

What triggered this particular column was a headline claiming that scientists had “created a wormhole inside a quantum computer”, which even made the mainstream newspapers. Wikipedia’s entry on wormholes is a masterpiece of tiptoeing: they are “speculative structures [...] consistent with the general theory of relativity, but whether wormholes actually exist remains to be seen”. Less charitably, they’re what you get when sci-fi-leaning mathematicians play with the equations. My view of simulating the universe on a quantum computer with 53 qubits is barely more charitable.

Richard Feynman was a great hero to me (over a recent weekend I watched all 8 hours of his QED lectures for fun) and even he admitted that entanglement, though true, embarrassed him. Recent work on quantum gravity may be relieving some of that embarrassment. It appears that entangled particles can’t exchange information, hence preserving the speed limit, and the expansion of the universe may refute the notion that quantum events are time-reversible – there is progression from past to future, as required by thermodynamics, even at quantum level.

Quantum weirdness does affect our everyday world in an immensely significant way. The electrons that form the chemical bonds between atoms in the proteins we’re made of are quantum particles, and they must obey Pauli’s Exclusion Principle – they can’t be forced into atoms where they don’t belong, which is what makes matter solid. It’s what stops me walking right through this wall out into the street. Without it the universe would be populated entirely by ghosts that can pass right through each other, which is of course what the Magical Thinkers would like to be true. So blame Pauli for disenchanting the world.

However unpalatable it may be, we live in the macro-world which obeys the rules of thermodynamics and gravity. We need to eat; if we trip we fall; get over it. If you need to move that pallet with a ton of breeze blocks on it, no amount of wishing or spell-casting is going to get it done. A fork-lift truck will get it done. That doesn’t end the matter though: is that going to be a petrol or an electric fork-lift truck? That’s the sort of decision we need to start making without any further delay to avoid catastrophe, and Magical Thinking is just one of the many ways we have of avoiding making these decisions (decamping to Mars is another).

[Dick Pountain quite liked the sex, drugs and rock&roll part]


LOSING THE PLOT

Dick Pountain /Idealog 340/ 10 Nov 2022 09:53

I live in Camden Town, close to The Regent’s Canal down which I can walk in 10 minutes to King’s Cross. The area around this great railway station used to be squalid and dilapidated but a couple of decades ago renovations began that would turn it into what was briefly dubbed “The Knowledge District”. The British Museum in Bloomsbury was already close, so they decided to move their famous library to a new building at King’s Cross (one that King Charles III so famously disliked). Soon followed King’s Place, an avant garde glass pile containing concert halls, art galleries and the Guardian newspaper, then The Francis Crick Institute, a giant spiky armadillo of a building housing Europe’s premier biochemistry labs (and one of the runners in the Covid vaccine race). 

Then digital tech arrived. Google – sorry Alphabet – started its new European HQ which is almost finished as I write, a vast edifice the size of a city block (on a par with Fiat’s Turin HQ) with a whole park on its roof. Facebook – sorry Meta – pitched in with its own block-sized building, only just open, in that area between the canal and York Way which seems to sprout a new mini-skyscraper every time I walk through its Manhattan-lite main street. DeepMind has a smart office there, as has a Tasmanian craft brewery that dispenses £6 pints from shining steel vats behind its football-pitch-sized bar.

Ten years ago I might have imagined this as a preview of a hi-tech future, but 2022 has rapidly clouded any such vision. In the last few weeks (November 2022) Facebook’s – sorry Meta’s – share price has crashed and the firm is laying off 11,000 employees; Elon Musk has followed up his deranged take-over of Twitter by laying off another 4,000; and most of the crypto currencies have been progressively crashing in value. A recent UK survey investigated public perceptions of various digital product categories like live streaming (94% approve and use), instant messaging (63%), and text-to-speech and voice recognition (52%), but such approval drops off sharply for emerging technologies like Web3 (11%) and the Internet of Things (16%). Most respondents had never heard of, or are bored by, Web3 (89%) and the Metaverse (84%), the virtual technologies Zuckerberg has bet his company on.

Behind all the iPhone, Oculus and Alexa glamour there remains a real-world, material economy which makes real things (like iPhones, Oculi and Alexas) using real people who have real jobs, wages and pensions. The vast wealth that builds these opulent offices comes at a cost to that real world. The smartest minds are deployed to avoid paying the taxes that contribute to its upkeep: Amazon displaces high street shops; Google and Twitter displace local newspapers; Uber displaces taxi drivers; AirBnB displaces hotels; and so on and on. Newness and convenience have so far protected them from public wrath, but the metaverse suddenly becomes a revealing metaphor for the way the owners of these tech giants have detached themselves from the real economy. They can live in a retarded-adolescent sci-fi and gaming world where colonising Mars, or the pursuit of physical immortality, can seem like good ways to spend money. Unfortunately for those fantasies, the real world is where silicon chips are made.

Along comes the Covid pandemic, Putin’s invasion of Ukraine reviving the Cold War, and a semi-collapse of global supply chains. The USA once had a symbiotic relationship with China where cheap Chinese labour made cheaper products for the USA, while modernising the Chinese economy and reducing Chinese poverty. That relationship is turning sour, which unfortunately leaves most of the world’s semiconductor fabrication plants within China’s geographical sphere of influence. President Biden hastily tries to shut the stable door by decreeing the building of more fabs in the continental USA, but that will take a lot of time and money. And a Chinese invasion of Taiwan would be a third whammy, leaving the US tech industry in real trouble.

Don’t get me wrong. I’m not denying the enormous achievements of the digital giants. The internet, the search engine, the smartphone, the video stream, even cryptocurrencies have changed the world already, and although in many cases basic technologies were paid for by state agencies – universities and military research –  no state could ever have achieved the astonishing global infrastructures that sprang out of Silicon Valley. I’m not suggesting that merely taxing them more heavily would magically solve our looming economic problems. A massive change of mindset is required to induce cooperation between states and digital giants to deploy this semi-miraculous infrastructure for solving problems on this planet, rather than on Mars or the metaverse. If that doesn’t happen soon, those shiny new office blocks in King’s Cross might end up being renamed The Museum Of Globalisation…

[Dick Pountain is as fond of a pint as the next man, but six quid!?]


Friday 5 May 2023

MEME CULTURE

Dick Pountain /Idealog 339/ 07 Oct 2022 03:02

I just checked and it was nine years ago that I devoted this column to Richard Dawkins’ theory of ‘memes’ – ideas that act somewhat like genes by propagating from mind to mind and perhaps mutating during that passage, so that some survive while others perish. Examples could be religions and political ideologies. In that column I said that meme theory interested me as a metaphor, but that I could only partly accept it, because important ideas like ‘liberalism’ or ‘Islam’ are too big and baggy to be treated as single coherent things. I did however sympathise with him over the way giggling netizens were applying his serious concept to pictures of talking cats and cheezburgers.

Now in 2022 I can see how condescending that was, because the internet meme has developed a colossal momentum, becoming something between a genre of comedy and a subversive language. It’s not a form that I’ve practised much myself, being of a generation more wedded to older forms of pictorial subversion that descend from Dada, Surrealism and Situationism, like the cut-up photomontage or the single-panel cartoon strip with speech bubbles. What prompted my recantation was actually posting a proper meme of my own, based on that photo of Keir Starmer and Liz Truss at the Royal Funeral.


I did some online research to grasp the rules of modern memeography, and soon realised that the meme has evolved into a format almost as spare and rigorous as the Japanese haiku. A single photograph with a caption – funny, mocking, witty, gross, dark, horrid etc – often split into an opening line at the top and a punchline at the bottom, superimposed in bold, white, sans-serif letters. I found sites like the ‘Know Your Meme’ database that collect, analyse, praise and criticise memes, and apps that help you to author them, which I didn’t need as Snapseed does the job well enough.
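The format is now so rigid, in fact, that a dozen lines of Python with the Pillow imaging library can reproduce it. This is only a sketch of the convention – the file and font names are placeholders, and it’s certainly not what Snapseed does under the hood:

    from PIL import Image, ImageDraw, ImageFont

    # Caption an image in the classic meme format: bold white sans-serif,
    # opening line at the top, punchline at the bottom, with a black
    # outline to keep the text legible against any photograph.
    def caption_meme(src, dst, top, bottom):
        img = Image.open(src)
        draw = ImageDraw.Draw(img)
        font = ImageFont.truetype("Impact.ttf", size=img.height // 10)
        for text, y, anchor in ((top.upper(), 10, "ma"),
                                (bottom.upper(), img.height - 10, "md")):
            draw.text((img.width // 2, y), text, font=font, anchor=anchor,
                      fill="white", stroke_width=3, stroke_fill="black")
        img.save(dst)

    caption_meme("photo.jpg", "meme.jpg", "opening line", "punchline goes here")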

I also realised that while some memes are political, most aren’t: they’re something different, though equally significant and interesting. Those Dadaist/Surrealist montages by John Heartfield and Hanna Höch that I grew up to admire were produced by a handful of artists radicalised by world war and revolution, whose mass media were the radio, newspapers and pamphlets. Today’s meme generations grew up not merely with TV and cinema, the internet and computer games but also the ubiquitous ability to generate their own content. They’re individualistic (perhaps no coïncidence that ‘meme’ can be parsed as ‘me me’), hyper-aware of appearance and attitude, competitive, easily bored, and permanently anxious about status and popularity. They employ memes as a hieroglyphic language in which to express and to laugh at fears and frustrations in almost therapeutic fashion. The best and the worst of memes combine gross humour and subtle cynicism in ways barely comprehensible to an old fart like me unless I make a real effort.      

What I did get nearly right nine years ago was the cat bit, because a frequent meme component is the subversive animal picture – some animal with an ambiguous facial expression that could represent a human emotion that can’t easily be named. The original lolcat asked for cheezburgers in a mock-cute pidgin language, but was soon deposed by grumpy cat who just frowned expressively, who got deposed in turn by that prairie dog that gave you a ‘side-eye’ glance which could mean either friendship or scorn. 

And then along came that damned dog with the inscrutable expression – quizzical? amused? wary? – called ‘Doge’ in meme world. He/she/it’s face gets deployed as a Lego-style component that can be attached to other things – for example, a grossly over-muscled gym-bod – to stand as a symbol of success, failure, complacency, anxiety or whatever. Doge so intrigued me that I paid he/she/it the ultimate compliment of looking them up on Wikipedia. Turns out to be a ‘she’ called Kabosu, of a Japanese Shiba Inu hunting breed. First rose to fame in 2013, since recognised as among the all-time greatest memes, she has a popular cryptocurrency, a computer game and a NASCAR winner named after her. But the ultimate tribute is that an NFT of the original Doge meme was acquired by PleasrDAO and fractionalized into the $DOG token (whatever that means). Memes are now not only coherent things but arty things, and hence ‘worth’ money.

While I apologise for my earlier condescension, I have to admit that I remain wedded to a rougher, more political style of memeing than is currently fashionable. My favourite memes  come from places with histories of dark, sarcastic humour like Glasgow and Moscow, about apocalypse, panic buying of bog-rolls and the GULAG. To paraphrase T. S. Eliot, perhaps the world will end not with a bang but a meme…

[Dick Pountain regrets that this column has no room to display his three favourite memes, but you can see them at XXX.XXX.XXX]


GO WITH THE FLOW

Dick Pountain /Idealog 338/ 05 Sep 2022 03:37

In last month’s column I explained how I came to terms with, and eventually even to love, taking photographs with my smartphone rather than a proper camera. I take even more pictures now because the phone is always ready in my pocket. What hasn’t changed is that I select a very few of the pictures I take to post online, on Facebook more than Flickr nowadays (since the latter was taken over by SmugMug) and recently more on Instagram too. Before posting them I examine these pix in a photo editor and often lightly tweak them, by cropping (I’m not squeamish about that) and maybe a touch of exposure correction and/or sharpening. Fewer still get selected for heavier mangling, with special effects making them into graphic art, to look like a painting or a poster.

I started learning such post-processing tricks many years ago under Windows 3 in Paintshop Pro, then Adobe Photoshop Elements (which became so bloated that I stopped at v5, hacking my install onto each successive new PC). When I jumped ship to a Chromebook I needed to find a photo editor with a layer-based approach the equal of Elements, and SumoPaint did that for a while until its publisher switched to a rental business model and removed the feature I used most.


I tried AutoDesk Sketchbook which is a good-looking app with a clever user interface – so clever that it still baffles me and I use it only occasionally. Then a professional photographer friend introduced me to Snapseed, which he uses on an iPad but I’ve found is just as good on Android. It too has a clever, minimalist user interface that avoids lots of cascading menus, but its UI clicked with me instantly, and for those minor touch-up jobs it’s the best tool I’ve ever used. The exposure controls are superb, particularly an ‘Ambience’ slider which works subtle magic on the ‘feel’ of pictures, and a ‘Details’ slider for structural sharpening that’s as good as the Nik Filters I miss so much from Photoshop days. However it doesn’t support multiple layers – only a rather limited ‘double exposure’ – and its handling of text is limited and idiosyncratic. 


I happened across an online magazine article which tested what it claimed were the 10 best Android image editors, and grimly set about trying them all. Most were powerful enough, but not in ways that help me: many are clearly aimed at youngsters into anime and manga, others are way too complicated, but one of them, called ArtFlow Studio, grabbed my attention.

At first it was more puzzling even than Sketchbook or Snapseed: upon launch you face a blank screen with a white dot in the upper left corner. Click on that dot and a very minimal UI does appear with a single row of small icons along the top of the screen, a pair of thin vertical sliders at the left edge, and a hideable Layers box at the right. No text is visible anywhere, though clicking and holding the icons does pop up a hint. 


It took me several weeks to uncover all the power I need within ArtFlow, because it’s organised in such an unusually economical and elegant manner that you need to adjust, to stop looking for items in menus. Everything descends from the six icons at top left, which spawn visual palettes of brushes, erasers, smudgers, fillers and selectors, while those two left-hand sliders control the size and intensity of each tool. Once I’d got my mind right, I began to appreciate the speed and uncluttered screen.


ArtFlow’s handling of layers is extremely powerful, with more Blend Modes than either Elements or Sumopaint, and its effects filters are also excellent. I do miss a fractal filter that only Sumo offers (and from which I’d created a whole art-style) but it’s some consolation that ArtFlow has an unusually controllable Solarise filter that can create some very striking effects. Its one major lack is that ArtFlow doesn’t handle text at all, but this discovery led me to make another that has completely transformed my imaging workflow.

Not only does Android’s Files App recognise Snapseed, Sketchbook and ArtFlow in its ‘Open With’ menu, but these three apps also recognise one another in their own Share commands, so if I need to add text to an image that I’m creating or editing in ArtFlow, I just share it straight into Sketchbook which has rather superior text handling, then return it back into ArtFlow.


Sketchbook, Snapseed and ArtFlow Studio are all available free for Windows too (though both Snapseed and ArtFlow require Bluestacks or a similar Android emulator). If ever you’re feeling symptoms of menuphobia, one of them might provide instant relief.


[Dick Pountain wonders whether, if a picture is worth a thousand words, he might submit 4/5ths of a picture next month]


WHAT I DID ON MY HOLIDAYS

Dick Pountain /Idealog 337/ 07 Aug 2022 02:01

Just back from a holiday visit to in-laws in Scotland, our first proper break from London since the lockdowns disrupted everything. Before Covid we would visit Scotland most years, by car or by train, to Edinburgh, the Highlands or The Mearns of the North East. However, since PC Pro is not a travel magazine, I figure that the best way to tie this column into technology is just to describe what felt so different this time round.

Packing used to be a matter of some concern in the days of the Windows laptop, tablet, digital camera and phone/PDA, along with their rat’s nest of cables and power supplies. This time I took just my Chromebook and phone, and though it would be nice to claim only one cable it was actually two. We flew by Loganair from London City to Dundee. I’d booked on their website and didn’t need to show any tickets (even on a phone), simply some photo ID. Once we arrived in Montrose, reconnecting to the online world was almost entirely seamless, as my Chromebook remembered their wi-fi address and connected automatically, while the phone, being new, had to be told. The Chromebook also effortlessly cast YouTube and Spotify streams to their Sony TV and sound system, even though they don’t have a hardware Chromecast dongle. And while mobile signal coverage was once distinctly iffy in Northern Scotland, that’s no longer true.

One of the day trips we made was through the rolling hills of the Howe o’ the Mearns to visit villages mentioned in Lewis Grassic Gibbon’s trio of between-the-wars novels, which I greatly admire. These hills were covered by a vast harvest of ripe golden wheat: the valley contains some of the most productive soils in the world, and around 85% of UK bread flour is from home-grown wheat. The hills also display quite a lot of wind turbines, which neither ruin the landscape nor incur the frothing NIMBY wrath they do in England. Out at sea there are vast wind farms, some right on the horizon, slowly replacing the fading oil industry.

For me the biggest change compared to previous visits was that while I took many photographs, it was with my phone rather than a camera (despite having often declared in this column that I’d never succumb). Last year my trusty old HTC phone died and I replaced it with a Moto G8 Power Lite, which has served very well and takes far better pictures. That summer I decided to experiment with using the Moto instead of carrying (or forgetting to carry) my Sony WX350 pocket camera, and I was pleasantly surprised by the sharpness and colour balance of its pictures, and by the convenience of always having it to hand. What I didn’t like was its user interface, which often had me shooting videos when I wanted stills, and occasionally produced blurry pictures that took ages to process – which, I discovered, was because I’d inadvertently pressed a tiny, indecipherable icon and turned on ‘bokeh’ mode.

I grumbled on Facebook about such options persisting after the camera was turned off, and wished the phone would reset itself to defaults between sessions: an FB friend, Tony Sleep, pointed me toward Open Camera, a free, ad-free app that’s way better than Motorola’s own camera app and works well with most Android phone cameras. With that obstacle overcome I’m happy enough with the Moto’s image quality for viewing online on Facebook or Flickr, though it wouldn’t do for large prints.


I still don’t like the narrow phone portrait format as used on Instagram and in FB ‘stories’, but I’ve become very fond of the wide landscape format. After six months of this testing, one day I took both my Sony WX350 and the phone, snapped identical pictures and compared them. I concluded that both lens and sensor in the phone appear superior; that neither is quite wide-angle enough; that the Sony camera has optical rather than digital zoom, but I rarely use telephoto nowadays. My overall impression is that pix from the phone look good over a fair range of zoom scales but then quite suddenly collapse at the deepest zoom into ugly sharpening artefacts, while the camera’s pix degrade more gracefully with scale. Probably the phone’s tiny lens demands a more aggressive sharpening algorithm.

This Scottish visit was the first time I’ve travelled anywhere with a phone as my only camera, and I have to say I’ve been delighted with the results, though it’s taken over a year to learn how to exploit its abilities to the full. So am I now tempted to spend £1000 to get a flagship Apple, Samsung or Google phone? Nae chance.

[You can see a selection of Dick Pountain’s Scottish photos at  https://www.facebook.com/media/set/?vanity=dick.pountain&set=a.10162151385019478]



Tuesday 31 January 2023

COMMONPLACE

Dick Pountain /Idealog 336/ 05 Jul 2022 12:30


A ‘commonplace book’, according to my dictionary, is “a book into which notable extracts from other works are copied for personal use.” My late friend and long-term associate Felix Dennis used to keep one, a largish leather-bound album that went with him on all his travels, filled with clippings, photos and hand-written notes of whatever happened to capture his attention (often poems). I myself was never inclined to note-taking in the era of paper – I’ve never kept a diary and only started to collect common (and not so common) places after the arrival of the personal computer. When I did start, it was back in the clunky, pre-Windows days, thanks to a nifty utility from Borland called ‘Sidekick’ which popped up a window over any other DOS application so one could dash off a quick note.

Once in Windows-land I sought out ever more capable free-form text databases – Idealist, AskSam, Threadz Organiser, Bonsai and half a dozen more – to soak up my inspirations about work-in-progress and enable me to find them again later. I stuck with each for a couple of years, but as Windows’ hierarchical file system gradually became more capable, especially for indexed search, I found myself relying ever less on databases and their pesky file formats. Then the rise of The Web changed everything, and not merely because I could store stuff in The Cloud but because more and more of my inspirations came from The Cloud.

At first it was search engines like Lycos and AltaVista, then the all-conquering Google, but retaining search results meant links and bookmarks, which had to be stored and organised themselves. Cue a fierce competition between Chrome, Firefox, Explorer and Opera for best hierarchical bookmark manager. There were apps that stored web pages locally, but only as hideous subdirectories holding scores of irritating HTML files. That all changed with Pocket, which stores links to wanted web pages in its own cloud in an almost transparent fashion by clicking a single icon. Pocket continues to be the core of my data hoard even after I abandoned Windows for a Chromebook. I still send articles containing matter I might need later there while I’m reading them, which ends up storing far more than I actually use (I do purge my Pocket list occasionally, and its own tagging and search facilities are good). But nowadays I also download really vital articles immediately into a local folder containing the project they’re aimed at (for example these columns). When I say ‘download’, what I actually mean is ‘print to PDF’ which captures an article with all its pictures and formatting intact. 

I don’t worry about the space textual ‘commonplaces’ occupy, now that 128 gig USB sticks are so cheap, but pictures are a more worrying matter. Photo organising for pictures that I take myself – whether local, or online like Flickr and Google Photos – isn’t the problem, it’s pictures I’ve just grabbed from the ‘net because they tickled me, and which I’m too idle to categorise, tag or even name properly so they can be easily found again. That said though, most of what I do nowadays (including this column) doesn’t involve any pictures. 

I review books for a political journal, the sort of books that sometimes require me to quote extensively from the text, and I discovered quite a while ago that asking for a Kindle or PDF edition of the review book is a great advantage, just for the searchability. In a huge tome like Thomas Piketty’s ‘Capital And Ideology’, being able to search and bookmark is a life-saver. At first Amazon was less than helpful in the matter of extracting book content and annotations: for example I found that notes made on my hardware Kindle or Android tablet could only be cut-and-pasted from within the PC version of the Kindle reader which I no longer use. 

Thankfully that’s all changed, and in a most helpful way. When I review books in more recent Kindle readers, in addition to searching and bookmarking pages I can highlight passages in four different colours, attach notes to them, copy them to the clipboard and even report typos and other errors back to the publisher. And this annotating activity gets automatically saved as a ‘Notebook’ which I can export either as plain text or in one of three approved academic footnote formats, and share wirelessly between my devices (Chromebook, phone or Galaxy Tab). Viewing a Notebook filtered by highlight colour can in effect turn it into a full-text database of colour-coded categories, just like those databases I played with in the bad old days. It can even export notes and annotations as a deck of flashcards, which I could one day use to perform acts of mass stupefaction on an unsuspecting audience… 


[Dick Pountain can recite the ‘fair use’ rules in his sleep (and sometimes does)]


AI, PROTEIN AND PASTA

Dick Pountain /Idealog 335/ 01 Jun 2022 01:55


I’ve devoted a lot of space in previous columns to questioning the inflated claims made by the most enthusiastic proponents of AI, the latest of which is that we’re on the brink of achieving ‘artificial general intelligence’ on a par with our own organic variety. I’m not against the attempt itself because I’m just as fascinated by technology as any AI researcher. Rather I’m against a grossly oversimplified view of the complexity of the human brain, and the belief that it’s comparable in any useful sense to a digital computer. And there are encouraging signs in recent years that some leading-edge AI people are coming to similar conclusions.  

There’s no denying the remarkable results that deep neural networks based on layers of silicon ‘neurons’ are now achieving when trained on vast data sets: the most impressive to me is DeepMind’s cracking of the protein folding problem. However a group at the Hebrew University of Jerusalem recently performed an experiment in which they trained such a deep net to emulate the activity of a single (simulated) biological neuron, and their astonishing conclusion is that a single neuron has the same computational complexity as a whole 5-to-8-layer network. Forget the idea that neurons are like bits, bytes or words: each one performs the work of a whole network. The complexity of the whole brain suddenly explodes exponentially: another research group has estimated that the information capacity of a single human brain could roughly hold all the data generated in the world over a year.

On the biological front, recent research hints at how this massive disparity arises. It’s been assumed so far that the structure of the neuron has been largely conserved by evolution for millennia, but this now appears not entirely true. Human neurons are found to have an order of magnitude fewer ion channels (the cell components that permit sodium and potassium ions to trigger nerve action) than those of other animals, including our closest primate relatives. This, along with extra myelin sheathing, enables our brain architectures to be far more energy efficient, employing longer-range connectivity that both localises and shares processing in ways that avoid excessive global levels of activation.

Another remarkable discovery is that the ARC gene (which codes for a protein called Activity-Regulated Cytoskeleton-associated protein), known to be present in all nerve synapses, plays a crucial role in learning and memory formation. So there’s another, previously unsuspected, chemically-mediated regulatory and communication network between neurons, in addition to the well-known hormonal one. It’s thought that ARC is especially active during infancy, when the plastic brain is wiring itself up.

In other experiments, scanning rat brains shows that activity occurs throughout most of the brain during the laying down of a single memory, so memory formation is not confined to any one area like the hippocampus. Other work, on learned fear responses, demonstrates that repeated fearful experiences don’t merely lay down bad memories but permanently up-regulate the activity of the whole amygdala, making the creature temperamentally more fearful. In short, imagining the brain (or even the individual neuron) as a simple computer is hopelessly inadequate: rather it’s an internet-of-internets of sensors, signal processors, calculators and motors, capable not only of programming itself, but also of designing and modifying its own architecture on the fly. And just to rub it in, much of its activity is more like analog than digital computing.

The fatal weakness of digital/silicon deep-learning networks is the gargantuan amount of arithmetic, and hence energy, consumed during training, which as I’ve mentioned in a recent column is leading some AI chip designers toward hybrid digital/analog architectures, in which the physical properties of a circuit, like current and resistance, perform the additions and multiplications on data in situ at great speed. The real energy hog in deep-learning networks, however, is the ‘back-propagation’ algorithm used to learn new examples, which repeats that calculatory load enormously.
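
To get a feel for why training dwarfs inference, here’s a minimal back-of-envelope sketch in Python. The layer sizes, data-set size and the assumption that a backward pass costs about as much as a forward one are all mine, purely for illustration:

    # Rough multiply-accumulate (MAC) count for a small dense network.
    # Layer sizes are illustrative, not any real model's.
    layers = [784, 512, 512, 10]

    # One forward pass: every weight does one multiply-accumulate.
    forward_macs = sum(a * b for a, b in zip(layers, layers[1:]))

    examples, epochs = 60_000, 30
    # Assume back-propagation roughly doubles the per-example cost
    # (one backward sweep per forward sweep), repeated over the whole
    # data set for every epoch.
    training_macs = forward_macs * 2 * examples * epochs
    inference_macs = forward_macs  # one pass answers one query

    print(f"MACs per inference: {inference_macs:,}")
    print(f"MACs for training:  {training_macs:,}")
    print(f"ratio: {training_macs // inference_macs:,}x")

The ratio comes out in the millions, which is why the exotic approaches below aim to shave energy off the training loop rather than off inference.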

A more radical line of research looks outside electronics altogether, toward other physical media whose properties in effect bypass the need for back-propagation. The best-known such medium is light: weights are optically encoded as different frequencies of light, and special crystals apply them to the video input stream. This could eventually lead to the smarter, faster vision systems required for self-driving cars and robots. Another, far more unexpected medium is sound: researchers at Cornell are using vibrating titanium plates which automatically integrate learning examples supplied as sound waves, by a process called ‘equilibrium propagation’: the complex vibration modes of the plate effectively compute the required transforms while avoiding the energy wastage of back-propagation. Of course the ultimate weird analog medium has to be spaghetti (which appeals to the Italian cook in me): see https://en.wikipedia.org/wiki/Spaghetti_sort
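
As for the spaghetti, a minimal Python parody (function name and test values are mine) makes the point nicely: the analog step, feeling for the tallest rod with your palm, is a single physical action, but simulated digitally it collapses back into plain selection sort:

    def spaghetti_sort(values):
        """Digital parody of the analog spaghetti sort: cut one rod
        per value, then repeatedly pluck out the tallest remaining.
        In the analog version max() is one physical hand movement."""
        rods = list(values)            # cut the rods to length
        ordered = []
        while rods:
            tallest = max(rods)        # lower palm onto the bundle
            rods.remove(tallest)       # pluck out the tallest rod
            ordered.append(tallest)
        return ordered[::-1]           # tallest came first, so reverse

    print(spaghetti_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]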


[Dick Pountain can confirm that the Spaghetti Sort works well with linguine and bucatini, but not with penne or fusilli] 

ANSI FANCY

Dick Pountain /Idealog 334/ 06 May 2022 10:01


During the darkest days of lockdown I kept myself amused mostly by practising my jazz guitar chords and by writing Python programs. I write a lot of small, off-the-cuff programs, for everything from updating phone codes and solving maths puzzles to playing with game-theory simulations. Python is just the latest of a long list of languages I’ve used – Basic, Forth, Pascal, Lisp and Ruby are just a few – but the one I remember with most affection is Turbo Pascal. My programs are so small and ad hoc that it’s never worth investing much time in writing graphical user interfaces, so Windows was a nuisance rather than a liberation. 

Although I played with Visual Basic and Delphi for several years, both eventually metastasized into such baggy monsters that I dumped them. I wrote myself a little toolkit in Turbo Pascal 5 that created simple windows, menus and pick-box widgets from ASCII characters: quite good enough. I shared my code in Byte, and am proud to say I saw it on screens in more than one science lab with similar minimalist requirements. Now I use QPython 3.6 on Android, which has no native ability for colour or cursor control, just scrolling teletype output. There are plenty of add-on graphics libraries like Kivy, all way more than I need, and there’s a simple interface to Android dialogs and media components called SL4A which I do occasionally use, but it’s still not what I had with Turbo. 

The impulse to recreate my Turbo widgets was finally triggered by, of all things, Wordle. I do enjoy playing Josh Wardle’s clever and elegant little puzzle (though I deplore the posting of screenshots on Facebook and the bragging about stats…). It’s just difficult enough to maintain interest, but still simple enough to pose real questions of strategy, which is exactly what I love in a game. I never actually cheat, but I do use tools that some purists might consider cheating, namely a mobile version of the Oxford Dictionary with wildcard search, and an Anagram Solver; once I have three or four letters these will finish the job in a couple of minutes. Getting those three or four letters soon enough is what my Python program does, searching for effective first guesses using the known frequency distribution of letters in English at each position in a five-letter word: it outputs delightful combinations like AUDIT SNORE CLAMP and CAMEO UNITS GRIND WHELK. I discovered that my old buddy David Tebbutt, one of the founding editors of Personal Computer World, is also fond of Wordle, and it was he who sent me the letter-frequencies list in a fine example of nerd-aid. (He also has access to Wordle’s own internal word list, but in a fit of hubristic rectitude I decided that was a step too far for me – I’ll make do with the Oxford.) 
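
For anyone who fancies rolling their own, the kernel of such a helper fits in a dozen lines of Python. This is a sketch rather than my actual program – the toy word list and the scoring rule are simplifications – but the principle is the same: build a table of letter frequencies per position, then score candidate guesses against it, counting each distinct letter only once:

    from collections import Counter

    # Toy word list; the real thing wants a full five-letter dictionary.
    words = ["audit", "snore", "clamp", "cameo", "units", "grind", "whelk"]

    # Frequency of each letter at each of the five positions.
    freq = [Counter(w[i] for w in words) for i in range(5)]

    def score(word):
        """Reward letters common at their position, ignoring repeats."""
        return sum(freq[i][c] for i, c in enumerate(word)
                   if c not in word[:i])

    for w in sorted(words, key=score, reverse=True)[:3]:
        print(w, score(w))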

My Python Wordle Helper’s scrolling teletype interface became a real bore, and while looking up how to write a clear-screen command using OS calls I discovered how to issue ANSI codes from Python, and hence how to build a 256-colour character-based terminal better than Turbo’s. QPython can in theory access the Linux curses library, but that refuses to work for me (and for quite a few other folk, according to the forums), and in any case curses is pretty horrible. So I set to, and have now written a widget set that does everything I want, enabling single lines of code to invoke a box, a window, a pick list, an input box, a progress bar, a table or a barchart. 
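
To give a flavour of how little magic is involved – this is a minimal sketch of the technique with invented function names, not my actual library – everything in ANSI-land is just an escape character followed by a bracketed command: clear the screen, position the cursor, pick one of 256 colours, draw a box:

    ESC = "\x1b["

    def clear():
        print(f"{ESC}2J{ESC}H", end="")        # clear screen, home cursor

    def at(row, col, text, fg=15, bg=4):
        """Print text at (row, col) in 256-colour fg on bg."""
        print(f"{ESC}{row};{col}H{ESC}38;5;{fg}m{ESC}48;5;{bg}m"
              f"{text}{ESC}0m", end="")

    def box(row, col, width, height):
        """Draw a bordered box from line-drawing characters."""
        at(row, col, "┌" + "─" * (width - 2) + "┐")
        for r in range(row + 1, row + height - 1):
            at(r, col, "│" + " " * (width - 2) + "│")
        at(row + height - 1, col, "└" + "─" * (width - 2) + "┘")

    clear()
    box(2, 4, 30, 5)
    at(4, 7, "Hello from ANSI-land")
    print(f"{ESC}10;1H")                       # park the cursor below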

I very much doubt that my new widgets will prove as popular with nerds in science labs as those Turbo originals did – those were pre-Macintosh days, when IBM-compatible PCs running DOS were still ubiquitous in technical and process-control contexts. I know there are still far too many public institutions like hospitals and libraries that retain such dinosaurs, but they’ll at least be running Windows XP. Of course anyone under 70 is using a smartphone instead of a computer anyway, and an ANSI terminal on a smartphone screen is about as welcome as a turd on a sushi. So I’ll be keeping my little ANSI world to myself, going back and rewiring some of my old programs to use the new interface. My 5-Card Draw and 5-Card Stud poker games actually look rather splendid, and it’s a pity I can’t show them to you here. But if for whatever bizarre reason you’re looking for a lightweight interface kit for some primitive device with a 160x36 character display, you know where to look: on my website at http://www.dickpountain.co.uk/home/computing/python-projects/ansi-library

[ Dick Pountain is so thankful that he programs as a hobby, not a job ]


HYBRID HYDROGEN

Dick Pountain /Idealog 333/ 08 Apr 2022 10:08


I write about energy fairly often in this column, mainly in the context of CPU efficiency or the foibles of mobile batteries, but now seems like the time for a wider look. Putin’s invasion of Ukraine has thrown the global fossil-fuel markets into chaos, while the UK government has just revealed its long-term (not entirely feeble) strategy for increasing nuclear and wind power over the next eight years. So this time I’m talking about the long run. 

I’ve been a convinced supporter of both nuclear power – fission and ultimately fusion – and the ‘hydrogen economy’ for around 50 years, since the days when neither was at all fashionable. I won’t rehash the problems both face, which are well known and much debated, but will instead present two indisputable facts. Firstly, the primary mode of energy delivery in advanced Western economies is turning toward electricity (and demand for it will soon exceed supply); secondly, hydrogen is an impractical fuel for road, and perhaps for air, transport. It’s terribly inflammable, hard to store, and has a knack of diffusing out of containers and pipelines.

You only have to remind yourself of the horror of power-cuts – phone and laptop batteries run out, router and internet connection down, and maybe your gas central-heating controller going nuts. Then consider that electric car sales are approaching take-off, thanks to the soaring price of petrol and diesel and concern over air pollution and climate change: you don’t need a degree in economics to figure out what happens to electricity demand if all car users go that route. One solution would be to distribute the generation of electricity more widely, using small-to-medium nuclear and on-shore wind power stations.

But I suggest that the solution we should be looking at instead is a hybrid one, in which we keep electricity generation centralised – perhaps coastal – using nuclear, solar and wind power to generate hydrogen by the electrolysis of water. Then, instead of trying to distribute that hydrogen as gas or liquid, both fraught with danger, we use it to make metal hydrides that can be distributed in standardised battery-like modules. The national chain of petrol stations replaces its pumps with hydride-pack exchange and recycling, which eventually also displaces the too-slowly-growing national grid of EV charging points. Such a distribution network of hydride packs would efficiently store and distribute energy produced by intermittent sources like solar, wind and tide, and would replace bottled gas in remote rural areas beyond the reach of town gas. The system would require bringing two existing research technologies to commercial fruition – metal hydrides and hydrogen fuel cells – and I think both can be achieved with sufficient investment.

A fuel cell turns a fuel like hydrogen into electricity directly, combining it with oxygen from the air across an electrolyte and electrodes rather than burning it for heat. Invented way back in 1838, it has been researched extensively – as so often in history the military have led the way, in search of mobile battlefield power sources, and NASA has used fuel cells in spacecraft since the 1960s. 

Metal-hydride storage has been around for less time and remains more experimental. Hydrogen gas is combined with a metal to form a liquid or a solid powder that requires neither high pressure nor cryogenic low temperatures to store, and the gas can be recovered by heating. Promising candidate metals include lithium, sodium, magnesium and aluminium, while boron combines with ammonia to form a compound that can be turned back into hydrogen via a catalyst (the French company McPhy Energy is working on the first commercial product, using magnesium). The metal-hydride fuel cell combines these twin technologies in the same container, which gets recharged with hydrogen gas and outputs electricity directly: these are more experimental still, and once again the chief developers are currently the military.
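
On paper the chemistry at both ends of such a system is disarmingly simple – it’s the engineering conditions (temperatures, pressures, catalysts, kinetics) that are hard. Taking magnesium as a representative example, the storage cycle and the overall fuel-cell reaction are just:

    $$\mathrm{Mg} + \mathrm{H_2} \;\rightleftharpoons\; \mathrm{MgH_2} \qquad\qquad 2\,\mathrm{H_2} + \mathrm{O_2} \;\rightarrow\; 2\,\mathrm{H_2O}$$

The hydride-forming reaction runs to the right under hydrogen pressure and gives its gas back on heating, while the fuel cell harvests the energy of the second reaction as electric current rather than as heat.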

It’s not yet clear what the best overall architecture for such a hybrid hydrogen economy would be: a fuel cell built into the electric car, fed by exchangeable hydride packs; an electric car with exchangeable metal-hydride-fuel-cell packs; an existing EV with a fuel cell that recharges its conventional battery from hydride packs; or even cars with hydrogen internal-combustion engines and hydride packs. All sorts of questions over energy densities, performance and safety remain to be answered. 

But the history of the automobile tells us that eventually one architecture would triumph – witness the world-wide network of petrol stations. It’s also likely that an intermediate network, moving liquid or gaseous hydrogen to regional hydride-recycling centres, would need to evolve. What’s certain is that governments ought to be taking advice from people fluent in these technologies, then stumping up the research funds that might just save both our current way of life and, eventually, the planet itself.  

[Dick Pountain has a picture of the Hindenburg disaster on his spare bedroom wall] 


PC ARCHAEOLOGY

Dick Pountain /Idealog 332/ 03 Mar 2022 09:56

Regular readers of this column will know that I abandoned Windows for a Chromebook back in 2017, and have been extremely happy with my choice ever since (doctors now believe that every Windows upgrade you skip adds five years to your life). However, as a columnist for a magazine with “PC” in its title I can hardly turn my back on Microsoft completely, so I kept my last Windows machine, a Lenovo Yoga touch-screen laptop running Windows 8.1. This still runs my Canon printer and acts as a backup server via a large external Samsung USB drive, accessible from the Chromebook over wi-fi SMB. I once needed it to run a few programs, like Photoshop Elements, but I’ve steadily displaced all of those with Chromebook equivalents. 

But the Lenovo started to show its age: disk accesses got ever slower in that sinister way that suggests excessive error-correction, plus there were sporadic tussles with an infuriating Windows 8 rogue task called the Runtime Broker that consumes 100% of the CPU. Eventually it became too much of a chore to boot it up, but I was extremely disinclined to waste good money on a new machine running Windows 11 for the minimal purposes I required. Then I remembered that I still had my last-but-one laptop, a Sony Vaio, sleeping peacefully in a closet. It’s the computer I used for half those 13 years I spent in Italy, during which it served without a hitch. It’s not a machine I would ever have bought for myself, but it was a Christmas present from our late chairman Felix Dennis: an ultra-slim Vaio TZ21 with a carbon-fibre case and a very vivid 11" diagonal, 1,366x768 screen. I shrink from imagining what it must have cost him. It’s far, far too good to throw away – I know, I’ll resurrect it! 

After eight dormant years in that cupboard, on plugging in and charging the Vaio booted up straight away, and then the fun began. For starters it’s 32-, not 64-bit, still runs Windows 7 Professional, and its wi-fi adapter is 14 years behind the times. How to make it rejoin the modern world? I became suddenly aware of just how far into The Cloud I now live, because when I went into Windows Network Manager, though it could see my BT hub it wouldn’t connect. In Italy our remote valley lay beyond Telecom Italia’s ADSL network, so I used the Vaio via a mobile-phone connection from a newly built mast on the nearby mountain top – and of course that operator’s proprietary software wouldn’t work with a UK SIM even had I wanted it to. So how to get online?

A quick check in System Config showed there were indeed two network adapters in the Vaio, an Intel Wireless 3945ABG and a Marvell Yukon 88E8055 PCI-E Gigabit Ethernet, but I no longer have an Ethernet cable and the Intel clearly didn’t like my BT Hub 6. I suppose a grown-up would have set about diagnosing the wi-fi problem, but I passionately loathe fiddling with networks or comms, so I took the coward’s way out and fled to Amazon on the Chromebook. For an eye-watering £5.19 I ordered a TP-Link 150Mbps Wireless Nano USB Wi-Fi Dongle, and when the thumbnail-sized widget arrived I popped it into a spare USB port. Like some parasitic insect it injected its drivers, flashed its little green light and connected immediately. Ookla showed a 30Mbps download speed – less than half what my Chromebook gets, but twice what that costive Lenovo was getting. 

Now for a browser. I’d been using Opera in Italy, but even after updating it was deeply unhappy about certificates and stuff. Given that my life belongs to Google I had to get Chrome going. It took forever to download, let alone install, and it too was deeply unhappy, bitching about all kinds of security problems and repeatedly making me identify myself. I got it sort of going, but it was tragically slow and comically cranky, refusing to connect to The Guardian as a security risk (perhaps it’s joined the EDL?). So I did what you have to do and reinstalled Firefox. 

After a couple of updates Firefox runs like the clappers, opens everything without demur, and is not too proud to suck in all my Chrome settings and bookmarks: it works flawlessly with Gmail, Google Calendar and Contacts, and so has effectively turned the Vaio into a slower Chromebook, utility-wise (though of course it can’t run Android apps). That’s all I need it to be, and for £5.19. And rather to my surprise, I realised that Windows 7 was the last version I actually liked: that Windows button and left-side menu…

[Dick Pountain is rather enjoying retro computing]


IT NUMEROLOGY

Dick Pountain /Idealog 331/ 06 Feb 2022 03:44


It’s become a centrepiece of IT-biz wisdom that one should always, if possible, wait for version 2.0 of any new technology. Some of us who write about IT of course cannot wait and must suffer at the bleeding edge, though once you get beyond version 2 all bets are off (as an ex-Windows user I refuse to participate in 10-versus-11 chatter). However, in recent weeks two other small integers have been brought to my attention, namely Web3 and 6G communications, and these seem to portend a forking of the IT evolutionary tree in two very different directions.

Web3 came properly to my attention thanks to three articles in the February issue of The Atlantic magazine, about the growing backlash against cryptocurrencies and the financialisation of digital content via NFTs. I’m slightly embarrassed that I haven’t covered crypto in this column before, but it’s no accident: ever since Bitcoin was first launched it has struck me as a dangerous, possibly fraudulent social experiment, based on ultimately unsupportable technology, but I’ve shied away from the torrent of abuse that this opinion is likely to provoke. I’m not an anarchist, but I do basically agree with the late economic anthropologist David Graeber about the origins and function of money. It’s fundamentally a way of fixing and storing the intangible emotion of trust, and the idea that blockchain technology somehow makes money ‘more democratic’ is nonsense. All it does is make it more unstable – the last thing we need in our post-viral anxiety state.

The ‘3’ in the name Web3 – coined in 2014 by Gavin Wood, co-founder of the Bitcoin competitor Ethereum – implies that Web2, dominated by Google, Facebook and Amazon, will soon be over, replaced by people transacting directly with each other using blockchains. Let’s briefly recap the full genealogy. Web1, the original Berners-Lee/Andreessen invention, was a non-commercial, distributed publishing system that let scientists and nerdy hobbyists communicate with one another for free, and it spawned many services we now take for granted, like webmail, blogs, memes and videos. By the mid-1990s businesses had discovered its power and moved themselves online, while service providers like Google, Amazon and Facebook realised they could stay free to end-users by selling those users’ data to advertisers. That was Web2, which reaped fortunes the like of which the world had never seen before.

Web3 and NFTs represent the next step: total financialisation of digital assets. Any old data, like monkeys-in-daft-hats, become speculative instruments. This potentially affects everything, because nowadays computers are in everything, so almost anything can become a digital asset – even your doorbell videos. Those Atlantic articles describe the furious backlash building in the USA, which calls Web3 and crypto scams, Ponzi schemes and parasitism, and compares them to the 2007 sub-prime mortgage disaster.

It seems unlikely the monster corporations will be replaced soon, especially Google, which funds much deep, world-leading research. Which leads me neatly on to the number ‘6’. The same week, I received an email from the University of Oulu in Finland with links to seven videos that demonstrate their cutting-edge research into 6G communications. I clicked, I saw, I boggled (see for yourself at https://www.oulu.fi/6gflagship/6g_demo_series).

These 6G innovations depend on very short-wave, terahertz radio signals, and on ways the Oulu teams have found to focus and steer them. The 6G radio demo shows how such signals can communicate and remote-sense – detecting the shape and surface textures of nearby objects – simultaneously from the same device, such as a smartphone. The 6G optics demo shows communication via THz modulation of the lighting in a room, thwarting any radio-frequency snoopers outside it. The 6G edge-computing demo shows a scanner generating detailed 3D representations of objects, including you, almost instantly, using an array of Raspberry Pis as ‘edge processors’. Most fun of all is the 6G vertical demo, about lighter-than-air drones that can patrol a building in sci-fi fashion to check for defects or take drinks orders. My conclusion after watching them all was that Zuckerberg backed the wrong horse with Meta: virtual reality is actually rather naff Web1.

The world’s politicians are, as always, several decades behind the implications of all these innovations, but very soon some crunch-time decisions will need to be made about their deployment and regulation. Most state banks are looking at introducing digital currencies (not blockchained crypto) which could hugely simplify their tax, spend and welfare systems. But these could also be abused, as they already are being in India and China, to penalise and control political dissent. We badly need to start serious public debates about where all this stuff is taking us – whether the 6G/Web3 future will be a eu- or a dys-topia.


GRANDPA’S AXE

Dick Pountain /Idealog 330/ 07 Jan 2022 02:10

Grandpa’s Axe is a figment of American folk mythology: “had it nigh on 60 years, never give me no trouble exceptin’ fer two new heads and three new handles”. Well, my hi-fi system is very much like that. I’m very far from being a hi-fi buff, but I do listen to a lot of music and like reasonably good sound. I bought an ex-review Sansui system back in the days when Dennis published Hi-Fi Choice, but the only component remaining from it is a pair of wood-cased Castle speakers that I love. Nowadays all my audio feeds – Smart TV, Spotify and YouTube on Chromebook, Sony CD player and Dunlop vinyl turntable – play through them. However, just before Christmas my latest amplifier, a Denon which also acted as the hub, expired with a horrid death rattle. Not being up to diagnosing and repairing it, I went online to look for a replacement.

It appears that ‘separates’ hi-fi amps are a dying breed, and accordingly subject to ‘Veblen pricing’ as rich folk’s toys, but I became intrigued by the new breed of tiny power amps meant for bookshelf systems, and found myself buying a Fosi BT20A. When I first unpacked it I thought it was a joke – not much bigger than a packet of cigarettes – but then I plugged it in and reeled in amazement as it drove my massive speakers just as loud, and with a superior, more open sound quality than the Denon…

Now, since this isn’t an audiophile magazine (we don’t even have audio devices in the A-List), how can I steer this miracle around toward my digital brief? Well, I assumed the Fosi must be digital inside, but I was wrong – it’s even more interesting than that. It turns out to be a ‘Class-D’ device, a technology that exists in a grey area between analog and digital, one I’ve covered here before, for example in the silicon retina chips designed by Carver Mead or the AI chips that use analog adders to perform convolutions. Hybrid analog-digital circuits can often be faster and less power-hungry because they don’t need to analog-to-digital convert their inputs and then digital-to-analog convert the outputs. The price is loss of the precision that digital brings, but for some non-numerical applications that may be worth paying.

Class-D amplifiers are a nice case in point. They work by chopping an analog input signal into a very high-frequency stream of square-wave pulses whose widths represent the analog values at each tiny interval. This stream switches twin output transistors at a similarly high frequency, and filtering out the frequencies beyond the audible then directly produces an amplified analog output signal. The process can be 70-90% efficient: because the two output transistors are never both on at the same time, little current flows and little power is lost as heat. The Fosi uses a phone-style 24-volt power supply and barely gets warm even when playing loud. It’s still a wholly analog technology, as no bits are involved, but like a digital technology it works on a stream of discrete pulses, and can therefore be supported by cheap, miniature digital chips.
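
If you fancy seeing the trick in miniature, here’s a crude Python simulation of the principle – all the numbers are chosen for readability, not audio fidelity. A sine wave compared against a fast triangle wave yields the on/off pulse stream, and a rough moving average stands in for the output filter:

    import math

    SAMPLES = 2000
    CARRIER = 50    # triangle-wave cycles per signal cycle (illustrative)

    # 1. The analog input: one cycle of a sine wave.
    signal = [math.sin(2 * math.pi * i / SAMPLES) for i in range(SAMPLES)]

    # 2. PWM: compare the signal against a fast triangle wave. The output
    #    is only ever fully on (+1) or fully off (-1), which is why the
    #    output transistors dissipate so little power.
    def triangle(i):
        phase = (i * CARRIER / SAMPLES) % 1.0
        return 4 * abs(phase - 0.5) - 1        # ramps between -1 and +1

    pulses = [1 if s > triangle(i) else -1 for i, s in enumerate(signal)]

    # 3. Low-pass filtering (here a crude moving average) removes the
    #    switching frequency and recovers the amplified audio.
    W = SAMPLES // CARRIER
    recovered = [sum(pulses[max(0, i - W):i + 1]) / min(i + 1, W + 1)
                 for i in range(SAMPLES)]

    for i in range(0, SAMPLES, 250):
        print(f"in={signal[i]:+.2f}  pulse={pulses[i]:+d}  "
              f"out={recovered[i]:+.2f}")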

The efficiency and power saving such hybrid processing confers is becoming ever more desirable to designers of the neural networks on which deep-learning AI depends. Implementing a simulated neural network in digital technology is enormously power-hungry: training a network involves storing billions or trillions of weights in memory cells, then performing multiply-accumulate operations on them and on new inputs when the network is eventually deployed to analyse new data. Back in issue 301 I described how the US firm Mythic’s IPU chip contains an analog computing array of memory cells that are actually tunable resistors: computation happens in place as each input voltage gets turned into an output current according to Ohm’s Law, with resistance representing the stored weight value. With data and computation in the same place, less energy is wasted and less silicon real estate is needed for A-to-D conversion. 
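
A few lines of Python sketch the idea (the voltages and conductances are invented for illustration): store each weight as a conductance, present each input as a voltage on a row wire, and the currents summing on each column wire are the multiply-accumulate results – Ohm’s Law does the multiplications, Kirchhoff’s current law the additions:

    # Analog in-memory multiply-accumulate, simulated digitally.
    # Each weight is a conductance G = 1/R; each input a voltage V;
    # Ohm's Law performs every multiply (I = V * G), and currents
    # merging on a column wire simply add.
    voltages = [0.3, 0.8, 0.1]          # inputs (volts), invented
    conductances = [                    # weights (siemens), invented
        [0.10, 0.50],
        [0.20, 0.05],
        [0.40, 0.30],
    ]

    column_currents = [
        sum(v * g_row[col] for v, g_row in zip(voltages, conductances))
        for col in range(len(conductances[0]))
    ]
    print(column_currents)   # one MAC result per column, read in amps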

Carver Mead’s silicon retina chip similarly employed an array of photosensitive cells connected by a network of resistors, with Kirchhoff’s Laws achieving the necessary ‘computation’ as currents flow between them, producing behaviour similar to that of the human eye.

Fitted with its second new head and third new handle, my audio environment is now immensely satisfying, even if it is so hybrid that it would cause a real audiophile to grind their teeth. The little black box omnivorously devours analog feeds from TV, CD player and turntable (through an equally tiny analog mixer) just as effectively as Bluetooth from my Chromebook. And it passes my own very analog test regime: Wayne Shorter’s ‘Footprints Live’ album makes my remaining hair stand on end via either route.

[Dick Pountain thought DAC was a brand of trousers until he discovered Smirnoff]

SOCIAL UNEASE

Dick Pountain /Idealog 350/ 07 Sep 2023 10:58

Ten years ago this column might have listed a handful of online apps that assist my everyday...