Friday, 18 April 2025

ARTY FACTS

Dick Pountain /Idealog 363/ 05 Oct 2024 03:05

When I’m not writing this column, which, let’s face it, is most of the time, I perform a variety of activities to keep me amused. Apart from walking, cooking, reading, listening to music (live and recorded), and playing and fettling guitars, I have a couple of computer-based ‘hobbies’, namely making computer-generated music and computer-generated pictures (non-moving). I recently restarted work on my Python-based computer composition system Algorhythmics – which I described here back in issue 306 – after a four-year rest. What prompted me was watching a YouTube video about the Indian mathematician D. R. Kaprekar and some interesting numbers he discovered, so I set up Algorhythmics to compose a short piano sonata around two of his numbers, and it sounds like a fairly pleasing mashup of Ravel, Janáček and Satie.
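
For the curious, Kaprekar’s best-known discovery is easy to replicate: take any four-digit number whose digits aren’t all identical, sort its digits into descending and then ascending order, subtract the smaller from the larger, and repeat; within seven steps you always land on 6174. Here’s a minimal Python sketch of that routine (an illustration only, not a piece of Algorhythmics):

    def kaprekar_step(n: int) -> int:
        """One step of Kaprekar's routine on a four-digit number."""
        digits = f"{n:04d}"                        # keep any leading zeros
        hi = int("".join(sorted(digits, reverse=True)))
        lo = int("".join(sorted(digits)))
        return hi - lo

    def kaprekar_sequence(n: int) -> list[int]:
        """Iterate until we reach Kaprekar's constant, 6174.
        Assumes n has four digits that are not all identical."""
        seq = [n]
        while seq[-1] != 6174:
            seq.append(kaprekar_step(seq[-1]))
        return seq

    print(kaprekar_sequence(3524))   # [3524, 3087, 8352, 6174]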

I’ve been documenting my efforts at visual art in this column for over 20 years as I marched through successive generations of paint software from Paintshop Pro to Photoshop Elements to Sumo Paint to Artflow. I loved art at school and can use both paint and pencil reasonably well, but I’ve never been tempted to use either seriously since I discovered the computer (any more than I’m tempted to write articles with a quill pen since I discovered the word-processor). The crux is editability: once you discover the infinite flexibility of digital imagery it’s hard to give up this ability to experiment, redo and correct without wasting paper, canvas and paint. Images on a screen certainly lack the texture of proper paintings, but then I’m strictly an amateur with no realistic ambition to make a living selling my work.

I’ve also been into photography ever since the 1960s and my computer explorations began by processing snaps to make them look like paintings, which taught me how to use layers and blend-modes to take total control of both the colour and tonal content. Later I began using this knowledge to create purely abstract images.
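
For anyone who hasn’t peered under the hood, a blend mode is nothing more mysterious than per-pixel arithmetic between two layers. A rough Python/NumPy sketch of two of the commonest modes (my own illustration, not any particular paint program’s code):

    import numpy as np

    # Images are arrays scaled to the range 0.0 (black) to 1.0 (white).

    def multiply(base: np.ndarray, top: np.ndarray) -> np.ndarray:
        """Darkens: white areas in the top layer leave the base unchanged."""
        return base * top

    def screen(base: np.ndarray, top: np.ndarray) -> np.ndarray:
        """Lightens: black areas in the top layer leave the base unchanged."""
        return 1.0 - (1.0 - base) * (1.0 - top)

    base = np.random.rand(480, 640, 3)    # stand-ins for two real layers
    top = np.random.rand(480, 640, 3)
    result = screen(multiply(base, top), top)

Real paint programs add per-layer opacity, masks and dozens more modes on top of this arithmetic.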

It won’t have escaped anyone in the habit of watching Instagram, Facebook or YouTube reels that there’s a remarkable revival of abstract painting going on right now among the social-media savvy younger generations. Unlike me they’re not working in the digital domain but rather in the messy and expensive domain of wet paint. Under various labels like fluidart, spinart and poured art they’re making dynamic action paintings – Jackson Pollock style – by pouring multicoloured acrylic paints onto a canvas, manipulating it using palette knives and then spinning and tipping it to produce a finished image. Often they incorporate silicone-based additives that introduce cell-like patterns of bubbles, and the results have a very particular biological look. Some of them are very attractive indeed and nearly all are highly decorative. I doubt that many of these folk consider their products to be high art but they are highly saleable, and that’s on top of any revenues that derive from successful social media hits, which is just as well as the costs in canvas and acrylic paint must be considerable.       

I'm very keen myself on early 20th-century modernist abstract art, especially Vassily Kandinsky, Paul Klee and Sonia Delaunay. I don’t set out to imitate their works but merely play around using digital processing on a starting image, which could be a photograph, a clip from a website or a digital image that I draw by hand. Mostly I just use a mouse nowadays (I’ve had several Wacom tablets in the past) since my starters are so simple. Another way I sometimes start is by using Zen Brush 3 on my Samsung Galaxy Tab, a delightful finger-painting program with extraordinarily realistic watercolour bleeding effects, and then send the result to my Chromebook via Android’s Nearby Share. Then I spend some time layering, blending, smudging and slicing until I see something I like, which does indeed tend to mean something that reminds me of one of my modernist mentors. 

I’m not at all tempted to go in for fluidart myself to make money, even though those attractive canvases are more readily marketable than digital prints. That’s not only because I don’t have a garage in which I can splatter paint up the walls, but also because, whenever I watch these artists at work on Facebook, more often than not I find the earliest part of a new work the most pleasing, but then they keep adding colours and overdo it. So, while watching one particularly spectacular piece I hit ‘Watch Again’, then hit pause (||) at an early stage I liked better, took a screenshot and used that as a starter for my own piece! This potentially raises a novel legal issue about copyright and plagiarism: I froze a moment in time that didn’t make it into the final painting, so was I really stealing…?

[You can see six of Dick Pountain’s abstracts at https://www.facebook.com/dick.pountain/posts/pfbid02UBtGRbAU7aTLSPYjeyTebxJUjMJ6EME6cKd5iqYBsYcdbaPCPrUNxZNqJhE48rSKl and hear his Kaprekar tune at https://soundcloud.com/dick-pountain/kaprekar-sonatina]


  

SMELL U LATER

Dick Pountain /Idealog 362/ 05 Sep 2024 11:5

I have no qualms about claiming that I have a better (or at least better-trained) sense of smell than the average citizen. That’s partly, maybe mostly, because I studied organic chemistry in the 1960s. During my first few weeks of working in the cavernous Victorian college lab I was instructed to learn the odours of a dozen commonly encountered chemicals, and advised to employ smell as the first step in recognising any new compound. I can often tell an aldehyde from a ketone by sniffing, and became briefly addicted to ionone, cinnamaldehyde and menthol in succession, carrying little specimen tubes in my pocket. I’m sure this method is no longer taught, on health-and-safety grounds, as there are many substances nowadays that can kill at one sniff.

In later life this training came in handy when my brother-in-law Pip founded The Scotch Malt Whisky Society and was writing a book that needed to categorise the nose of various famous spirits. Of course odour is now a huge business, not merely for perfumes, as it has been for centuries, but for the hundreds of flavourings that go into most supermarket foods, manufactured in huge chemical works in New Jersey. But smell has barely impinged upon the computer business so far, apart from the smell of burning insulation, which most of us quickly learn to recognise (and investigate…).

I wrote semi-humorously in an earlier column about the possibility of a ‘sminter’ loaded with an assortment of smelly ‘inks’ that could be triggered via internet messaging, and I even got a letter some years later about an (unsuccessful) attempt at one. Even Hollywood took a brief stab at a Smell-O-Vision movie (‘Scent of Mystery’ by Mike Todd Jr., 1960, in case you’re interested) but the obstacles in both cases were the same: smell is a chemical, not an electronic, signal that moves at the speed of breeze rather than light – and you can’t just switch it off quickly either…

But a far more serious obstacle is that while the components of human light perception are threefold – red, green and blue retinal cells – the components of smell perception are vastly more numerous. Our noses contain at least 400 different chemical receptors, and individual smells are recognised by trillions of combinations of their outputs, which release a plethora of proteins that are still not entirely understood.
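
To get a feel for why ‘trillions’ isn’t hyperbole, here’s a back-of-envelope check in Python (my arithmetic, not a model of the olfactory system): even if a smell activated only a handful of those ~400 receptor types at once, and we ignore response strength entirely, the number of possible combinations is already astronomical.

    from math import comb

    receptors = 400
    for k in (3, 5, 10):          # number of receptor types active at once
        print(k, comb(receptors, k))
    # 3  ->            10,586,800
    # 5  ->        83,218,600,080   (~8 x 10^10)
    # 10 -> roughly 2.6 x 10^19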

But when you hear the word trillions nowadays, you’re usually talking about a GPT (or at least about NVidia’s market cap). Understanding smell perception, like protein folding and DNA sequencing, is a perfect candidate for AI to analyse, so it comes as no surprise to learn (via an article in Nature, https://www.nature.com/articles/d41586-024-02833-4) that many teams are working toward this end, with ample financing from industry.

The problem has several aspects, which include: predicting the smell of a molecule from its structure; decoding the output of human odour receptors for particular compounds; and automating the comparison of smells of different mixtures by identifying their components. The current hot variant of AI – the Generative Pre-trained Transformer (GPT) – works using the mathematics of parameter spaces: identify the important parameters of the subject to be analysed, treat each one as an axis of a vast multidimensional space, and then map training examples into that space as points. For graphical AIs like Stable Diffusion and Midjourney such spaces already have billions of parameters for identifying shapes in visual worlds.
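
To make the parameter-space idea concrete, here’s a toy Python sketch (entirely my own illustration, not how any of these systems is actually built): each molecule becomes a point whose coordinates are a few invented descriptors, and an unknown molecule is classified by finding the nearest known point in that space.

    import numpy as np

    # Invented descriptor vectors for three molecules with known smells
    known = {
        "vanillin":       np.array([0.8, 0.1, 0.3]),
        "limonene":       np.array([0.2, 0.9, 0.1]),
        "cinnamaldehyde": np.array([0.7, 0.2, 0.6]),
    }
    labels = {
        "vanillin": "sweet/vanilla",
        "limonene": "citrus",
        "cinnamaldehyde": "spicy/cinnamon",
    }

    def nearest_smell(query: np.ndarray) -> str:
        """Label an unknown molecule by its nearest neighbour in descriptor space."""
        best = min(known, key=lambda name: np.linalg.norm(known[name] - query))
        return labels[best]

    print(nearest_smell(np.array([0.75, 0.15, 0.5])))   # -> spicy/cinnamon

A real system would use thousands of learned dimensions rather than three hand-picked ones, but the geometry – nearby points smell alike – is the same.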

One immediate problem for applying this to smell is getting training data: odour receptors, whether human or animal, are hard to study, since they often won’t work outside the living creature, they’re fragile, and the amount of protein they release is minuscule. Two receptors from insects and two more from mice have been deciphered in the last year, leaving just 400+ more to go. A team at Duke University in North Carolina is using AlphaFold and machine learning to screen millions of chemicals for binding to two synthetic receptors they’ve engineered. A very important motivation for such work is to use smell recognition in diagnostic medicine, by identifying odour molecules produced by disease processes (dogs are doing this already). Precisely how and where odour nerve signals are processed in the brain is perhaps the leading-edge question right now.
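
Virtual screening of that kind generally boils down to ‘train a model on molecules whose binding has been measured, then rank everything else by predicted score’. A rough sketch in Python with scikit-learn, using invented data and making no claim to reproduce the Duke team’s actual pipeline:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Pretend descriptor vectors (size, polarity, ring count...) for molecules
    # whose binding to a receptor has been measured in the lab.
    X_known = rng.random((500, 8))
    y_known = (X_known[:, 0] + X_known[:, 3] > 1.0).astype(int)   # fake 'binds' labels

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_known, y_known)

    # Score a large library of untested candidates and keep the most promising
    X_library = rng.random((100_000, 8))
    scores = model.predict_proba(X_library)[:, 1]
    top_hits = np.argsort(scores)[::-1][:10]    # indices of the ten best candidates
    print(top_hits, scores[top_hits])

The interesting work, of course, lies in the descriptors and the measured labels; the ranking step itself is routine.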

Real progress is being made and AI may soon speed it enormously, but smell remains the least understood of our senses, and least amenable to digital manipulation. It’s so subjective that human tasters and perfumiers will retain an advantage over automated solutions for far longer than most professions, and I don’t expect to have to consult my laptop when I’m mixing our own custom bath oil from my little box of tubes of neroli, rose, ylang-ylang and sandalwood oils (plus several other secret ingredients).  

[Dick Pountain believes that a rose by any other name would smell mostly of β-phenylethanol] 

ARTYFICIAL INTELLIGENCE

Dick Pountain /Idealog 361/ 08 Aug 2024 01:04

Who’d have thought that AI could become so boring so quickly? I feel a rather urgent obligation to devote a column to what’s been happening, but it appears to be happening faster than I can type. Last month NVidia’s stock rose, and then fell, faster than a SpaceX test shot. Then Amazon reported that its Kindle self-publishing platform has been flooded by such a torrent of AI-generated bodice-rippers that it’s having to impose a limit of only three books a day on its customers.

Because Amazon stops short of banning AI-generated content altogether, ChatGPT is rapidly becoming a cancer on the body of the publishing industry in more ways than just crap e-novels. When I’m reviewing a book I often look online at other people’s reviews, not to plagiarise them but to see other opinions. Recently though I’ve witnessed spammers using ChatGPT to spatter the net with worthless AI-generated paraphrases and summaries of best-selling real books, which make it almost impossible to find any proper critiques. And it’s not only words. The hand-crafted goods site Etsy recently told The Atlantic Magazine that it’s being swamped by AI-generated tee-shirts, mugs and other merchandise which employ ChatGPT to optimise their Google search rankings and crowd out real producers.    

In a previous column I worried about the abuse of AI deep-faked photographs to compromise political opponents, but that’s turned out to have been somewhat wide of the mark, because it requires a certain political seriousness on the part of the perpetrators. What’s happened instead is that Midjourney and its ilk are now enablers of pure fantasy and pop-surrealism. They allow anyone and everyone to produce memes and professional-looking posters that make merely spray-painting slogans on a wall feel like something from a previous century.

When I first tried out Stable Diffusion a year or so ago I was amused by the way its limitations generated such hilariously surreal images, but I’m not laughing now. The recent UK wave of far-right activity against immigrants and asylum seekers has been organised online via Telegram, The-Platform-Which-Used-To-Be-Called-Twitter, TikTok and other social media, and an important part of this rallying process is a new genre of surreal nationalistic propaganda memes. Popular content components of such productions are squadrons of Spitfires, St George’s Cross flags, knight-crusader figures in mediaeval armour and British lions (often wearing tee-shirts but no trousers and playing cricket), all meant as symbols of that old Britain they believe has been stolen from us. GenAI tools enable them to churn out infinite combinations of these icons in glorious Marvel-comic colour and for minimal effort. It’s worse still in the USA, where these same tools are being used to depict Donald Trump with a six-pack and Hulk-like musculature, occasionally with the golden wings of an archangel and a blazing sword. Visual satire has a long history, from Cruikshank, Rowlandson and Gillray to George Grosz, Otto Dix and Ralph Steadman, but it always required a graphical skill that has now been entirely eliminated, and the purpose is no longer satire but adulation.

It feels as though we’re currently in the ‘phony war’ phase of a marathon battle between states and AI companies over regulation of the internet. The AI side continues to bluster about transforming the world’s economy with soon-to-be-invented AGI, while also admitting that it will need to use about half of the world’s electricity supply to do so (and investing in fusion research). States were until very recently clueless about the threat posed by widely available generative AI tools – though the wave of violence in the UK seems to have awakened them smartly – and they’re also quite chary about imposing content regulation, owing to quite legitimate concerns about freedom of speech. The owners of online content – publishers, television and film companies – form a third force standing on the sidelines watching the impending battle. They’re furious that the AI companies have already scraped a sizable chunk of their properties without payment, but also acutely aware that there might be a profit opportunity here somewhere (who wouldn’t like robot authors and actors that you don’t need to pay?).

As for us poor authors, artists, actors and other creators, we can only watch aghast, while some members of the general public perhaps see a chance to gain quick entrance to the so-called creative world without the bother of arduously learning a skill (hence all those Amazon three-a-day novels). How this will all pan out is beyond anyone’s (even GPT-4.5’s) ability to predict. Too many variables, like who becomes US president in November, and too many heroes/villains like Trump, Musk and Altman with hidden and volatile agendas. My guess is that the stock market might just call a halt to hostilities, and quite soon…

[Dick Pountain only uses ChatGPT as a party-trick to horrify arty friends]

