Tuesday 11 June 2019

STRINGING ALONG

Dick Pountain/ Idealog294/ 6th January 2019 16:47:19

I do love strings. I don’t mean balls of string, or G-strings, or particle physics String Theory, or puppet strings. I mean that plain, simple old data structure, a load of old ASCII arranged in a row, the second type we all learn after numbers. “Hello World!” is a string; strings are the way that computers talk to us. OK, nowadays they all contain a chip that can turn strings into sounds, but that’s just to humour us - inside they talk strings; all the words I’m typing here are strings.

The first computer language I learned was Basic, on a Commodore 4K PET back in 1979, before the advent of bitmapped screens. Being a wordy sort of person with no graphics available, I found it was the string functions that grabbed my imagination. One of the first programs I wrote was a nonsense poetry generator, which created seriously mad lines like:

Should a truffle smoothly stink?
Can its pickled stomach think?
The pepper capers over your floor.
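Something along these lines would do it today - a minimal Python sketch, not the original PET Basic, with word lists and a line template I’ve invented for illustration:

    import random

    # Invented word lists - the original program's vocabulary is long gone.
    NOUNS   = ["truffle", "stomach", "pepper", "floor", "caper"]
    VERBS   = ["stink", "think", "caper", "mangle"]
    ADVERBS = ["smoothly", "seriously", "faintly"]

    def nonsense_line():
        """Glue randomly chosen words into one mock-profound question."""
        return (f"Should a {random.choice(NOUNS)} "
                f"{random.choice(ADVERBS)} {random.choice(VERBS)}?")

    for _ in range(3):
        print(nonsense_line())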

Yes, well. Soon I got a better, CP/M-based computer and learned Forth, Lisp and Pascal, but none of these were really much better for mangling strings: they all had short limits on length, and similar numeric, array-oriented functions to find stuff. Then I discovered SNOBOL. Never widely popular and now almost forgotten, this language was entirely dedicated to string processing (the name stands for StriNg Oriented and symBOlic Language). It uses patterns that look very like Backus-Naur expressions, rather than numeric indices, for complex substring searches – it’s perhaps still unsurpassed for this purpose, but the rise of Unix and Perl made regular expressions the more popular solution.
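For the record, here’s roughly what that regex route looks like in Python - my own illustration of pattern-based capture, not SNOBOL syntax:

    import re

    # Named groups do the structured substring capture that SNOBOL patterns
    # were built for: pull the parts of a date out of running text.
    date = re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})")
    m = date.search("Idealog 294, filed 2019-01-06")
    if m:
        print(m.group("year"), m.group("month"), m.group("day"))  # 2019 01 06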

I couldn’t see myself using SNOBOL for everything - it’s not great for numeric work - so I decided to write my own string functions that faintly mimicked the way it works. I called them before() and after() and they do what it says on the tin, so before(“pterodactyl”, “rod”) returns “pte”, whereas after(“pterodactyl”, “rod”) returns “actyl”. I found these so useful that in every new language I learn, they’re the first things I implement. I’ve done them in Basic, Forth, Pascal, Lisp, POP-11, Ruby, Python and more. I published Turbo Pascal versions in a Byte column, and was gratified to find other programmers using them a few years later. Maybe they’re what will be on my blue plaque (just joking). Python of course provides a string method split() to do this – "pterodactyl".split("rod")[1] returns “actyl” – but I’m now so attached to my before and after that I still prefer them.
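In Python the pair boils down to a few lines. This is a reconstruction rather than the published Turbo Pascal code, and returning an empty string when the substring is missing is an assumption the column doesn’t settle:

    def before(s, sub):
        """Everything in s before the first occurrence of sub ('' if absent - assumed)."""
        head, sep, _ = s.partition(sub)
        return head if sep else ""

    def after(s, sub):
        """Everything in s after the first occurrence of sub ('' if absent - assumed)."""
        _, sep, tail = s.partition(sub)
        return tail if sep else ""

    print(before("pterodactyl", "rod"))  # -> pte
    print(after("pterodactyl", "rod"))   # -> actyl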

Sometime around 1990 I encountered NIFE, the Non Interactive File Editor, from a small Bristol software house called Cadspa (now defunct). This was a DOS command-line program that took any number of text files, plus a file of NIFE commands, and wrote the results back to files. It was scorchingly fast and handled files of almost unlimited size. It employed a Prolog-like, declarative syntax, a sequence of ‘IF...THEN’ statements that could address any part of a word or text line, and any character type. Over the next decade I performed herculean feats with it: updating huge databases when the London phone numbers changed, helping a friend add Ventura Publisher tags to a book in 365 volumes, and, alongside my own Turbo Pascal word-sort program, extracting a keyword list from 18 years’ worth of Byte issues when I was writing The Penguin Dictionary of Computing. I reckon it saved me more than a year’s work there. Much missed because, like Turbo Pascal, it no longer runs on Windows 8 or later.
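The keyword-list job, at least, is easy to imagine in modern terms: a rough Python sketch of a word-frequency harvester across a pile of text files (nothing like NIFE’s declarative syntax, nor the original word-sort program):

    import re
    import sys
    from collections import Counter
    from pathlib import Path

    def keyword_counts(paths, min_len=4):
        """Count longish alphabetic words across many text files - a crude keyword harvest."""
        counts = Counter()
        for p in paths:
            text = Path(p).read_text(errors="ignore").lower()
            counts.update(w for w in re.findall(r"[a-z]+", text) if len(w) >= min_len)
        return counts

    # e.g. python keywords.py byte_*.txt
    for word, n in keyword_counts(sys.argv[1:]).most_common(50):
        print(f"{n:6d}  {word}")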

In several previous columns I’ve mentioned my computer music composition system and – surprise, surprise – it works entirely on strings. When I started designing it many years ago I had to choose a data structure to represent musical notes, and strings seemed, to me at least, an ideal solution. I could have settled for conventional musical notation and represented tunes as strings of the letters A, B, C, D, E, F and G. But the output of my system is MIDI, not musical notation, so instead I employ all the ASCII characters to represent the 128 pitches that MIDI can play. What’s more, pitch, duration, volume and start-time get stored as separate strings so they can be manipulated independently. Python is just brilliant for handling this: sometimes tuples (pitch, time, duration, volume) are what’s needed, other times I might want to mangle pitch, or another of the streams, alone.
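A toy version of that representation might look like this - the character-to-pitch mapping and offset below are illustrative, not the actual system’s:

    # Each musical parameter lives in its own string, one character per note,
    # so each stream can be transformed independently of the others.
    OFFSET = 32  # assumed mapping of printable ASCII onto MIDI note numbers

    def to_pitches(s):
        """Decode a pitch string into MIDI note numbers."""
        return [ord(c) - OFFSET for c in s]

    def transpose(s, semitones):
        """Mangle the pitch stream alone: shift every note, leave the other streams untouched."""
        return "".join(chr(ord(c) + semitones) for c in s)

    pitch_str = "HJLMH"    # five notes, one character each
    dur_str   = "22412"    # matching durations, also one character per note
    events = list(zip(to_pitches(pitch_str), dur_str))   # (pitch, duration) tuples
    print(events)
    print(to_pitches(transpose(pitch_str, 12)))           # the same phrase an octave up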

Computers stopped being ‘all about 0s and 1s’ for me when I quit writing 8088 assembler, and they’re only really about numbers once a year when I do my accounts. The rest of the time they’re all about strings, or ‘words’ if you must...




PHONEY VALUES

Dick Pountain/ Idealog293/ 6th December 2018 11:37:08

For all that I grouse about it, Facebook can sometimes be fun. Last week I posted about a small tech victory. My dear old HTC Desire phone had been playing up – go-slows, total freezes – and no amount of purging apps would fix it. I started eyeing up various Motos on Amazon, but just before committing decided to try one last fix: a full Factory Reset. All my data and most of my apps came back automatically from Google within 15 minutes, and the phone is now rock-solid and faster than when I first got it, full of EE crap. But the response on Facebook should have been predictable: an avalanche of advice from friends recommending replacements costing up to £1000.

I protested that I don’t care about phones: I use mine mostly as a Spotify and Citymapper platform while out walking, and for occasional texts. Shininess, Gorilla Glass, metal cases (I prefer plastic, which bounces instead of denting or scratching), curved screens or the dreaded notch mean nothing. If they ever give phones away in cornflake packets, I might start eating cornflakes. This admission caused great consternation, even concern for my immortal soul.

It’s not that I lack the aesthetic impulse, or the strong compulsions it can create. When, rarely, I see some object, new or old, that really appeals – a guitar, a vase, a chair, a hat – I’ll buy it with little concern for price. What I don’t do is buy things just because they’re new, or because they’re expensive. This may put me at the periphery of mainstream consumer society, but it doesn’t make me any better than others like some ascetic monk. Actually it might make me rather worse: all that matters is that a thing please me, rather than the effect that owning it will have on others.

Way back in 1899, in ‘The Theory Of The Leisure Class’, American economist Thorstein Veblen described the significance of luxury goods that defy the normal rules of pricing by becoming more desirable the more expensive they are – because possession of them is recognised as a sign of high social status. Economists still use the term ‘Veblen Goods’ for Lamborghinis, Beluga caviar, emerald tiaras and the like, which have this effect. Now I’m not saying that a £1000 mobile phone is a Veblen Good: the latest flagship models do have superb abilities that go a long way toward justifying that price (just not abilities that I need or want). When you make the cases of gold, encrusted with diamonds, then they become Veblen Goods.

Theories of value have come a long way since 1899 though. In 2014 French sociologists Luc Boltanski and Arnaud Esquerre published a paper called ‘The Economic Life Of Things’ which advances a novel theory of how stuff gets priced. They separate the universe of things into three planes that they call Standard, Collection and Asset, each plane having its own criteria of value. Standard is the everyday world of mass-produced things, purchased new, which steadily depreciate with use. Collection is the world of old things that become collectable and so gain value again as they age. Asset is the world of jewels, buildings and fine art, purchased not merely for use but as a store of value, a hedge against inflation, even an alternative currency for the super-rich. Here price is what the market will bear, without upper limit (think Picassos and Van Goghs).

The originality of Boltanski and Esquerre’s approach is seen best through diagrams: for each plane they construct a different 2-dimensional chart space onto which you could plot real objects. The Standard space has a horizontal axis from Disposable to Durable, while the vertical axis runs from Distinctive to Generic. Smartphones lie somewhere in the distinctive/disposable quadrant while Mercedes cars are in distinctive/durable. Ballpoint pens are generic/disposable while stainless steel mixing bowls are generic/durable, geddit? In the Collection space the axes are Memento to Memorabilia and Singular to Multiple: your grandfather’s wristwatch is memento whereas Winston Churchill’s watch is memorabilia, and both could be slightly multiple. A painting by an unknown artist is memento/singular whereas one by a famous artist, or a 1950s Hermes handbag, is memorabilia/singular. Your collection of Marvel Comics is memento/very-multiple-indeed.

For Assets the axes run from Unpredictable Price to Predictable Price and from Liquidity to Immobility. Stocks and shares are liquid/unpredictable, Old Masters and rare stamps are liquid/predictable, Tuscan villas are immobile/unpredictable, whereas national treasures and monuments are immobile/predictable (their sale often regulated by treaty). Now imagine an app that scrapes Amazon, eBay, Christies and Sothebys catalogues, using this system to calculate the price (if not the value) of everything: could be the next Google...
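If anyone did build that app, the taxonomy itself would be easy enough to encode - a speculative Python sketch with the planes and axes as summarised above, coordinates placed by eye:

    from dataclasses import dataclass
    from enum import Enum

    class Plane(Enum):
        STANDARD = "standard"      # axes: disposable <-> durable, distinctive <-> generic
        COLLECTION = "collection"  # axes: memento <-> memorabilia, singular <-> multiple
        ASSET = "asset"            # axes: unpredictable <-> predictable price, liquid <-> immobile

    @dataclass
    class Thing:
        name: str
        plane: Plane
        x: float  # position along the plane's horizontal axis, -1.0 to +1.0
        y: float  # position along the plane's vertical axis, -1.0 to +1.0

    smartphone  = Thing("smartphone", Plane.STANDARD, x=-0.6, y=-0.7)       # disposable-ish, distinctive
    mixing_bowl = Thing("steel mixing bowl", Plane.STANDARD, x=0.8, y=0.8)  # durable, generic
    print(smartphone, mixing_bowl)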




HIT THE PANIC BUTTON?

Dick Pountain/ Idealog292/ 2nd November 2018 13:38:17

According to a recent Microsoft press release, their research indicates that almost half of British companies think that their current business models will cease to exist in the next five years thanks to AI, but 51% of them don’t have an AI strategy. While I could describe that as panic-mongering, I won’t. It’s more like straightforward marketing: since Microsoft is currently heavily promoting its AI Academy, AI development platforms and training courses, it’s merely AI bread-and-butter. But the idea of subtly encouraging panic for economic ends is of course as old as civilisation itself.

In his fascinating book ‘On Deep History and the Brain’, US historian Daniel Lord Smail described the way that all social animals - from ants to wolves to bonobos to humans - organise into societies by deliberately manipulating the brain chemistry of themselves and their fellows. This they do by a huge variety of means: pheromones; ingesting drugs; performing dances and rituals; inflicting violence; and for us humans, telling stories (including stories on Facebook about AI). It’s recently been discovered that bees and ants create the division of labour that characterises their societies - queens lay eggs, drones fertilise them, workers and soldiers do everything else - by a remarkably simple mechanism. The queen emits pheromones that alter insulin levels in her ‘subordinates’ (though it’s arguable that she’s actually their prisoner), which changes their feeding habits and body type.

And stories do indeed modify the brain chemistry of human listeners, because everything we think and say is ultimately a matter of brain chemistry: that’s what brains are, electro-chemical computers for processing experience of the world. The chemical part of this processing is what we call ‘emotion’, and the most advanced research in cognitive psychology is revealing more and more about the way that emotion and thought are inextricably intertwined. Which is why AI, despite all the hype and panic, remains so ultimately dumb.

All animals (and plants too) have perceptual systems that sample information from their immediate environment. But animals also have emotions, which are like co-processors that inspect this perceived information to detect threats and opportunities. They attach value to the perceived information - is it edible, or sexy, or dangerous, or funny - which is something that cannot easily be inferred from a mere bitmap. The leading affective neuroscientist Jaak Panksepp discovered seven emotional subsystems in the mammalian brain, each mediated by its own system of neuropeptide hormones: he called them SEEKING (dopamine), RAGE and FEAR (adrenaline and cortisol), LUST and PLAY (sex hormones), CARE and PANIC (oxytocin, opioids and more). Neuroscientist Antonio Damasio further proposes that memories get labelled with the chemical emotional state prevailing when they were laid down, so that when recalled they bring with them values: good, bad, sad, funny and so on.

AI systems could be, probably will be, eventually enabled to fake some kinds of emotional response, but in order to really feel they’d need to have something at stake. Our brains store a continually updated model of the outside world, another of our own body and its current internal state, and continually process the intersection of these two models to see what is threatening or beckoning to us. Meanwhile our memory stores a more or less complete record of our lives to-date along with the values of the things that have happened to us. All our decisions result from integrating these data sources. To provide anything equivalent for an AI system will be staggeringly costly in memory and CPU: the most sophisticated self-driving vehicle is less than a toy by comparison.

Which is not to say that AI is useless, far from it. Just as simpler computers excel at arithmetic or graphics, AI systems can excel at kinds of reasoning in which we are slow or error-prone, precisely because of the emotional content of our own reasoning. Once we stop pretending that they’re intelligent in the same way as us (or ever can be), and acknowledge that they can be given skills that complement our own, then AIs become tools as essential as spreadsheets are now. The very name Artificial Intelligence positively invites this confusion, so perhaps we’d do better to call it Artificial Reasoning or something like that.

And we need to stop pressing the panic button before we can acknowledge these limits of AI. If you design an AI that fires people in order to increase profits, it will. If you design it to kill people, it will. But the same is true of human accountants and soldiers. Lacking emotions, an AI can never have its own interests or ambitions, so it can never be as good or as bad as we can. And if we fail to fit it with a fail-safe off switch then it’s our own stupid fault.





