Dick Pountain/Idealog 265/08 August 2016 09:54
Just back from a holiday in Southern France where I spent much of my time reclining in a cheap, aluminium-framed lounger under an olive tree in the 39° heat, reading a very good book. One day I went down to my tree and found a bizarre creature sitting on the back of my lounger: it was however not any member of the Pokémon family but "merely" a humble cicada. The contrast between a grid of white polythene strips woven on an aluminium frame and the fantastic detail of the creature's eyes and wings could have been an illustration from the book I was reading, "Endless Forms Most Beautiful" by Sean B. Carroll.
Endless Forms is about Evo Devo (Evolutionary Developmental Biology), written by one of its pioneers, making it the most important popular science book since Dawkins' The Blind Watchmaker, and in effect a sequel. Dawkins explained the "Modern Synthesis" of evolutionary theory with molecular biology: the structure of DNA, genes and biological information. Carroll explains how embryology was added to this mix, finally revealing the precise mechanisms via which DNA gets transcribed into the structure of actual plants and animals. It's all quite recent - the Nobel (for Medicine) was awarded only in 1995, to Wieschaus, Lewis and Nüsslein-Volhard - and Carroll's book was published in 2006. That I waited 10 years to read it was sheer cowardice, now bitterly regretted because Carroll makes mind-bendingly complex events marvellously comprehensible. And I'm writing about it here because Evo Devo is all about information, real-time computation and Nature's own 3D-printer.
I've written in previous columns about how DNA stores information, ribosomes transcribe some of it into the proteins from which we're built, and how much of it (once thought "junk") is about control rather than substance. Evo Devo reveals just what all that junk really does, and Charles Darwin should have lived to see it. DNA does indeed encode the information to create our bodies, but genes that encode structural proteins are only a small part of it: most is encoded in switches that get set at "runtime", that is, not at fertilisation of the egg but during the course of embryonic development. These switches get flipped and flopped in real time, by proteins that are expressed only for precise time periods, in precise locations, enabling 3D structures to be built up. Imagine some staggeringly complex meta-3D printer in which the plastic wire is itself continuously generated by another print-head, whose wire is generated by another, down to ten or more levels: and all this choreographed in real time, so that what gets printed varies from second to second. My cowardice did have some basis.
However the even more stunning conclusion of Evo Devo is that, thanks to such deep indirection and parameterisation, the entire diversity of life on this planet can be generated by a few common genes, shared by everything from bacteria to us. Biologists worried for years that there just isn't enough DNA to encode all the different shapes, sizes, colours of life via different genes, and sure enough there isn't. Instead it's encoded in relatively few common genes that generate proteins that set switches in the DNA during embryonic development, like self-modifying computer code. And evolutionary change mostly comes from mutations in these switches rather than the protein-coding genes. Life is actually like a 3D printer controlled by FPGAs (Field Programmable Gate Arrays) that are being configured by self-modifying code in real time. Those of you with experience of software or hardware design should be boggling uncontrollably by now.
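Since I've called it self-modifying code, here's a toy sketch in Python of what I mean - entirely my own illustration, not Carroll's model, and the gene and switch names are invented - in which a handful of shared "genes" are gated by switches that read time and position, and whatever is already expressed in a cell flips the switches for the next step:

# Toy sketch of "genes + switches": a handful of shared Tool Kit genes
# whose regulatory switches are flipped, in time and space, by the
# proteins other genes have already expressed. Purely illustrative.

# Each switch is a rule: express this gene at (time, cell) only if the
# required proteins are present there.
SWITCHES = {
    "gradient": lambda t, cell, present: t == 0,                              # maternal signal, laid down first
    "selector": lambda t, cell, present: "gradient" in present and cell < 3,  # only at one end of the embryo
    "limb":     lambda t, cell, present: "selector" in present and t >= 1,
    "antenna":  lambda t, cell, present: "selector" not in present and t >= 1,
}

def develop(n_cells=6, n_steps=3):
    """Run the embryo forward: what's expressed now flips switches for the next step."""
    expressed = [set() for _ in range(n_cells)]       # proteins present in each cell
    for t in range(n_steps):
        for cell in range(n_cells):
            for gene, switch in SWITCHES.items():
                if switch(t, cell, expressed[cell]):
                    expressed[cell].add(gene)
    return expressed

if __name__ == "__main__":
    for cell, proteins in enumerate(develop()):
        print(f"cell {cell}: {sorted(proteins)}")

Run it and the same four "genes" yield limbs at one end of the embryo and antennae at the other, purely because the switches, not the genes, differ from cell to cell and from moment to moment - which is the Evo Devo point in miniature.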
Carroll explains, with copious examples from fruit flies to chickens, frogs and humans, how mutations in a single switch can make the difference between legs and wings, mice and chickens, geese and centipedes. He christens the set of a few hundred common body-building genes, preserved for over 500 million years, the Tool Kit, and the mutable set of switches that makes up so much of our DNA is its operating system. I'll not lie to you: his book is more challenging than Blind Watchmaker (most copies of which remain unread) but if you're at all curious about where we come from it's essential.
But please forgive me if I can't raise much excitement for Pokémon Go. Superimposing two-dimensional cartoon figures onto the real world is a bit sad when you're confronted by real cicadas. Some cicada species have evolved varying life-cycle lengths to prevent predators relying on them as a food source. To imagine that our technology even approaches the amazing complexity of biological life is purest hubris, an idolatry that mistakes pictures of things for things themselves, and logic for embodied consciousness. If the urge to "collect 'em all" is irresistible, why not become an entomologist?
Thursday, 12 January 2017
NONSENSE AND NONSENSIBILITY
Dick Pountain/Idealog 264/06 July 2016 10:38
When Joshua Brown's Tesla Model S smashed right through a trailer-truck at full speed, back in May in Florida, he paid a terrible price for a delusion that, while it may have arisen first in Silicon Valley, is now rapidly spreading among the world's more credulous techno-optimists. The delusion is that AI is now sufficiently advanced for it to be permitted to drive motor vehicles on public roads without human intervention. It's a delusion I don't suffer from, not because I'm smarter than everyone else, but simply because I live in Central London and ride a Vespa. Here, the thought that any combination of algorithms and sensors short of a full-scale human brain could possibly cope with the torrent of dangerous contingencies one faces is too ludicrous to entertain for even a second - but on long, straight American freeways it could be entertained for a while, to Joshua's awful misfortune.
The theory behind driverless vehicles is superficially plausible, and fits well with current economic orthodoxies: human beings are fallible, distractable creatures whereas computers are Spock-like, unflappable and wholly rational entities that will drive us more safely and hence save a lot of the colossal sums that road accidents cost each year. And perhaps more significant still, they offer to ordinary mortals one great privilege of the super-rich, namely to be driven about effortlessly by a chauffeur.
The theory is however deeply flawed because it inherits the principal delusion of almost all current AI research, namely that human intelligence is based mostly on reason, and that emotion is an error condition to be eradicated as far as possible. This kind of rationalism arises quite naturally in the computer business, because it tends to select mathematically-oriented, nerdish character types (like me) and because computers are so spectacularly good, and billions of times faster than us, at logical operations. It is however totally refuted by recent findings in both cognitive science and neuroscience. From the former, best expressed by Nobel laureate Daniel Kahneman, we learn that the human mind mostly operates via quick, lazy, often systematically-wrong assumptions, and it has to be forced, kicking and screaming, to apply reason to any problem. Despite this we cope fairly well and the world goes on. When we do apply reason it as often as not achieves the opposite of our intention, because of the sheer complexity of the environment and our lack of knowledge of all its boundary conditions.
So does that make me a crazy irrationalist who believes we're powerless to predict anything and despises scientific truth? On the contrary. Neuroscience offers explanations for Kahneman's findings (which were themselves the result of decades of rigorous experiment). Our mental processes are indeed split, not between logic and emotion as hippy gurus once had it, but between novelty and habit. Serious new problems can indeed invoke reason, perhaps even with recourse to written science, but when a problem recurs often enough we eventually store a heuristic approximation of its solution as "habit" which doesn't require fresh thought every time. It's like a stored database procedure, a powerful kind of time-saving compression without which civilisation could never have arisen. Throwing a javelin, riding a horse, driving a car, greeting a colleague: all habits, all fit for purpose most of the time.
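To put the "stored database procedure" analogy in literal code (my illustration, not Kahneman's): habit behaves like a memoised function, where only the first call pays for slow deliberation and every later call returns the cached answer.

import functools
import time

def deliberate_reasoning(problem):
    """System 2: slow, effortful, only invoked when there's no habit to fall back on."""
    time.sleep(0.5)                      # stand-in for expensive fresh thought
    return f"worked-out solution to {problem!r}"

@functools.lru_cache(maxsize=None)
def respond(problem):
    """After the first encounter the answer comes back as 'habit': fast, cheap,
    and possibly stale if the world has changed since it was stored."""
    return deliberate_reasoning(problem)

if __name__ == "__main__":
    for attempt in range(3):
        start = time.perf_counter()
        respond("merge into motorway traffic")
        print(f"attempt {attempt + 1}: {time.perf_counter() - start:.3f}s")
    # attempt 1 is slow (fresh reasoning); attempts 2 and 3 are near-instant habit.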
Affective neuroscience, by studying the limbic system, seat of the emotions, throws more light still. Properly understood, emotions are automatic brain subsystems which evolved to deal rapidly with external threats and opportunities by modifying our nervous system and body chemistry (think fight-or-flight, mating, bonding). What we call emotions are better called feelings, our perceptions of these bodily changes rather than the chemical processes that caused them. Evidence is emerging, from the work of Antonio Damasio and others, that our brains tag each memory they deposit with the emotional state prevailing at the time. Memories aren't neutral but have "good" or "bad" labels, which get weighed in the frontal cortex whenever memories are recalled to assist in solving a new problem. In other words, reason and emotion are completely, inextricably entangled at a physiological level. This mechanism is deeply involved in learning (reward and punishment, dopamine and adrenalin), and even perception itself. We don't see the plain, unvarnished world but rather a continually-updated model in our brain that attaches values to every object and area we encounter.
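As a programmer's cartoon of Damasio's idea rather than a model of it - the episodes and valence numbers below are invented - recall looks something like a lookup whose results arrive pre-weighted with "good" and "bad" tags:

# Illustrative only: memories stored with a "good"/"bad" valence tag, and a
# decision made by weighing the tags of whatever past episodes resemble the
# present situation.
MEMORIES = [
    {"episode": "squeezed past a dump-truck, driver didn't see me", "valence": -0.9},
    {"episode": "filtered past stationary traffic, fine",           "valence": +0.3},
    {"episode": "bus pulled out without indicating",                "valence": -0.7},
]

def gut_feeling(situation_keywords):
    """Average the valence of memories that share a keyword with the situation."""
    relevant = [m["valence"] for m in MEMORIES
                if any(k in m["episode"] for k in situation_keywords)]
    return sum(relevant) / len(relevant) if relevant else 0.0

if __name__ == "__main__":
    feeling = gut_feeling(["dump-truck", "bus"])
    print("proceed" if feeling > 0 else "balk", f"(valence {feeling:+.2f})")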
This is what makes me balk before squeezing my Vespa between that particular dump-truck and that particular double-decker bus, and what would normally tell you not to watch Harry Potter while travelling at full speed on the freeway. But it's something no current AI system can duplicate and perhaps never will: that would involve driverless vehicles being trained for economically-unviable periods using value-aware memory systems that don't yet exist.
ASSAULT AND BATTERY
Dick Pountain/Idealog 263/07 June 2016 11:26
Batteries, doncha just hate them? For the ten thousandth time I forgot to plug in my phone last night so that when I grabbed it to go out it was dead as the proverbial and I had to leave it behind on charge. My HTC phone's battery will actually last over two days if I turn off various transceivers but life is too short to remember which ones. And phones are only the worst example, to the extent that I now find myself consciously trying to avoid buying any gadget that requires batteries. I do have self-winding wristwatches, but as a non-jogger I'm too sedentary to keep them wound and they sometimes stop at midnight on Sunday (tried to train myself to swing my arms more when out walking, to no effect). I don't care for smartwatches but I did recently go back to quartz with a Bauhaus-stylish Braun BN0024 (design by Dietrich Lubs) along with a whole card-full of those irritating button batteries bought off Amazon that may last out my remaining years.
It's not just personal gadgets that suffer from the inadequacy of present batteries: witness the nightmarish problems that airliner manufacturers have had in recent years with in-flight fires caused by the use of lithium-ion cells. It's all about energy density, as I wrote in another recent column (issue 260). We demand ever more power while away from home, and that means deploying batteries that rely on ever more energetic chemistries, which begin to approach the status of explosives. I'm sure it's not just me who feels a frisson of anxiety when I feel how hot my almost-discharged tablet sometimes becomes.
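To put rough numbers on "approaching the status of explosives" (ballpark textbook figures quoted from memory, so treat them as order-of-magnitude only):

# Back-of-envelope specific energies (MJ per kg); rough figures only.
SPECIFIC_ENERGY_MJ_PER_KG = {
    "lithium-ion cell": 0.7,     # roughly 200 Wh/kg
    "TNT":              4.6,     # chemical explosive
    "petrol":           46.0,    # far more energy, but needs atmospheric oxygen to release it
}

li_ion = SPECIFIC_ENERGY_MJ_PER_KG["lithium-ion cell"]
tnt = SPECIFIC_ENERGY_MJ_PER_KG["TNT"]
print(f"A Li-ion cell stores about 1/{tnt / li_ion:.0f} of TNT's energy per kilogram,")
print("and unlike petrol a faulty cell can release that energy very quickly indeed.")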
Wholly new battery technologies look likely in future, perhaps non-chemical ones that merely store power drawn from the mains into hyper-capacitors fabricated using graphenes. Energy is still energy, but such ideas raise the possibility of lowering energy *density* by spreading charge over larger volumes - for example by building the storage medium into the actual casing of a gadget using graphene/plastic composites. Or perhaps hyper-capacitors might constantly trickle-charge themselves on the move by combining kinetic, solar and induction sources.
As always Nature found its own solution to this problem, from which we may be able to learn something, and it turns out that distributing the load is indeed it. Nature had an unfair advantage in that its design and development department has employed every living creature that's ever existed, working on the task for around 4 billion years, but intriguingly that colossal effort came up with a single solution very early on that is still repeated almost everywhere: the mitochondrion.
Virtually all the cells of living things above the level of bacteria contain both a nucleus (the cell's database of DNA blueprints from which it reproduces and maintains itself) and a number of mitochondria, the cell's battery chargers which power all its processes by burning glucose to create adenosine triphosphate (ATP), the cellular energy fuel. Mitochondria contain their own DNA, separate from that in the nucleus, leading evolutionary biologists to postulate that billions of years ago they were independent single-celled creatures who "came in from the cold" and became symbiotic components of all other cells. Some cells, like red blood cells (simple containers for haemoglobin), contain no mitochondria, while others, like liver cells, which are chemical factories, contain thousands. Every cell is in effect its own battery, constantly recharged by consuming oxygen from the air you breathe and glucose from the food you eat to drive these self-replicating chargers, the mitochondria.
So has Nature also solved the problems of limited battery lifespan and loss of efficiency (the "memory effect")? No it hasn't, which is why we all eventually die. However longevity research is quite as popular among the Silicon Valley billionaire digerati as are driverless cars and Mars colonies, and recent years have seen significant advances in our understanding of mitochondrial aging. Enzymes called sirtuins stimulate production of new mitochondria and maintain existing ones, while each cell's nucleus continually sends "watchdog" signals to its mitochondria to keep them switched on. The sirtuin SIRT1 is crucial to this signalling, and in turn requires NAD (nicotinamide adenine dinucleotide) for its effect, but NAD levels tend to fall with age. Many of the tricks shown to slow aging in lab animals - calorie-restricted diets, dietary components like resveratrol (red wine) and pterostilbene (blueberries) - may work by encouraging the production of more NAD.
Now imagine synthetic mitochondria, fabricated from silicon and graphene by nano-engineering, millions of them charging a hyper-capacitor shell by burning a hydrocarbon fuel with atmospheric oxygen. Yes, you'll simply use your phone to stir your tea, with at least one sugar. I await thanks from the sugar industry for this solution to its current travails...
ALGORITHMOPHOBIA
Dick Pountain/Idealog 262/05 May 2016 11:48
The ability to explain algorithms has always been a badge of nerdhood, the sort of trick people would occasionally ask you to perform when conversation flagged at a party. Nowadays however everyone thinks they know what an algorithm is, and many people don't like them much. Algorithms seem to have achieved this new familiarity/notoriety because of their use by social media, especially Google, Facebook and Instagram. To many people an algorithm implies the computer getting a bit too smart, knowing who you are and hence treating you differently from everyone else - which is fair enough as that's mostly what they are supposed to be for in this context. However what kind of distinction we're talking about does matter: is it showing you a different advert for trainers from the one your dad sees, or is it selecting you as a target for a Hellfire missile?
Some newspapers are having a ball with "algorithm" as a synonym for the inhumane objectivity of computers, liable to crush our privacy or worse. Here are two sample headlines from the Guardian over the last few weeks: "Do we want our children taught by humans or algorithms?", and "Has a rampaging AI algorithm really killed thousands in Pakistan?" Even the sober New York Times deemed it newsworthy when Instagram adopted an algorithm-based personalized feed in place of its previous reverse-chronological feed (a move made last year by its parent Facebook).
I'm not algorithmophobic myself, for the obvious reason that I've spent years using, analysing, even writing a Byte column about the darned things, but this experience grants me a more-than-average awareness of what algorithms can and can't do, where they are appropriate and what the alternatives are. What algorithms can and can't do is the subject of Algorithmic Complexity Theory, and it's only at the most devastatingly boring party that one is likely to be asked to explain that. ACT can tell you about whole classes of problem for which algorithms that run in manageable time aren't available. As for alternatives to algorithms, the most important is permitting raw data to train a neural network, which is the way the human brain appears to work: the distinction being that writing an algorithm requires you to understand a phenomenon sufficiently to model it with algebraic functions, whereas a neural net sifts structures from the data stream in an ever-changing fashion, producing no human-understandable theory of how that phenomenon works.
Some of the more important "algorithms" that are coming to affect our lives are actually more akin to the latter, applying learning networks to big data sets like bank transactions and supermarket purchases to determine your credit rating or your special offers. However those algorithms that worry people most tend not to be of that sort, but are algebraically based, measuring multiple variables and applying multiple weightings to them to achieve an ever greater appearance of "intelligence". They might even contain a learning component that explicitly alters weightings on the fly, Google's famous PageRank algorithm being an example.
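The general shape of such a system - in a toy sketch of my own, emphatically not Google's PageRank or anyone else's production code - is a weighted sum over measured signals, plus a crude rule that nudges the weights when user behaviour disagrees with the ranking:

# A toy hand-built ranking algorithm: a weighted sum over measured signals,
# with a simple feedback step that shifts weight toward the signals of items
# users keep choosing. Signal names and numbers are invented.
WEIGHTS = {"incoming_links": 0.5, "freshness": 0.3, "keyword_match": 0.2}

def score(page):
    return sum(WEIGHTS[signal] * page[signal] for signal in WEIGHTS)

def nudge_weights(page, clicked, rate=0.05):
    """If a low-scoring page keeps getting clicked, shift weight toward its strong signals."""
    direction = 1.0 if clicked else -1.0
    for signal in WEIGHTS:
        WEIGHTS[signal] += rate * direction * page[signal]
    total = sum(WEIGHTS.values())
    for signal in WEIGHTS:                     # re-normalise so weights stay comparable
        WEIGHTS[signal] /= total

if __name__ == "__main__":
    page = {"incoming_links": 0.1, "freshness": 0.9, "keyword_match": 0.8}
    print("before:", round(score(page), 3), WEIGHTS)
    for _ in range(20):                        # users keep clicking this fresh, relevant page
        nudge_weights(page, clicked=True)
    print("after: ", round(score(page), 3), WEIGHTS)

Tweaking the weights by hand is exactly the kind of intervention that upsets small online businesses when Google does it, which is where the next paragraph comes in.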
The advantage of such algorithms is that they can be tweaked by human programmers to improve them, though this too can be a source of unpopularity: every time Google modifies PageRank a host of small online businesses catch it in the neck. Another disadvantage of such algorithms is that they can "rot" by decreasing rather than increasing in performance over time, prime examples being Amazon's you-might-also-like and Twitter's people-you-might-want-to-follow. A few years ago I was spooked by the accuracy of Amazon's recommendations, but that spooking ceased after it offered me a Jeffrey Archer novel: likewise when Twitter thought I might wish to follow Jimmy Carr, Fearne Cotton, Jeremy Clarkson and Chris Moyles.
Flickr too employs a secret algorithm to measure the "Interestingness" of my photographs: number of views is one small component, as is the status of the people who favourited them (not unlike PageRank's incoming links), but many more variables remain a source of speculation in the forums. I recently viewed my Top 200 pictures by Interestingness for the first time in ages and was pleasantly surprised to find the algorithm much improved. My Top 200 now contains more manipulated than straight-from-camera pictures; three of my top twenty are from recent months and most from the last year; all 200 are pix I'd have chosen myself; their order is quite different from "Top 200 ranked by Views", that is, what other users prefer. As someone who takes snapshots mostly as raw material for manipulation, the algorithm is now suggesting that I'm improving rather than stagnating, and closely approximates my own taste, which I find both remarkable and encouraging. The lesson? Good algorithms in appropriate contexts are good, bad algorithms in inappropriate contexts are bad. But you already knew that, didn't you...
Monday, 19 September 2016
IDENTITY CRISIS
Dick Pountain/Idealog 261/04 April 2016 13:15
It's a cliché, but none the less true, that many IT problems are caused by the unholy rapidity of change in our industry. However I've just had an irritating lesson in the opposite case, where sometimes things that remain the same for too long can get you into trouble. It all started last week when a friend in Spain emailed to say that mail to my main dick@dickpountain.co.uk address was bouncing, and it soon turned into a tragicomedy.
A test mail showed mail was indeed broken, and that my website at www.dickpountain.co.uk had become inaccessible too. Something nasty had happened to my domain. This wasn't without precedent as I wrote here exactly a year ago in PC Pro 249 about the way Google's tightened security had busted my site. The first step, equivalent in plumbing terms to lifting the manhole lid, is to log into the website of my domain registrar Freeparking, which I duly attempted only to be rudely rebuffed. Their website had been completely redesigned and my password no longer worked - a message said it was too weak so they'd automatically replaced it with a stronger one. A stronger one that unfortunately they'd omitted to tell me.
So, click on the "Forgot Password" button where it asks me to enter the email address my account was opened with. Trying all four addresses I've used over the last decade, one after the other, garners nothing but "password change request failed". Send an email to Freeparking support, who reply within the hour (my experience with them has always been good). Unfortunately their reply is that my domain has expired. Gobsmacked, because for the eight years I've had the domain they've always sent me a renewal reminder in plenty of time. Flurry of emails establishes that there's still time to renew before the domain name gets scrubbed or sold, but to do that I have to get into my account. Can they tell me the password? No they can't, but they can tell me the right email address is my BTinternet address. Go back to Forgot Password and use that, but still no password reset mail.
At this point I must explain my neo-byzantine email architecture. Gmail is my main hub where I read and write all my mail. It gathers POP mail from my old Aol and Cix addresses, and dickpountain.co.uk is simply redirected into it, but all mail redirected from dickpountain.co.uk also gets copied to my BTinternet account as a sort of backup. Unfortunately, with the domain expired, that feedback loop appears to have broken too.
By now I'm starting to gibber under my breath, and the word "nightmare" has cropped up in a couple of messages to support. I ask them to change the email address on my account to my Gmail address, but they can't do that without full identity verification, so I send a scan of my passport, they change the address, and password resets still don't arrive... Now desperate, I try once more entering each of the four old addresses and make this discovery: three of them say "password reset request failed", but Aol actually says nothing. Whirring of mental cogs, clatter of deductions: the account address is actually Aol and reset requests are going there, but Gmail isn't harvesting them. Go to Aol.com, try to access my email account (which has been dormant for the best part of a decade), and am told that due to "irregular activities" the account has been locked. I now know that a Lenovo Yoga doesn't show teeth marks...
Another whole email correspondence ensues with aolukverify@aol.com, with both a passport and water bill required this time, and I get back into the Aol account, where sure enough are all the password reset mails, as well as Freeparking's renewal reminders. I get back into my Freeparking account and after a bit of nonsense involving PayPal, VISA and HSBC I renew my domain. Don't get me wrong, I'm not complaining about the serious way both companies treated my security, and their support people were both helpful and efficient. There's really no moral to this tale beyond the 2nd Law of Thermodynamics: ignore something for long enough and entropy will break it while your back is turned.
The only thing is, it sparked off a horrible fantasy. It's the year 2025 and President Trump is getting ratty as the end of his second term approaches. Vladimir Putin, who has married one of his ex-wives, makes one nasty jibe too many over the phone and Donald presses the Big Red button - then thinks better of it and goes to the Big Green Reset button. He can't remember the password, and on pressing Forgot Password the memorable question is the middle name of his second wife (case-and-accent-sensitive)...
Monday, 8 August 2016
OLOGIES AND ACIES
Dick Pountain/ Idealog 260 /09 March 2016 13:42
I imagine many readers are well old enough to remember BT's 1988 TV advert starring Maureen Lipman, where she comforted her grandson for his bad exam results by pointing out that he'd passed an "ology" (even if it was just sociology). I've never obtained an ology myself, only an "istry" or two, but in any case I'm actually rather more interested in "acies": literacy, numeracy and a couple of others that have no "acy" name.
Not a day goes by without me being thankful for receiving an excellent scientific education. A couple of decades ago I'd have thought twice before admitting that, but no longer, because pop science has become a hugely important part of popular culture, from TED talks to sci-fi movies, via bookshelf mile after bookshelf mile of explanatory books on cosmology, neuroscience, genetics, mathematics, particle physics, even a few ologies. Being a nerd is now a badge of honour. But my thankfulness has little to do with any of that, and more to do with the way basic numeracy, plus a knowledge of statistics ("riskacy"?) and energetics ("ergacy"?), help me understand everything that life throws at me, from everyday accidents and illnesses, through politics, to my entire philosophical outlook.
Take for example relationships with doctors. As an aging male I'm on the receiving end of a variety of government-sponsored preventive medicine initiatives, aimed at reducing the incidence of heart attack, stroke, diabetes and other disorders. After an annual battery of tests I'm encouraged to consider taking a variety of drugs, but before agreeing I ask my GP to show me the test results on his PC screen, both as annual historical graphs and raw figures compared to recommended ranges. When shown my thyroid hormone level marginally out of range, I can argue about experimental error and standard deviations, and win, since my doctor's no statistician. This process has led me to take lisinopril for my blood pressure, but refuse statins for my marginal cholesterol and ditto for thyroxin.
Numeracy, particularly concerning percentages and rates of change (i.e. calculus), is becoming essential to an understanding of just about everything. If some website tells you that eating hot dogs increases your risk of stomach cancer by 20%, you need to be able to ask from what base rate: 0.000103 rising to 0.000124 doesn't sound nearly so scary. Western citizens face a risk of death from terrorism way below that of being in a car crash, but those risks *feel* very different subjectively. We accept the risk of driving more readily than that of dying from an inexplicable act of violence; our politicians know this and so over-react to terrorism and under-react to road safety. But the "acy" that's most poorly distributed of all concerns energetics.
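Here's that hot-dog arithmetic spelled out as a two-line calculation, using the illustrative base rate above:

# Relative vs absolute risk: the same "20% increase" stated both ways.
base_rate = 0.000103            # risk before the hot dogs (illustrative figure from the text)
relative_increase = 0.20

new_rate = base_rate * (1 + relative_increase)
extra_cases_per_million = (new_rate - base_rate) * 1_000_000

print(f"risk rises from {base_rate:.6f} to {new_rate:.6f}")
print(f"that is about {extra_cases_per_million:.0f} extra cases per million people")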
Perhaps only a minority of scientists, and almost no lay people, understand the laws of thermodynamics in theory, let alone have an intuitive grasp that could be usefully applied to everyday life. Thanks to the pop science boom, everyone knows Einstein's formula E = mc², but that's only marginally relevant to everyday life since we don't ride nuclear-powered cars or buses, and our bodies run on chemical rather than nuclear reactions. Hence the confusion among would-be dieters over counting calories: does water have any calories? Do carrots have more than celery?
Some variables that really do matter for an energetic understanding of life are energy density and the rate at which energy gets converted from one form to another. You could place a bowl of porridge on the table alongside a piece of dynamite that contains the same number of calories. Dynamite is several times more energy-dense than watery porridge (though, weight for weight, it actually stores less than the dry oats do), so it will be a fairly small piece. More important though, the calories in the porridge (as starches, sugars, protein) get converted to muscular effort and heat rather slowly by your digestive system, while a detonator turns the dynamite's calories into light, heat and sound very quickly indeed. But grind wheat or oats to a fine-enough powder, mix with air as a dust cloud, and deadly industrial explosions can occur.
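The porridge/dynamite point is really about power, that is energy divided by the time over which it's released; a rough sum (round numbers of my own choosing) makes the gap vivid:

# Same energy, wildly different power: a rough illustration of the porridge
# vs dynamite comparison. Figures are round numbers for the sake of the sums.
energy_joules = 1_500_000          # ~360 kcal: a decent bowl of porridge, or a smallish lump of dynamite

digestion_seconds = 3 * 3600       # the body draws the porridge down over hours
detonation_seconds = 0.0001        # a detonation releases it in a fraction of a millisecond

print(f"porridge: {energy_joules / digestion_seconds:,.0f} W (a gentle glow)")
print(f"dynamite: {energy_joules / detonation_seconds:.2e} W (a bang)")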
Energy density calculations affect the mobile device business greatly, both when seeking new battery technologies and when considering the safety of existing ones like lithium-ion (just ask Boeing). As for transport, fossil fuels and climate change, they're equally crucial. Electric cars like Tesla's are just about doable now, but electric airliners aren't and may never be, because the energy density of batteries compared to hydrocarbons is nowhere near enough. And when people fantasise online about the possibility of transporting the human race to colonise Mars, energetics is seldom properly discussed. We all ultimately live off energy (and negative entropy) that we receive from sunlight, but Mars is much further away. Try working out the energetics of "terraforming" before you book a ticket...
GAME ON?
Dick Pountain/ Idealog 259 /08 February 2016 11:16
I wouldn't ever describe myself as a gamer, but that's not to say that I've never played any computer games. On the contrary, I was once so hooked on Microsoft's "FreeCell" version of solitaire that I would download lists of solutions and complexity analyses by maths nerds. I was almost relieved when those miserable sods at Redmond removed it from Windows 7, and have resisted buying any other version. Long, long before that I played text adventures like Zork (under CP/M), Wizardry (crude graphics, on Apple II but nevertheless highly addictive), and graphic shooters like Doom. I even finished the hilariously grisly Duke Nukem. I still play a single game - the gorgeous French "stretchy" platform game Contre Jour - on my Android tablet.
So, with this rap-sheet, how can I claim not to be a gamer? Because I lost all interest in shoot-'em-ups after Duke Nukem, never got into the modern generation of super-realistic shooters like Call of Duty and Grand Theft Auto, and have never purchased a computer for its game-play performance. I do realise that this cuts me off from a major strand of popular culture among today's youth, and *the* major source of current entertainment industry revenues, but I had no idea just how far I'd cut myself off until I read an article in the Guardian last week.
Called "Why my dream of becoming a pro gamer ended in utter failure" (http://gu.com/p/4fzjt/sbl), this fascinating article by tech reporter Alex Hern came as a revelation to me. First of all, I had only the vaguest idea that computer games were being played for money, but secondly I was utterly clueless as to exactly *how* these games are being monetised. The games Hern played aren't GTA-style shooters but up-to-date versions of that mean old Wizardry I used to play, in which play proceeds by casting spells, chosen from a range of zillions. The strategy of these games, played online against human opponents, lies in carefully choosing the deck of spell cards you'll deploy, and in how and when to deploy each one. In game-theoretic terms this is fairly close to Poker, revolving around forming a mental map of your opponents' minds and strategies. And like Poker, these games (for example Hearthstone, which Hern tried) are played in championship series with huge cash prizes of £100,000, but as he soon realized, only one person gets that pay-off and the rest get nothing for a huge expenditure of playing effort. Instead the way most pro gamers get a regular, but more modest, pay-off is by setting up a channel on social-network Twitch, on which people watch you play while being shown paid-for ads.
I'll say that again in case it hasn't sunk in. You're playing a computer simulation of imaginary spell casting, against invisible opponents via a comms link, and people are paying to watch. This intrigues me because it fits so beautifully into a new analysis of modern economies - one might call it the "Uberisation Of Everything" - that I'm, along with many others, trying to explore. Everyone has recently been getting all whooped up about robots stealing our jobs, but for many young people the miserable jobs on offer are no longer worth protecting, and they dream instead of getting rich quickly by exploiting what talents they were born with: a pretty face, a fine voice, a strong imagination, in football, in hip-hop, or... in streaming Hearthstone.
IT lies at the very heart of this phenomenon. Long before robots get smart enough to do all human jobs, computers are assisting humans to do jobs that once required enormous, sometimes lifelong, effort to learn. Uber lets you be a taxi-driver without doing "The Knowledge"; a synthesiser makes you into an instant keyboard player and auto-tune a viable singer; an iPhone can make you a movie director; and Twitch can make you a Poker, or Hearthstone, or Magic pro. The casino aspect of all this, that your luck might make you instantly rich so you don't have to work, merely mirrors the official morality of the finance sector, where young dealers can make billion dollar plays and end up driving Ferraris (and very rarely in jail, LIBOR-fiddling notwithstanding). This is the economics, not of the Wild West itself, but of Buffalo Bill's Wild West Show. Over recent decades the media have so thoroughly exposed us all to the lifestyles of billionaires that now everyone aspires to be a star at something, work is regarded as the curse that Oscar Wilde always told us it was, and money (lots of it) is seen as the primary means to purchase pleasure and self-esteem. The Protestant Work Ethic that motivated our parents or grandparents is being flushed spiralling down the pan...