Monday, 24 August 2020

A QUANTUM OF SOLACE?

 Dick Pountain/ Idealog304/ 3rd Nov 2019

When Google announced, on Oct 24th, that it had achieved 'quantum supremacy' -- that is, had performed a calculation on a quantum computer that any conventional computer would need an infeasibly long time to match -- I was forcefully reminded that quantum computing is a subject I've been avoiding in this column for 25 years. That prompted a further realisation that it's because I'm sceptical of the claims that have been made. I should hasten to add that I'm not sceptical about quantum mechanics per se (though I do veer closer to Einstein than to Bohr, am more impressed by Carver Mead's Collective Electrodynamics than by Copenhagen, and find 'many worlds' frankly ludicrous). Nor am I sceptical of the theory of quantum computation itself, though the last time I wrote about it was in Byte in 1997. No, what I'm sceptical of are the pragmatic engineering prospects for its timely implementation.

The last 60 years saw our world transformed by a new industrial revolution in electronics, gifting us the internet, the smartphone, Google searches and Wikipedia, Alexa and Oyster cards. The pace of that revolution was never uniform but accelerated to a fantastic extent from the early 1960s thanks to the invention of CMOS, the Complementary Metal-Oxide-Semiconductor fabrication process. CMOS had a property shared by few other technologies, namely that it became much, much cheaper and faster the smaller you made it, resulting in 'Moore's Law', that doubling of power and halving of cost every two years that's only now showing any sign of levelling off.  That's how you got a smartphone as powerful as a '90s supercomputer in your pocket.  CMOS is a solid-state process where electrons whizz around metal tracks deposited on treated silicon, which makes it amenable to easy duplication by what amounts to a form of printing. 
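To see what that doubling compounds to, here's a back-of-the-envelope sketch in Python; the two-year doubling period is the classic formulation quoted above, though the real cadence has wobbled:

```python
# Back-of-the-envelope Moore's Law arithmetic: doubling every two years
# (the classic formulation; the real-world cadence has varied) compounds
# to roughly a thousand-fold gain every two decades.

def moores_law_factor(years, doubling_period=2.0):
    """Cumulative improvement factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for span in (10, 20, 30):
    print(f"{span} years -> ~{moores_law_factor(span):,.0f}x")
# 20 years -> ~1,024x, 30 years -> ~32,768x: which is roughly why a 2010s
# phone can match a '90s supercomputer.
```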

You'll have seen pictures of Google's Sycamore quantum computer that may have achieved 'supremacy' (though IBM is disputing it). It looks more like a microbrewery than a computer. Its 53 working quantum bits are indeed solid state, but they're superconductors that operate at microwave frequencies and near absolute zero, immersed in liquid helium. The quantum superpositions upon which computation depends collapse at higher temperatures and in the presence of radio noise, and there's no prospect that such an implementation could ever achieve the benign scaling properties of CMOS. Admittedly a single qubit can in theory do the work of millions of CMOS bits, but the algorithms needed to exploit that advantage are non-intuitive and opaque, and the results of computation are difficult to extract correctly, requiring error-correction techniques that remain unproven at any useful scale. It's not years but decades, or more, from practicality.
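To put rough numbers on that single-qubit claim, and on why classical simulation gives up at around Sycamore's size, here's a small Python sketch; the memory figures are just arithmetic, and the two-qubit example uses a textbook Hadamard gate:

```python
import numpy as np

# An n-qubit register is described by 2**n complex amplitudes. Simulating it
# classically means storing (and transforming) that whole vector, which is
# why ~50 qubits is roughly where ordinary machines give up.

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):  # complex128
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 53):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n:2d} qubits -> {gib:,.1f} GiB of amplitudes")

# A 2-qubit example: a Hadamard gate on one qubit puts the register into an
# equal superposition of |00> and |01>, i.e. both values 'present' at once.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
psi = np.zeros(4); psi[0] = 1.0      # start in |00>
psi = np.kron(I, H) @ psi            # apply H to one qubit
print(psi)                           # ~[0.707, 0.707, 0, 0]
```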

Given this enormous difficulty, why is so much investment going into quantum computing right now? Thanks to two classes of problem that are provenly intractable on conventional computers, but of great interest to extremely wealthy sponsors. The first is the cracking of public-key encryption, a high priority for the world's intelligence agencies which therefore receives defence funds.  The second is the protein-folding problem in biochemistry. Chains of hundreds of amino-acids that constitute enzymes can fold and link to themselves in a myriad different ways, only one of which will produce the proper behaviour of that enzyme, and that behaviour is the target for synthetic drugs. Big Pharma would love a quantum computer that could simulate such folding in real time, like a CAD/CAM system for designing monoclonal antibodies. 
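A toy, Levinthal-style count shows where the intractability comes from; the three-conformations-per-residue figure is a gross simplification invented purely for illustration, not a real folding model:

```python
# Toy Levinthal-style count: if each residue of a chain could take just three
# backbone conformations (an assumed toy figure), exhaustive search over a
# 150-residue enzyme is hopeless on any classical machine.

conformations_per_residue = 3
residues = 150
total = conformations_per_residue ** residues
print(f"~{total:.2e} conformations to test")        # ~3.7e71

# Even at a wildly optimistic 1e18 tests per second, brute force would take
# vastly longer than the age of the universe (~4e17 seconds).
tests_per_second = 1e18
print(f"~{total / tests_per_second:.2e} seconds at 1e18 tests/s")
```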

What worries me is that the hype surrounding quantum computing is of just the sort that's guaranteed to bewitch technologically-illiterate politicians, and it may be resulting in poor allocation of computer science funding. The protein-folding problem is an extreme example of the class of optimisation problems -- others crop up in banking, transport routing, storage allocation, product pricing and so on -- all of which are of enormous commercial importance and have been the subject of much research effort. For example, twenty years ago constraint solving was one very promising line of study: when faced with an intractably large number of possibilities, apply and propagate constraints to severely prune the tree of possibilities rather than trying to traverse it all (see the sketch below). The promise of quantum computers is precisely that, assuming you could assemble enough qubits, they could indeed just test all the branches, thanks to superposition. In recent years the flow of constraint satisfaction papers seems to have dwindled: is this because the field has struck an actual impasse, or because the chimera of imminent quantum computers is diverting effort? Perhaps a hybrid approach to these sorts of problem might be more productive: say, hardware assistance for constraint solving, plus deep learning, plus analog architectures, with shared quantum servers anticipated as one, fairly distant, prospect rather than the only bet.
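For the curious, here is a minimal sketch of that prune-rather-than-traverse idea: a toy map-colouring problem solved by backtracking with forward checking, in Python. The map and colours are invented for illustration; real constraint solvers are far more sophisticated.

```python
# A toy map-colouring problem solved by backtracking plus forward checking:
# every time a region gets a colour, that colour is struck from the domains
# of its uncoloured neighbours, so dead branches are pruned before they are
# ever explored rather than enumerated.

NEIGHBOURS = {                      # an invented four-region map
    'A': ['B', 'C'],
    'B': ['A', 'C', 'D'],
    'C': ['A', 'B', 'D'],
    'D': ['B', 'C'],
}
COLOURS = ['red', 'green', 'blue']

def solve(assignment, domains):
    if len(assignment) == len(NEIGHBOURS):
        return assignment
    var = next(v for v in NEIGHBOURS if v not in assignment)
    for colour in domains[var]:
        new_domains = {v: list(d) for v, d in domains.items()}
        new_domains[var] = [colour]
        consistent = True
        for n in NEIGHBOURS[var]:
            if n not in assignment and colour in new_domains[n]:
                new_domains[n].remove(colour)   # propagate the constraint
                if not new_domains[n]:          # a neighbour has no options left:
                    consistent = False          # prune this whole subtree
                    break
        if consistent:
            result = solve({**assignment, var: colour}, new_domains)
            if result:
                return result
    return None

print(solve({}, {v: list(COLOURS) for v in NEIGHBOURS}))
# {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```

The point is that each assignment immediately shrinks the neighbours' options, so whole subtrees die before they're visited.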


THE SKINNER BOX

 Dick Pountain/ Idealog 303/ 4th October 2019 10:27:48

We live in paranoid times, and at least part of that paranoia is being provoked by advances in technology. New techniques of surveillance and prediction cut two ways: they can be used to prevent crime and to predict illness, but they can also be abused for social control and political repression – which of these one sees as more important is becoming a matter of high controversy. Those recent street demonstrations in Hong Kong highlighted the way that sophisticated facial recognition tech, when combined with CCTV built into special lamp-posts, can enable a state to track and arrest individuals at will.

But the potential problems go way further than this, which is merely an extension of current law-enforcement technology. Huge advances in AI and Deep Learning are making it possible to refine those more subtle means of social control often referred to as ‘nudging’. To nudge means getting people to do what you want them to do, or what is deemed good for them, not by direct coercion but by clever choice of defaults that exploit people’s natural biases and laziness (both of which we understand better than ever before thanks to the ground-breaking psychological research of Daniel Kahneman and Amos Tversky).

The arguments for and against nudging involve some subtle philosophical principles, which I’ll try to explain as painlessly as possible. Getting people to do “what’s good for them” raises several questions: who decides what’s good; is their decision correct; even if it is, do we have the right to impose it; and what about free will? Liberal democracy (which is what we still, just about, have, certainly compared to Russia or China) depends upon citizens being capable of making free decisions about matters important to the conduct of their own lives. But what if advertising, or addiction, or those intrinsic defects of human reasoning that Kahneman uncovered, so distort their reckoning as to make them no longer meaningfully free – what if they’re behaving in ways contrary to their own expressed interests and injurious to their health? Examples of such behaviours, and the success with which we’ve dealt with them, might be compulsory seat belts in cars (success), crash helmets for motorcyclists (success), smoking bans (partial success), US gun control (total failure).

Such control is called “paternalism”, and some degree of it is necessary to the operation of the state in complex modern societies, wherever the stakes are sufficiently high (as with smoking) and the costs of imposition, in both money and offended freedom, are sufficiently low. However, there are libertarian critics who reject any sort of paternalism at all, while an in-between position, “libertarian paternalism”, claims that the state has no right to impose but may only nudge people toward correct decisions, for example over opting in versus opting out of various kinds of agreement – mobile phone contracts, warranties, mortgages, privacy agreements. People are lazy and will usually go with the default option, careful choice of which can nudge rather than compel them to the desired decision.

The thing is, advances in AI are already enormously amplifying the opportunities for nudging, to a paranoia-inducing degree. The nastiest thing I saw at the recent AI conference in King’s Cross was an app that reads shoppers’ emotional states using facial analysis and then raises or lowers the price of items offered to them on the fly! Or how about Ctrl-Labs’ app that non-invasively reads your intention to move a cursor (last week Facebook bought the firm)? Since vocal cords are muscles too, that non-invasive approach might conceivably be extended with even deeper learning to predict your speech intentions, the voice in your head, your thoughts…

I avoid both extremes in such arguments about paternalism. I do believe that the climate crisis is real and that we’ll need to modify human behaviour a lot in order to survive, so any help will be useful. On the other hand I was once an editor at Oz magazine and something of a libertarian rabble-rouser in the ’60s. In a recent Guardian interview, the acerbic comedy writer Chris Morris (‘Brass Eye’, ‘Four Lions’) described meeting an AA man who showed him the monitoring kit in his van that recorded his driving habits. Morris asked “Isn’t that creepy?” but the man replied “Not really. My daughter’s just passed her driving test and I’ve got half-price insurance for her. A black box recorder in her car and a camera on the dashboard measure exactly how she drives and monitor her facial movements. As long as she stays within the parameters set by the insurance company, her premium stays low.” This sort of super-nudge comes uncomfortably close to China’s punitive Social Credit system: Morris called it a “Skinner Box”, after the American behaviourist BF Skinner who used one to condition his rats…



Tuesday, 14 April 2020

I SECOND THAT EMOTION

Dick Pountain/ Idealog302/ 2nd September 2019 10:24:10

Regular readers might have been surprised when I devoted my previous two columns to AI, a topic about which I’ve often expressed scepticism here. There’s no reason to be, because it’s quite possible to be impressed by the latest advances in AI hardware and software while remaining a total sceptic about the field’s more hubristic claims. And a book that reinforces my scepticism has arrived at precisely the right time, ‘The Strange Order of Things’ by Antonio Damasio. He’s a Portuguese-American professor of neuroscience who made his name researching the neurology of the emotions, and the critical role they play in high-level cognition (contrary to current orthodoxy).

Damasio also has a deep interest in the philosophy of consciousness, to which this book is a remarkable contribution. He admires the 17th-century Dutch philosopher Baruch Spinoza, who among his many other contributions made a crucial observation about biology – that the essence of all living creatures is to create a boundary between themselves and the outside world, within which they try their hardest to maintain a steady state, the failure of which results in death. Spinoza called this urge to persevere ‘conatus’, but Damasio prefers the more modern name ‘homeostasis’.

The first living creatures – whose precise details we don’t, and may never, know – must have emerged by enclosing a tiny volume of the surrounding sea water with a membrane of protein or lipid, then controlling the osmotic pressure inside to prevent shrivelling or bursting. And they became able to split this container in two to reproduce themselves. Whether one calls this conatus or homeostasis, it is the original source of all value. When you’re trying to preserve your existence by maintaining your internal environment, it becomes necessary to perceive threats and benefits to it and to act upon them, so mechanisms that distinguish ‘good’ from ‘bad’ are hard-wired not merely into the first single-celled creatures, but into every cell of multicellular creatures – ourselves included – that evolved from them. The simplest bacteria possess ‘senses’ that cause them to swim toward food or away from harmful chemicals, or to and from light, or whatever.
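As a caricature of how little machinery that built-in 'good versus bad' evaluation needs, here is a toy run-and-tumble agent in Python: keep going while the chemical gradient improves, tumble to a random heading when it worsens. The nutrient landscape is invented for illustration and real bacteria are far subtler, but the principle is the same.

```python
import math, random

# Caricature of bacterial run-and-tumble chemotaxis: no brain required,
# the evaluation ('better'/'worse') is built into the mechanism itself.

def food(x, y):
    """A single invented nutrient peak at (10, 10); higher is better."""
    return -((x - 10) ** 2 + (y - 10) ** 2)

x, y = 0.0, 0.0
dx, dy = 1.0, 0.0
last = food(x, y)
for _ in range(300):
    x, y = x + dx, y + dy
    now = food(x, y)
    if now < last:                        # things got worse: tumble
        angle = random.uniform(0, 2 * math.pi)
        dx, dy = math.cos(angle), math.sin(angle)
    last = now                            # things got better: keep running
print(f"finished near ({x:.1f}, {y:.1f}); the nutrient peak is at (10, 10)")
```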

Damasio’s work traces the way that, as multicellular creatures evolved, not only does this evaluation mechanism persist in every individual cell, but as more complex body forms evolved into separate organs, then nervous systems, then brains, this evaluation became expressed in ever higher-level compound systems. In our case it’s the duty of the limbic system within our brain, which controls what we call ‘emotions’ using many parallel networks of electrical nerve impulses and chemical hormone signals. And Damasio believes that our advanced abilities to remember past events, to predict future events, and to describe things and events through language, are intimately connected into this evaluatory system. There’s no such thing as a neutral memory or word, they’re always tagged with some emotional connotation, whether or not that ever gets expressed in consciousness.

And so at last I arrive at what this has to do with AI, and my scepticism towards its strongest claims. Modern AI is getting extraordinarily good at emulating, even exceeding, our own abilities to remember, to predict and describe, to listen and to speak, to recognise patterns and more. All these are functions of higher consciousness, but that isn’t the same thing as intelligence. AI systems don’t and can’t have any idea what they are for, whereas in our body every individual cell knows what it’s for, what it needs (usually glucose), and when to quit. You have five fingers because while in your mother’s womb, the cells in between them killed themselves (look up ‘apoptosis’) for the sake of your hand.

It’s not so much the question of whether robots could ever feel – which obsesses sci-fi authors – but of whether any AI machine could ever truly reproduce itself. Every cell of a living creature contains its own battery (the mitochondrion), its own blueprint (the DNA) and its own constructor (the ribosomes), it’s an ultimate distributed self-replicating machine honed by 3.5 billion years of evolution. When you look at an AI processor chip under the microscope it may superficially resemble the cellular structure of a plant or animal with millions of transistors for cells, but they don’t contain their own power sources, nor the blueprints to make themselves, and they don’t ‘know’ what they’re for. The chip requires an external wafer fab the size of an aircraft hangar to make it, the robot’s arm requires an external 3D printer to make it. As Damasio shows so brilliantly, the homeostatic drive of cells permeates our entire bodies and minds, and those emotional forces that rationalists reject as ‘irrational’ are just our body trying to look after itself.

INTELLIGENCE ON A CHIP

Dick Pountain/ Idealog 301/ 4th August 2019 12:59:28

I used to write a lot about interesting hardware back in the 1980s. I won a prize for my explanation of HP’s PA-RISC pipeline, and a bottle of fine Scotch from Sir Robin Saxby for a piece on ARM’s object-oriented memory manager. Then along came the 486, the CPU world settled into a rut/groove, and Byte closed. I never could summon the enthusiasm to follow Intel’s cache and multi-core shenanigans.

Last month’s column was about the CogX AI festival in King’s Cross, but confined to software matters: this month I’m talking hardware, which to me looks as exciting as it did during that ’80s RISC revolution. Various hardware vendors explained the AI problems their new designs are intended to solve, which are often about ‘edge computing’. The practical AI revolution that’s going on all around us is less about robots and self-driving cars than about phone apps.

You’ll have noticed how stunningly effective Google Translate has become nowadays, and you may also have Alexa (or one of its rivals) on your table to select your breakfast playlist. Google Translate performs the stupendous amounts of machine learning and computation it needs on remote servers, accessed via The Cloud. On the other hand, stuff like voice or face recognition on your phone is done by the local chipset. Neither running Google Translate on your phone nor running face recognition in the cloud makes any sense, because bandwidth isn’t free and latency is more crucial in some applications than in others. The problem domain has split into two – the centre and the edge of the network.
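The split is really just arithmetic: shipping data to the centre costs a round trip plus transfer time, while running at the edge costs slower (or impossible) local compute. A rough Python sketch, with every figure assumed for illustration rather than measured from any real phone or service:

```python
# Rough arithmetic behind the centre-vs-edge split. All figures are assumed
# round numbers for illustration, not measurements of any real system.

def cloud_ms(payload_kb, uplink_mbps=5.0, server_compute_ms=20, rtt_ms=80):
    transfer_ms = payload_kb * 8 / (uplink_mbps * 1000) * 1000
    return rtt_ms + transfer_ms + server_compute_ms

def edge_ms(local_compute_ms):
    return local_compute_ms

# Face unlock: a smallish payload, but latency has to feel instant.
print("face unlock  cloud:", round(cloud_ms(50)), "ms   local:", edge_ms(30), "ms")

# Translating a paragraph: a tiny payload but a huge model the phone can't
# hold, so the transfer cost is worth paying.
print("translation  cloud:", round(cloud_ms(2)), "ms   local: not feasible")
```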

Deep learning is mostly done using convolutional neural networks that require masses of small arithmetic operations to be done very fast in parallel. GPU chips are currently favoured for this job since, although originally designed for mashing pixels, they’re closer to what’s needed than conventional CPUs. Computational loads for centre and edge AI apps are similar in kind but differ enormously in data volumes and power availability, so it makes sense to design different processors for the two domains. While IBM, Intel and other big boys are working to this end, I heard two smaller outfits presenting innovatory solutions at CogX.
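For the curious, here is what one of those workloads looks like stripped to the bone: a single 3x3 convolution in plain Python/NumPy, with a count of the multiply-accumulates it implies. The kernel and image size are merely illustrative.

```python
import numpy as np

# A single 3x3 convolution, the basic unit of work in a convolutional net:
# nothing but small multiply-accumulate operations, millions of which must
# run per layer -- which is what GPU/IPU-style parallelism is for.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.random.rand(224, 224)                         # a typical input size
edge = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])    # Sobel-style kernel
result = conv2d(image, edge)
macs = result.size * edge.size
print(f"{macs:,} multiply-accumulates for one small kernel on one channel")
# A real network stacks hundreds of such kernels over dozens of layers.
```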

Nigel Toon, CEO of Cambridge-based Graphcore, aims at the centre with the IPU-Pod, a 42U rack system that delivers 16 PetaFLOPs of mixed-precision convolving power. The rack holds 128 of Graphcore’s Colossus GC2 IPUs (Intelligence Processor Units), each delivering 500TFlops and running up to 30,000 independent program threads in parallel in 1.2GB of on-chip memory. This is precisely what’s needed for huge knowledge models, keeping the data as close as possible to the compute power to reduce bus latency – sheer crunch rather than low power is the goal.

Mike Henry, CEO of US outfit Mythic, was instead focussed on edge processors, where low power consumption is absolutely crucial. Mythic has adopted the most radical solution imaginable, analog computing: their IPU chip contains a large analog compute array, local SRAM memory to stream data between the network’s nodes, and a single-instruction multiple-data (SIMD) unit for processing operations the analog array can’t handle. Analog processing is less precise than digital – which is why the computer revolution was digital – but for certain applications that doesn’t matter. Mythic’s IPU memory cells are tunable resistors in which computation happens in place as input voltage turns to output current according to Ohm’s Law, in effect multiplying an input vector by a weight matrix. Chip size and power consumption are greatly reduced by keeping data and compute in the same place, wasting less energy and real estate on A-to-D conversion. (This architecture may be more like the way biological nerves work, too.)
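Here is a numerical caricature of that in-memory analog idea, not Mythic's actual design: just Ohm's and Kirchhoff's laws doing a matrix-vector multiply, with some assumed noise added to show where the precision goes.

```python
import numpy as np

# Numerical caricature of in-memory analog multiply: each tunable resistor's
# conductance G stores a weight, the input vector arrives as voltages V, and
# the current summed on each output line is I = G @ V (Ohm's law does the
# multiplies, Kirchhoff's current law does the adds).

rng = np.random.default_rng(0)
G = rng.uniform(0, 1e-6, size=(4, 8))   # conductances (siemens) = weights
V = rng.uniform(0, 0.5, size=8)         # input voltages = activations

I = G @ V                               # what the analog array computes 'for free'
print("output currents (A):", I)

# The catch the column mentions: analog is noisy. A few percent of assumed
# noise shows why precision-critical work still needs the digital SIMD unit.
noisy = I * (1 + rng.normal(0, 0.03, size=I.shape))
print("with ~3% noise:     ", noisy)
```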

To effectively use the extra intelligence these chips promise we’ll need more transparent user interfaces, and the most exciting presentation I saw at CogX was ‘Neural Interfaces and the Future of Control’ by Thomas Reardon (who once developed Internet Explorer for Microsoft, but has since redeemed himself). Reardon is a pragmatist who understands that requiring cranial surgery to interface to a computer is a bit of a turn-off (we need it like, er, a hole in the head) and his firm Ctrl-Labs has found a less painful way.

Whenever you perform an action your brain sends signals via motor neurons to the requisite muscles, but when you merely think about that action your brain rehearses it by sending both the command and an inhibitory signal. Ctrl-Labs uses a non-invasive electromyographic wristband that captures these motor neuron signals from the outside and feeds them to a deep-learning network – their software then lets you control a computer by merely thinking the mouse or touchscreen actions without actually lifting a finger. Couch-potato-dom just moved up a level.
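To make that pipeline concrete, here is a toy sketch in Python -- emphatically not Ctrl-Labs' system: synthetic multichannel 'wristband' signals, a crude RMS feature per channel, and a nearest-centroid decoder standing in for the deep network.

```python
import numpy as np

# Toy sketch of the idea only: slice a multichannel signal into windows,
# extract a crude feature per channel (RMS amplitude), and map feature
# vectors to intended actions. A real system would use raw signals and a
# deep network rather than this nearest-centroid stand-in.

rng = np.random.default_rng(1)
channels, window = 8, 200                        # 8 electrodes, 200 samples

def rms_features(signal):                        # signal: (channels, window)
    return np.sqrt((signal ** 2).mean(axis=1))

# Pretend calibration data: each 'intent' excites a different channel most.
intents = {"move_left": 0, "move_right": 3, "click": 6}
centroids = {}
for name, hot in intents.items():
    sig = rng.normal(0, 0.1, (channels, window))
    sig[hot] += rng.normal(0, 1.0, window)       # that intent's dominant channel
    centroids[name] = rms_features(sig)

def decode(signal):
    f = rms_features(signal)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

test = rng.normal(0, 0.1, (channels, window)); test[3] += rng.normal(0, 1.0, window)
print(decode(test))                              # -> 'move_right'
```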

Wednesday, 8 January 2020

TIME SERVED

Dick Pountain/ Idealog 300/ 5th July 2019 07:51:08

My 300th column is a milestone that deserves a meaty subject, and luckily the recent CogX 2019 festival provides a perfect one. I reckoned I’d served my time in the purgatory that is the computer show: for 15 years I spent whole weeks at CeBIT in Hannover, checking out new tech for Byte magazine; I’ve done Comdex in Vegas (stayed in Bugsy Siegel’s horrendous Flamingo where you walked through a mile of slots to get to your room); I’ve flown steerage to Taipei to help judge product awards at Computex. I thought I was done, but CogX had two irresistible attractions: it brought together the world’s top AI researchers for three days of serious talk, and it was in King’s Cross, a short walk down the canal from my house, in what’s now called ‘The Knowledge Quarter’ - British Library, Francis Crick Institute, Arts University (Central and St Martins rolled into one brand-new campus), YouTube HQ, with Google’s still under construction.

CogX was the first proper show to be held in the futuristic Coal Drops Yard development and it was big – 500 speakers on 12 stages, some in large geodesic tents. It was also silly expensive at £2000 for three-days-all-events or £575 per day (naturally I had a press pass). Rain poured down on the first day causing one wag to call it ‘Glastonbury for nerds’, but it was packed with standing-room-only at every talk I attended. A strikingly young, trendy and diverse crowd, most I imagine being paid for by red-hot Old Street startups. Smart young people who don’t aspire to be DJs or film stars now seem to aim at AI instead. This made me feel like an historic relic, doffing my cap on the way out, but in a nice way.

Perhaps you suspect, as I did beforehand, that the content would be all hype and bullshit, but you’d be very wrong. It was tough to choose a dozen from the 500. I went mostly for the highly techy or political talks, skipping the marketing and entrepreneurial ones, and the standard of talks, panel discussions and organisation was very impressive. This wasn’t a conference about how Machine Learning (ML) or deep learning work – that’s now sorted. These folk have their supercomputers and ML tools that work; it’s about what they’re doing with them, whether they should be doing it, and who’s going to tell them.

David Ferrucci (formerly IBM Watson, now Elemental Cognition) works on natural language processing and making ML decisions more transparent, using a strategy that combines deep-learning and database search with interactive tuition. Two women buy mint plants: one puts hers on her windowsill where it thrives, the other in a dark room where it doesn't. His system threw out guesses and questions until it understood that plants need light, and that light comes through windows. Second story: two people buy wine, one stores it in the fridge, the other on the windowsill where it spoils. More guesses, more questions, his system remembers what it learned from the mint story, deduces that light is good for plants but bad for wine. To make machines really smart, teach them like kids.

Professor Maja Pantić (Affective and Behavioural Computing at Imperial College, head of the Samsung AI lab in Cambridge) told a nice story about autism and a nasty one about supermarkets. They’ve found that autistic children lack the ability to integrate human facial signals (mouth, eyes, voice etc) and so become overwhelmed and terrified. An AI robotic face can separate these signals into a format the child can cope with. On the other hand, supermarkets can now use facial recognition software to divine the emotional state of shoppers and so change the prices charged to them on the fly. Deep creepiness.

Eric Beinhocker (Oxford Martin School) and César Hidalgo (MIT Media Lab) gave mind-boggling presentations on the way AI is now used to build colossal arrays of virtual environments – rat mazes, economic simulations, war games – on which to train other ML systems, thus exponentially reducing training times and improving accuracy. Stephen Hsu (Michigan State) described how ML is now learning from hundreds of thousands of actual human genomes to identify disease mutations with great accuracy, while Helen O’Neill (Reproductive and Molecular Genetics, UCL) said combining this with CRISPR will permit choosing not merely the gender but many other traits of unborn babies, maybe within five years. A theme that emerged everywhere was, ‘we are already doing unprecedented things that are morally ambiguous, even God-like, but the law and the politicians haven’t a clue. Please regulate us, please tell us what you want done with this stuff’. But CogX contained way too much for one column, so more next month about extraordinary AI hardware developments and what it’s all going to mean.

Link to YouTube videos of my choices: https://www.youtube.com/playlist?list=PLL4ypMaasjt-_K14PNE6YQRQIlmBhHoZE

[Dick Pountain found the food in King’s Cross, especially the coffee, more interesting than Hannover Messe or Vegas, somewhat less so than snake wine in Taipei ]


ENERGY DRINKERS

Dick Pountain/ Idealog 299/ 7th June 2019 09:52:38

If you enjoy programming as much as I do, you're likely to have encountered the scenario I'm about to describe. You're tackling a really knotty problem that involves using novel data types and frequent forays into the manuals: you eventually crack it, then glance at the clock and realise that hours have passed in total concentration, oblivious to all other influences. You feel suddenly thirsty, rise to make a cup of tea and feel slightly weak at the knees, as if you'd just run a mile. That's because using your brain so intensively is, energetically, every bit as demanding as running: it uses up your blood glucose at a ferocious rate. Our brains burn glucose faster than any of our other organs, up to 20% of our total consumption rate.

How come? The brain contains no muscles, doesn't do any heavy lifting or moving: all it does is shift around images, ideas and intentions which surely must be weightless? Not so. I'll admit that a constant refrain of mine in this column has been that the brain isn't a digital computer in the way the more naive AI enthusiasts believe, but that's not to say that it therefore isn't a computing device of a very different (and not fully understood) kind - it most certainly is, and computing consumes energy. Even though a bit would appear to weigh nothing, the switching of silicon gates or salty-water neurons requires energy to perform and is less than 100% efficient. Rather a lot of energy actually if a lot of bits or neurons are involved, and their very tiny nature makes it easy to assemble very, very large numbers of them in a smallish space.
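Some rough numbers make the point: the energy per CMOS switching event is about half the switched capacitance times the supply voltage squared, and multiplying that by a billion gates at a few gigahertz gets you laptop-melting wattage. All the figures below are assumed round numbers, not any particular chip's specs.

```python
# Order-of-magnitude arithmetic for 'bits cost energy'. All figures are
# assumed round numbers, not the specs of any real chip.

C = 1e-15        # ~1 femtofarad of switched capacitance per gate
V = 0.8          # supply voltage (volts)
f = 3e9          # 3 GHz clock
gates = 1e9      # a billion gates
activity = 0.1   # fraction of gates switching on a typical cycle

energy_per_switch = 0.5 * C * V ** 2               # joules per transition
power = energy_per_switch * gates * activity * f   # watts
print(f"{energy_per_switch:.2e} J per switch, ~{power:.0f} W for the chip")
# ~3e-16 J per switch adds up to roughly 100 W -- which is why your laptop
# has a fan and a data centre has a substation.
```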

This was brought home to me yesterday in a most dramatic way via the MIT Technology Review (https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes), which describes recent research into the energy consumption of state-of-the-art Natural Language Processing (NLP) systems, of the sort deployed online and behind gadgets like Alexa. Training a single really large deep-learning system consumes colossal amounts of energy, generating up to five times the CO2 emitted by a car (including fuel burned) over its whole lifetime. How could that be possible? The answer is that it doesn't all happen on the same computer, which would be vaporised in a millisecond. It happens in The Cloud, distributed all across the world in massively-parallel virtual machine arrays working on truly humongous databases, over and over again as the system tweaks and optimises itself.
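The arithmetic behind such estimates is straightforward, even if the inputs below are my own assumed round numbers rather than the study's figures: hardware power, times training hours, times datacentre overhead, times the grid's carbon intensity -- and then multiplied again by every tuning run.

```python
# Sketch of the arithmetic behind such estimates (assumed round numbers, not
# the study's own figures): hardware power x training hours x datacentre
# overhead x grid carbon intensity.

accelerators = 64
watts_each = 300
hours = 24 * 14 * 10           # two weeks of training, repeated over ~10 tuning runs
pue = 1.5                      # datacentre overhead multiplier
kg_co2_per_kwh = 0.4           # a middling grid mix

kwh = accelerators * watts_each * hours * pue / 1000
tonnes = kwh * kg_co2_per_kwh / 1000
print(f"~{kwh:,.0f} kWh, ~{tonnes:.0f} tonnes CO2")
# Hyperparameter search and repeated retraining are what inflate the total:
# the headline figures come from counting every run, not just the final one.
```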

We've already had a small glimpse of this fact through the mining of Bitcoin, where the outfits that profit from this weirdly pathological activity have to balance the millions they mine against equally enormous electricity bills, and must increasingly resort to basing their servers in the Arctic or sinking them into lakes to water-cool them. Yes indeed, computing can consume a lot of energy when you have to do a lot of it, and the deceptive lightness of a graphical display hides this fact from us: even real-time shoot-'em-up games nowadays demand multiple supercomputer-grade GPUs.

It was a hero of mine, Carver Mead, who first made me think about the energetics of computing in his seminal 1980 book 'Introduction to VLSI Systems'. Chapter 9 on the 'Physics of Computational Systems' not only explains, in thermodynamic terms, how logic gates operate as heat engines, but also employs the 2nd Law to uncover the constraints on consumption for any conceivable future computing technology. In particular he demolished the hope of some quantum computing enthusiasts for 'reversible computation' which recovers the energy used: he expected that would use more still.

The slice of total energy usage that goes into running the brains of the eight billion of us on the planet is way less than is used for transport or heating, thanks to three-and-a-half billion years of biological evolution that forged our brain into a remarkably compact computing device. That evolution changed our whole anatomy, from the bulging brain-case to the wide female pelvis needed to deliver it, and it also drove us to agriculture - extracting glucose fast enough to run it reliably forced us to invent cooking and to domesticate grass seeds.
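A quick sanity check on that claim, using assumed round figures (roughly 20 watts per brain, eight billion people, world primary energy use of around 18 terawatts):

```python
# Quick check with assumed round figures: all human brains together draw a
# surprisingly modest amount of power.

watts_per_brain = 20             # roughly 20% of a ~100 W resting metabolism
people = 8e9
brain_power_gw = watts_per_brain * people / 1e9
world_primary_power_gw = 18_000  # world primary energy use is ~18 TW

print(f"all brains: ~{brain_power_gw:.0f} GW")
print(f"share of world energy use: ~{brain_power_gw / world_primary_power_gw:.1%}")
# ~160 GW, under 1% of the total -- evolution's computer is astonishingly
# cheap to run compared with silicon server farms.
```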

Now AI devices are becoming important to our economies, and the Alexa on your table makes that feel feasible, but vast networks of energy-guzzling servers lie behind that little tube. Silicon technology just can't squeeze such power into the space our fat-and-protein brains occupy. Alongside the introduction of the electric car, we're about to learn some unpleasant lessons concerning the limits of our energy generation infrastructure.

[Dick Pountain's favourite energy drink is currently Meantime London Pale Ale]


Saturday, 2 November 2019

THE NET CLOSES

Dick Pountain/ Idealog298/ 3rd May 2019 14:21:19

To say that the shine has worn off Social Media would be something of an understatement: in fact we appear to be on the verge of what sociologist Stan Cohen memorably labelled a ‘moral panic’, one that might end up severely curtailing the freedom of web communication in some countries (though probably not this one). I’ve mentioned before that I review books for another, rather dustier, journal than PC Pro, and I’ve just completed a blockbuster review of three titles that describe the darker side of the internet: Martin Moore’s ‘Democracy Hacked’, Susan Landau’s ‘Listening In’ and Matthew Hindman’s ‘The Internet Trap’. I still feel slightly sick and keep looking over my shoulder more than is healthy. Moore’s book in particular is an eye-opener, and one that I heartily recommend.

One of his central chapters traces the history of online culture all the way from the hippy ideals of the Whole Earth Catalog, through John Perry Barlow, Stallman, ‘information wants to be free’ and Open Source, right up through its discovery by libertarian billionaires – the Koch brothers, Peter Thiel and Robert Mercer – to Cambridge Analytica, Trump and Brexit. The irony is of course that we got the internet we wished for, free and unfettered, but that didn’t make it a force for good (or even for middling).

In the bad old days our information was controlled by large corporations, from News International to the BBC, who imposed their own values via the professional journalists they employed to write, speak and film it. In short, they policed the information stream for ‘our own good’. Nowadays we’re free as birds to communicate anything we want to whoever we want, thanks to the marvellous WWW. On YouTube I can indulge my cravings, as confessed last month, for Japanese street-food and rusty old tools, and can post my own avant-garde computer-generated music for no-one to listen to. I can swap pithy witticisms with my hundreds of ‘friends’ on Facebook, listen to almost any music in the world instantly on Spotify, and buy all my electronic bits on Amazon instead of schlepping all the way to Maplin (which it helped put out of business). Of course YouTube happens to be owned by Google, a bigger and more profitable corporation than any of those old ones, and unlike them neither it nor Facebook polices anything very much, enabling all the world’s most dangerous nutjobs to get heard on the same terms as sensible folk.

Matthew Hindman’s book is more wonkish than Moore’s, and hence may possibly interest readers of this column more. He performs experiments on large web traffic datasets, like the results of the Netflix Prize Competition, which reinforce the idea that though the internet may open up production and dissemination of information to everyone, it inexorably siphons all the revenue into a handful of new monopolies every bit as powerful as the old. This is due not only to ‘network effects’, which he believes have been overemphasised, but mostly to ‘stickiness’ – the tendency of users to become loyal to one website thanks to the mental cost of switching. His experiments quantify just how hard and expensive stickiness is to achieve, so that only very large companies, which admittedly got that big through network effects, can afford it through better design and faster response than competitors. Their server farms are every bit as huge and expensive as the factories of previous industrial revolutions.

A monopoly on stickiness inevitably leads to attention-based business models, where user information is harvested to target programmatic adverts. Everything we buy, read or watch provides information that is sold to advertisers, and Google and Facebook between them collect 70% of this colossal revenue. Hindman worries about the effect on news gathering and dissemination: local papers could once attract sufficient ad revenue, thanks to their targeted readerships, but the Net lets digital giants grab practically all of it and push the locals out of business. The digital news sites that are replacing them, like BuzzFeed and Vice, are financed by investors and major brand advertisers – lacking a tradition of separation between editorial and business, their tiny in-house staffs generate ‘native’ ads that look like editorial (and often go viral), while deleting or redacting anything that might offend advertisers.

Susan Landau’s book is an equally excellent overview of hacking, encryption and surveillance issues that I won’t need to explain in such detail to readers of Davey Winder’s Pro column (she was an expert witness for Apple over the FBI’s request to decrypt those terrorists’ iPhone). Guy Debord once remarked that “Formerly one only conspired against an established order. Today, conspiring in its favor is a new and flourishing profession.” These authors agree, and urge us to curb the power of the corporations while the choice is still ours to make.

[If Dick Pountain had a penny for every penny he’d made from online content, he wouldn’t have a penny]




