Tuesday 23 September 2014

SOMETHING WENT WRONG

Dick Pountain/ Idealog 237/ 07 April 2014 11:48

I'm writing this column on a new computer on 8th April, which may or may not come to be known as Save Microsoft's Ass Day. For the benefit of any members of exotic tribes, like OSX or Linux users, it's the day on which the major update to Windows 8.1 is being released, the fate of which might determine the fate of Microsoft. I don't have the update myself, and will be waiting to see whether it bricks everyone's PCs before I indulge (in itself a testament to the esteem in which MS is currently held).

But, I hear you thinking, didn't he say just a few columns ago that he may never buy another Windows PC? He did indeed, but then he succumbed to an unforgivable fit of nostalgia and misplaced loyalty and did precisely that. No sooner had I written that previous column than the hard disk on my trusty 7-year-old Vaio began to show symptoms of approaching retirement, and it became wise to put my money where my mouth had been. I shopped around, was sorely tempted to "go commando" with an Asus Transformer, and devilishly tempted by the sheer beauty of Google's hi-def Chromebook, but in the end I stuck with Windows for a combination of pragmatic and sentimental reasons. I'm already beginning to regret it.

The pragmatic reason was that I don't completely trust the cloud. I'm happy enough to exploit it and have done business from foreign shores using only my Nexus 7 and an internet connection, but I want a local copy of all my significant data too. Android can sort of hack that, but it's not what it was designed for (the fact that a file manager is a third-party app gives you the clue). The sentimental reason was the 20 years I've spent fiddling with, tweaking, boosting, wrestling, writing software for, swearing at and rescuing Windows. Hate it, certainly, but it was also great mental exercise on a par with playing chess or planning a guerilla war. So I caved, bought a Lenovo Yoga 2 running Windows 8 and plunged ahead into the quagmire. I resolved to upgrade immediately to Windows 8.1 before getting too settled (ha!) so went to the Store, only to find no upgrade there. A traipse through the forums revealed that it won't be visible on some PCs until you manually apply two other upgrades called K123blah and K123blech. So far so hostile, in fact downright unprofessional, by both MS and Lenovo.

With 8.1 in place I started to investigate the Metrosexual interface and found I didn't mind it so much as many other commentators, since I'm now totally attuned to Android and touch. Tiles make quite a good substitute for the Start Menu I never used, having always preferred desktop icons. Things I do most - mail, calendar, writing, reading, Googling and note-taking - all fit onto the first Start screen, always available via the Windows key as I work in desktop view. But irritation set in once I discovered there aren't any official "Modern Interface" versions of most of the programs I use (like Gmail, Blogger, Flickr, YouTube, iPlayer). You can fiddle this by viewing them in a browser window and making tiles from their URLs, if you don't mind using Internet Explorer, which I do mind. Using Firefox, as I used to, you can't (and in any case it runs too slowly to be usable). Using Chrome, as I now do, it's hidden under a menu that another traipse through the forums revealed. Then one-by-one, those tiled Win8 apps I could find started to break down. It happened to the Facebook app, to a calendar app I paid for (admittedly pence) which no longer logs into Google, and to an icon editor that no longer saves its files. What's really nice though is that to avoid giving anxiety and offence (a cardinal sin in the modern world) software vendors are adopting a new, softer and content-free error message: "Something Went Wrong". No shit Sherlock.

As I edit Jon Honeyball's column each month I quail before his volcanic wrath against the Windows 8 ecosystem, but I now realise Jon has actually been pretty moderate: the quality of apps in the Windows Store is so abysmal it actually makes me nervous, like wandering down the wrong alley in an unfamiliar city and seeing people lurk in dark doorways. Win8 apps now cause me the same unease I feel whenever forced to use a Macintosh, a loss of control and bewilderment at its sheer opacity. Fortunately the Google Apps button in Chrome lets me turn my cute little Yoga into something resembling the Chromebook I almost bought! Windows 8.1 update will need to be *really, really* something...

Sunday 10 August 2014

THOSE PESKY ATOMS

Dick Pountain/PC Pro/Idealog 236  06/03/2014

Foot-to-ground contact is pretty important to us motorcyclists so we get picky about our boots. I favour an Australian stockman's style that can pass for a fashionable Chelsea Boot in polite society. Having worn out my second pair of R.M.Williams after 15 years, yesterday I went shopping for new ones. I checked Russell & Bromley and John Lewis on the web, then set off to town to try some on. Why didn't I buy online? Because I need to try boots on, and you can neither upload your feet nor download boots over the web, which still only handles bits, not pesky atoms. I'd never consider buying boots from Amazon, though I did go there after my purchase to snivel quietly about the £8 I could have saved...

Russell B's lovely boots were too narrow for my broad feet and John Lewis didn't have the ones advertised on their website, so I ended up buying Blundstones (which are fab and half the price of R.M.Williams) from the Natural Shoe Store. Later that day I realised there's a moral to this gripping tale, as I was reading John Lewis's announcement of its massive new IT project: an Oracle-based ERP (Enterprise Resource Planning) system that will completely integrate the firm's online and physical stores, including all the Waitrose grocers. No cost was quoted for this four-year project - scheduled to run in 2017 - but it will certainly be both expensive and risky.

Manufacturers have only a slightly better record than public-sector institutions when it comes to screwing up big IT: in recent years major corporations from Avon to Hershey and Levi's Jeans have lost fortunes botching or cancelling ERP projects. If anyone can pull it off it might be John Lewis, whose overall competence is renowned. It pioneered Click & Collect, where you choose a product on the website and then collect it from your nearest store, though all its competitors now do the same. But C&C is only one permutation people use to bridge the bits/atoms gap. Some folk research products in the bricks-and-mortar store, then go home and order from the website. Some fear online shopping and prefer to order by phone from a human. As for browsing the site, they might use a mobile, tablet or a PC. Hence the new buzzword is "omni-channel", and it matters enormously because all of these modes of e-commerce will fail - like my boot purchase - if stock in stores isn't accurately reflected on the website. That demands a whole new level of integration of stock-control and delivery systems, which for a grocery operation like Waitrose that delivers perishable foodstuffs will be ferociously hard. The new project is ambitious indeed.

This is clearly the new frontline of online retailing. There are more and more items like TV sets, clothes, shoes, high-end acoustic musical instruments, possibly furniture and fabrics, that people won't be satisfied to buy from a purely online outlet like Amazon but need to see and touch before choosing. Admittedly a lot of people go to bricks-and-mortar stores to browse, then go home and buy from Amazon, but the stores are getting wise to this. I imagine that John Lewis's new system, assuming it works, is intended to make it so easy to buy-as-you-handle that you won't want Amazon. Meanwhile Amazon and Google are both leaking weirdly futuristic plans for delivering atoms rather than bits independently of the Post Office or courier service. Amazon's vision involves flocks of quadcopter drones, delivering your purchases down the chimney like the storks of legend. Google, with its feet more firmly on the ground, buys up robotics firms: I particularly like their galloping headless-heifer robot, which would make quite a stir as it rumbled round the corner into our street towing a sofa (especially if chased by a squawking flock of quadcopters... )

Omens are gathering that the power of Silicon Valley giants has peaked, just as the oil, coal and railway barons' power did in the 1900s: even the US Right is getting chippy about the amount of tax they avoid (which means taking more tax from civilians); among Democrats there are populist stirrings about their almost-jobless business model and exploitation of interns; and the San Francisco bus protests are seriously tarnishing their public image. And all that Kurzweilian Transhumanist/Matrix/Singularity nonsense looks more and more like a religious cult, a post-modern reinvention of a Protestant Millennium. We might spend a lot of time watching movies and listening to music in bit-land but we're never going to live there full-time because we're made of atoms, we eat atoms, breathe atoms and wear atoms. And bricks-and-mortar shops have a head start when it comes to distributing atoms in bulk: just watch them start the fight back.

WHAT'S A MOOC?

Dick Pountain/PC Pro/Idealog 235  05/02/2014

Fans of Scorsese's movie "Mean Streets" must certainly remember the pool-hall scene where Jimmy is called a "mook" and responds by asking what that means (we never quite find out). I was irresistibly reminded of this scene as I read an excellent recent column by John Lanchester in the London Review of Books (http://www.lrb.co.uk/v35/n22/john-lanchester/short-cuts) in which he discusses the MOOC (Massive Open Online Course), a type of distance learning increasingly being offered by US universities. In Lanchester's case what attracted him to a MOOC was a Harvard course on Food Science given by Ferran Adrià, famous chef of the now defunct Spanish super-restaurant El Bulli. Not many years ago gaining admission to Harvard lectures would have cost even more than dinner at El Bulli, but he was able to sign up for SPU27 and take the course on his iPad for free.

SPU27 forms part of a joint project for online learning between Stanford, Harvard and MIT called EdX. (Stanford pioneered the MOOC several years ago via iTunes, though of course our own Open University was a far earlier pioneer using the ancient medium of analog terrestrial television). The idea of such courses is that they can "flip the classroom", so that instead of attending lectures students view them online and do their coursework at home, visiting the campus only very occasionally to be tested and discuss difficulties. Advantages for the university are substantial: it can save on the cost of maintaining physical lecture halls and presumably stretch lecturers' salaries over far more students than can be fitted into a theatre. Lanchester foresees MOOCs becoming ever more important as university admission fees escalate while prospective students' earning-power falls, but he also foresees them putting some universities out of business altogether. For a MOOC, as for any other online content provider, attracting custom will depend upon effective viral marketing and hiring star performers like Ferran Adrià or Bruce Sterling.

As for the quality of MOOC tuition, Lanchester found SPU27 harder and more rigorous than he'd expected, though he does acknowledge a loss of personal interaction among students and lecturers. But the fact that MOOCs are tolerable at all is testament, as if any more were needed, to a computer/telecoms revolution that's now entered the post-PC phase. Many MOOC students will  probably prefer to watch their lectures on a large-screen smart TV at home, on a tablet in the park or on the bus to their day-job. Last year in a column about Alan Kay and his Dynabook (Idealog 223) I felt obliged to point out that increasing monopolisation of copyrighted media content by big corporations was becoming an obstacle to its fullest implementation. Well, MOOCs offer one more source of free high-quality educational content, presumably subsidised by those high fees paid by physically-attending students. Free tuition could even revive that old idea of education for its own sake, rather than just for a job.

Market competition is working pretty well to reduce the cost and increase capabilities of the hardware you need. My first-generation LCD TV died the other day and I found the cheapest replacement was a 29" LED model from LG. Its picture quality is a revelation, in both sharpness and colour fidelity, but I was a bit sceptical about its smartness. I needn't have worried because it immediately found my home Wi-Fi and I was watching YouTube and reading Gmail within minutes. It finds my laptop too and plays content from its hard disk. Like Mr Honeyball I find the on-screen keyboard deeply depressing, but I've found some solace through an LG TV Remote Android app that lets me enter text into most forms and search-boxes via gesture typing, Bluetooth keyboard or even speech. It doesn't work with Google Docs though, as TV and tablet keyboards get hopelessly tangled. I've added my own 500 gig external hard drive to the LG's USB port for rewind and programme recording, and paired TV and Nexus to stream YouTube content directly without need for a Chromecast. And it came with built-in Netflix, Lovefilm and iPlayer, but irritatingly not 4oD.

Despite its weaknesses I can still easily imagine watching lectures on smart TV and answering multiple-choice test questions via the tablet. Data formats are no longer really a problem as I can shovel PDFs, JPEGs, MP3s and MP4s with ease between Windows, Android and TV. Google's new free Quickoffice handles the Microsoft Office formats pretty well (and I keep DocsToGo as backup for anything it can't). It feels as though a chilling wind of change is blowing right through the Stanford campus all the way to Redmond, and I seriously wonder whether I'll ever buy another Windows PC. I hope that doesn't make me a mook (whatever that means).

Wednesday 16 July 2014

ALGORITHMIC BEAUTY

Dick Pountain/ Idealog 234/ 7th Jan 2014

The day I was due to write this column I had the good fortune to be visiting the Caribbean island of Bequia, and very nice it was too, with sun, sea, sailing boats, flowers and tropical fruit. However so advanced is my pathological nerd-dom that the subject it inspires me to write about is fractal geometry, rather than gastronomy, fishing or the miraculous qualities of modern sailing vessels.

Actually to a proper nerd this connection is pretty straightforward. We sailed to Bequia across a sea covered in fractal waves and spattered with fractal foam and spray, under a sky full of fractal clouds. And the land is covered by a profusion of the most fractal plants imaginable, from palms to frangipanis to ferns and back again. In fact it was while inspecting some palm fronds on the beach that I was suddenly reminded of a book that impressed me very much when it came out a quarter of a century ago, called "The Algorithmic Beauty of Plants" by Przemyslaw Prusinkiewicz and Aristid Lindenmayer (Springer 1990). The authors, with expertise in mathematics, biology and computer graphics, set out to model the forms found in real plants using a system of fractal geometry called L-systems, which mimics the development of a plant. It operates with a smallish number of parameters that can be varied to produce a vast range of startlingly realistic plant forms - stems, leaves, flowers and all.
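
To give a flavour of how little machinery is involved, here's a minimal sketch in Python (my own illustration, not code from the book, using a well-known fractal-plant rule set of the kind it describes): every symbol in a string is rewritten in parallel each generation, and the final string is read as turtle-graphics drawing commands.

# Minimal L-system sketch: rewrite every symbol in parallel each generation,
# then interpret the result as turtle-graphics commands.
RULES = {"X": "F+[[X]-X]-F[-FX]+X",   # a classic fractal-plant rule set
         "F": "FF"}

def derive(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

plant = derive("X", RULES, 4)
print(len(plant), "symbols:", plant[:60], "...")
# Interpretation: F = draw forward, + / - = turn by ~25 degrees,
# [ = save position and heading (start a branch), ] = restore it (end the branch).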

Their key insight was that the form a plant exhibits is not a static fact but something dynamic, generated during the process of growth, and in this respect they brought the brilliant work of D'Arcy Thompson into the computer age. That insight can be summed up by saying that each form implicitly contains its own history. Of course Prusinkiewicz and Lindenmayer were only simulating such a history inside a computer, but the results are so realistic one can't help wondering whether they provide a clue to the way Nature itself does it.

Clearly Nature doesn't type in the algorithms that Prusinkiewicz and Lindenmayer describe in C++ code, nor even in pseudo-Pascal. There is no need to postulate a nerdish Intelligent Designer with beard, keyboard and Tux-the-penguin teeshirt. All that Nature has available to work with is chemistry, and dynamic chemistry at that. Such chemistry is now pretty well understood thanks to the likes of Ilya Prigogine, who explained how factors like gradients of concentration or temperature can cause a chemical system to oscillate regularly in time, like a clock. As a plant stem is sprouting, biochemical processes inside each of its cells cause the levels of certain growth hormones to vary cyclically over *time*, with the result that leaves pop up in a *spatial* sequence along its length. Put another way, Nature is its own hybrid digital/analog computation system, in which the rates of such chemical cycles, following various power laws, cause behaviour that somewhat resembles Prusinkiewicz and Lindenmayer's algorithms.
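
A toy simulation makes the time-into-space point concrete (my own sketch of the idea, with invented numbers, not a real biochemical model): let a hormone level oscillate in time while the stem tip advances steadily, initiate a leaf at each peak, and a temporal rhythm becomes a regular spatial pattern along the stem.

import math

def grow(hours, growth_rate=0.5, period=24.0):
    """Positions (mm along the stem) at which leaves are initiated."""
    leaves, tip, prev, rising = [], 0.0, -1.0, True
    for t in range(hours):
        tip += growth_rate                          # the tip advances steadily
        level = math.sin(2 * math.pi * t / period)  # hormone level cycles in time
        if level < prev and rising:                 # just passed a peak: make a leaf
            leaves.append(round(tip, 1))
        rising = level >= prev
        prev = level
    return leaves

print(grow(120))   # a ~24-hour rhythm becomes a leaf roughly every 12mm of stem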

And the way plants and animals vary those growth parameters is only very loosely determined by the quaternary digital code of their DNA. A class of genes called "homeobox", present in all multicellular lifeforms, determines only the broadest divisions within the creature's form, like its number of limbs or body segments - all the finer details get determined by these semi-autonomous chemical cycles and various epigenetic factors.

One of the stronger arguments the Creationist and Intelligent Design brigades can muster is that the fierce complication of the way nature looks and operates is too great to all be encoded statically in the finite amount of DNA. But in fact it doesn't have to be all so encoded. Indeed the whole metaphor of a designer working from a total blueprint misses the way that Nature actually works. Nature and evolution are dynamic, non-deterministic systems in which stuff continually happens and affects other stuff, and this couldn't possibly be captured in any static plan. The Deist notion of a God who just pressed the Start button and then withdrew forever is far closer to the truth than any active designer.

Nature's "blueprint" is more like a thick wad of blueprints for tiny clockwork protein machines that, when set to work, rush around interacting with one another and with their external environment, and the end result is all this marvellous beauty and diversity that we see. If you can get your head around the idea of (never the details of) such fantastically complex chains of causality, they are actually far more marvellous than any hypothesised Intelligent Designer. In fact having to invent such a creator, while useful and necessary during the infancy of our species, has nowadays become merely a lazy copout that insults our human ability to understand the world we live in.

Sunday 15 June 2014

EATING CROW

Dick Pountain/PC Pro/Idealog 233 08/12/2013

I'm an adventurous cook and eater, deeply into all kinds of exotic plants, game and offal, but even so crow is something I don't care to eat much of. But eat crow I now must, at least metaphorically, because I need to admit in public that Bill Gates is the best, most serious and responsible of all the digital moguls. That was already becoming pretty obvious from his choice of occupation upon leaving the helm of Microsoft. Promoting the development of vaccines is an unglamorous and pragmatic way to really improve the lot of humanity, far removed from the flamboyant political rhetoric of so many liberal pop and movie stars. Pooling resources with Warren Buffett of Omaha rather than Hollywood or Wall Street pointed in the same direction, and he's often on the opposite side from the other Silicon Valley moguls when it comes to taxation.

But what has finally prompted me to this corvine repast is Gates's public advocacy of Canadian energy expert Vaclav Smil. A recent Wired magazine interview with Smil (http://www.wired.com/wiredscience/2013/11/vaclav-smil-wired/) opens by quoting Gates as saying: “There is no author whose books I look forward to more than Vaclav Smil”. So, not trendy sci-fi authors or self-help gurus, but someone whose books better promote understanding of the most critical problems we face than those of anyone else I know.

To be fair to myself, though I've often been critical of Microsoft's design ideas, marketing methods and quality control, I've never been a hater of Gates the person (unlike those commentators who paint him as a kind of Antichrist). I met him a few times in the early days and could talk tech to him - he's a man who's written code - and it's always been clear to me that however adept he became at building a huge business corporation, there lurked within him the heart of a true nerd. And you really need to be something of a nerd to appreciate Smil's works because he's relentlessly scientific and unromantic, interested only in trying to find out what's actually happening, in estimating actual risks rather than preaching or scaremongering.

I first encountered Smil's work in 2008 when asked to review his magisterial "Global Catastrophes and Trends: The Next Fifty Years" for The Political Quarterly (http://dickpountain-politicalquarterly.blogspot.com/2012/07/global-catastrophes-and-trends-next.html). I realised immediately that here was someone different. Smil is Professor of Environment at the University of Manitoba, but he eludes all the normal cliched labels: neither a green nor a technology booster, neither a capitalist nor an anti-capitalist, he's a *systems* man, identifying and analysing the various systems via which we strive to survive. He explains how much we know and don't know about their operation, what works and what doesn't. And he quantifies everything, especially risk (by comparing with the baseline rate of "general mortality", which for us Westerners is around 0.000001 deaths per person-exposure-hour).
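
To put that unit in perspective, here's a quick back-of-envelope check (mine, not Smil's) of what a baseline of one-in-a-million deaths per person-exposure-hour implies over a year:

baseline_per_hour = 1e-6        # general mortality, roughly, for Westerners
hours_per_year = 365.25 * 24    # about 8,766 exposure-hours in a year
print(f"{baseline_per_hour * hours_per_year:.2%} per year")
# ~0.88% -- close to the typical Western all-cause annual death rate, which is
# the yardstick against which Smil sets every other risk he discusses.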

I had to warn potential readers that they won't follow Smil's better arguments unless they're comfortable with log/log scale graphs, but by persevering they'd learn that death in hospital from preventable medical error is a greater risk than smoking, terrorism or car crash, and that young black citizens of Philadelphia could *reduce* their chance of death from gunshot by joining the army. He rarely offers concrete policy proposals, just more rational ways to make decisions: "There is so much we do not know, and pretending otherwise is not going to make our choices clearer or easier... we repeatedly spend enormous resources in the pursuit of uncertain (even dubious) causes and are repeatedly unprepared for real threats and unexpected events". He doesn't moralise or preach and is sceptical of those who do: he argues only from science.

In the Wired interview I mentioned above Smil argues that the demise of manufacturing in the UK and USA will doom us not only intellectually but creatively too, because innovation is tied to the process of making things. When asked whether IT jobs can replace the lost manufacturing jobs he replies somewhat scornfully "No, of course not. These are totally fungible jobs. You could hire people in Russia or Malaysia—and that’s what companies are doing." He admires Germany and Switzerland for maintaining strong manufacturing sectors and apprenticeship programs: "because you started young and learned from the older people, your products can’t be matched in quality. This is where it all starts". He points out that Apple commands such huge profit margins that it could manufacture in the USA without destroying its business model, if only it dared stand up to Wall Street: "Apple! Boy, what a story. No taxes paid, everything made abroad—yet everyone worships them." He doesn't say whether or not he admires Microsoft, but we know that Bill Gates certainly admires him.

A FUTURE RE-IMAGINED

Dick Pountain/PC Pro/Idealog 232  06/11/2013

Once upon a time, in the '80s and early '90s, I used to write highly technical columns about future computer technologies, both hardware and software - stuff like asynchronous or transport-triggered processor architectures, object-oriented memory managers, parallel processing algorithms and much more. I don't do that stuff much nowadays, but the reason is not that my interest has waned but rather that the total triumph of Intel and Microsoft condemned many of those esoteric research avenues to become dead ends. And my aging brain rebels when asked to study the detailed cache architecture of Intel's next CPU generation. That doesn't mean I've lost interest entirely, merely that I can now afford to wait for breakthroughs that might just change the whole game, which don't happen very often (and don't always deliver). Over the last 10 years I've covered just three such technologies, namely spintronics, diamond-film quantum dots and graphenes. It still gives me a bit of a thrill when a new one arrives, as it just has with the "memristor".

If you've heard of the concept it will almost certainly be thanks to Hewlett Packard's occasional announcements that it's working on memristor-based storage devices, and hopes to have 100-terabyte drives available in around five years. But memristors, if they can be made to work economically - which is not yet certain - promise more than storage. If they are able to function like transistors too, they could render possible the first wholly-integrated computer architectures in which CPU, local memory and mass storage are all built from very similar basic units.

So what exactly is a memristor? As its name suggests, it's an electronic component that combines the attributes of a resistor and a memory. What that means in HP's application is a kind of resistive RAM, in which the memory cells are resistors that retain a memory of the current that last flowed through them. Pass a current through the cell one way and its resistance increases, pass a current the opposite way and resistance decreases, but crucially the cell *preserves its last state* and so acts as a non-volatile memory. You can read 0s and 1s by measuring the resistance of cells. HP's memristor cells are implemented by two sets of parallel wires, one platinum and one titanium, at right angles and separated by twin layers of titanium dioxide. One layer is pure oxide and a good insulator, while the other is lightly doped to deplete it of some oxygen atoms. Where they cross, a current passed through the junction causes oxygen "holes" to migrate into the undoped layer, reducing the resistance of both layers: when the current is reversed they migrate back again. Two things to note about this scheme: the substrate doesn't need to be silicon, and the cells are so simple they can be made very small indeed, giving huge packing densities.
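
Here's a toy sketch of the behaviour just described (my own illustration with made-up numbers, not HP's device physics): resistance drifts down when charge is pushed through one way, up when it's pushed the other, sticks wherever it was left, and the stored bit is recovered by thresholding the resistance.

class MemristorCell:
    R_ON, R_OFF = 1_000.0, 100_000.0     # assumed low/high resistance bounds (ohms)

    def __init__(self):
        self.r = self.R_OFF              # start in the high-resistance state

    def write(self, charge):
        """Positive charge lowers resistance, negative raises it, within bounds."""
        self.r = max(self.R_ON, min(self.R_OFF, self.r - charge * 900.0))

    def read(self):
        """Non-destructive read: threshold the resistance to recover the bit."""
        return 1 if self.r < (self.R_ON + self.R_OFF) / 2 else 0

cell = MemristorCell()
print(cell.read())    # 0: high resistance
cell.write(+100.0)    # push charge through one way...
print(cell.read())    # 1: low resistance, retained with no power applied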

HP hopes to make 3nm memristors that switch in a nanosecond, as fast as DRAM and denser than flash memory, which they could replace. All very good, but the memristor concept goes deeper. The Chinese-American non-linear circuit theorist Leon O. Chua has suggested that the memristor is an expected fourth fundamental electronic device - the better-known ones being the resistor, capacitor and inductor - that links magnetic flux to electric charge. It should be able to do more than just make memory cells. Other researchers have shown that memristors can be made using different physical effects, like magnetic spin or polarisation, and from different materials like organic polymers. It's also been suggested that memristor cells can be combined into "crossbar latches", a novel kind of logic gate that can function like a transistor in a processor architecture. Most suggestive of all is that memristors remember in a way similar to the reinforcement of synaptic connections in the animal brain, and there are teams working on using them in neural learning networks.

Bundling all such speculations together sets my imagination running riot. Throw in a few other leading-edge technologies like graphenes with their miraculous conductivity and photosensitivity, micropipelines for asynchronous operation, and analog resistive networks as used by Carver Mead for his chip that emulates the human retina. Now imagine future flexible plastic materials that can collect solar power and store it in their own integral hypercapacitors, that contain petabytes of memory integrated with distributed processing elements, that can see using grids of nano-lenses that mimic the insect compound eye, can move using grids of nano-actuators and can learn using integrated neural networks. Sci-fi authors have foreseen such materials in their imagined futures for many, many years, and though we may be nowhere near achieving them, we're beginning to see dimly where the route might start. Perhaps memristors are a crucial step onto that route.

Saturday 22 March 2014

LOOKING GOOD!

Dick Pountain/PC Pro/Idealog 231 - 06/10/2013

The reception of Apple's iPhone 5c and 5s models nudges me to revisit a previous theme of this column: the almost obsessive-compulsive way that looks have come to dominate the world of electronic gadgets. Most of the Apple faithful were aghast at the horrid plasticness of the iPhone 5c: totemic phones, just like Gollum's "precious", need to be shiny and metallic. Let's forget that quality plastic is a far more practical material for phones, less dentable and scratchable, as demonstrated by the fact that virtually all iPhone owners immediately cover up their precious with a plastic case.

I'm not a phone fan anyway, but I am a keen, Flickr-bothering photographer, and I can see the same process at work in the field of cameras. Pocketable compact cameras have improved astonishingly over recent years, with 20x or even 30x zoom lenses and 18+ megapixel sensors, so that, unless you're a professional sports or wildlife photographer, and provided you mostly put your pics online and don't make large paper prints, they can pretty well replace an entry-level DSLR. Problem is they don't look good, or rather they don't make *you* look good, that is, like a professional. The camera manufacturers' marketing departments soon spotted this vulnerability, and a new breed of retro-styled camera is now flooding onto the market.

Made to look as far as possible like 1930s Leicas, some of these cameras feature interchangeable lenses while others have full-frame sensors and fixed focal-length (that is, non-zoom) lenses: what they tend to share are price tags that push up toward £1000, at a time when seriously capable compacts cost below £250. The founding moment of this trend was probably Olympus's 2009 ad campaign for its retro-styled E-P1 model, under the slogan "Don't be a tourist". There you have the rationale stated baldly: this is no longer about how convenient or capable the device is, but how it makes you look to other people. Gadgets as badges of status, symbols that can distinguish you from the rest of the crowd.

This is becoming a matter of life and death for the electronics industry. Camera manufacturers were facing a dramatic sales drop for compact, point-and-shoot cameras, as most young people prefer to use their ever-more-capable mobile phones as cameras. There's a flourishing industry in add-on lenses and imaging apps for the iPhone, while Instagram completely displaces Flickr for the iPhone generation, and none of this generates any revenue for Canon or Nikon. Commanding a premium price for these retro cameras that make people look like professionals could become a life-saver.

It goes without saying that this domination of the aesthetic has been the rule in other consumer sectors for many years, in the case of the garment industry for centuries. Luxury cars nowadays are all capable of broadly similar performance, so high that it can't possibly be unleashed on public roads, only on Top Gear. Hence they are chosen mostly on looks, the prestige of their brandname and a reassuringly huge price tag. And I won't even try to analyse the women's handbag sector, where some devices can cost even more than a Leica M9 (and they don't even take pictures).

I've always been a modernist in the field of design, a believer in Louis Sullivan's dictum that "form ever follows function", a lover of everything spare, elegant and mass-produced. I ride a Vespa PX125, I play a Fender Stratocaster and I own a Parker 51 that I pick up around once every five years to discover that the ink has dried up. The electronics industry is of course the ultimate expression of such modernism: the economics of the silicon foundry depend on huge production runs, while VLSI chip layouts are beautiful examples of spare necessity, with every wire routed rationally. There's therefore a sort of irony, but also an inevitability, about the way that laptops, tablets, phones, cameras and other devices built using such chips are becoming subject to fashion in much the same way as clothes.

The irony exists in the fact that it's computer-aided design and 3D-printing that will make it increasingly possible for us to have various different cosmetic outer shells, in small production runs, masking the same set of internal silicon "organs". Love it or loathe it, the iPhone 5c with its handbag-matching colours is a step along that road. But were I working for Microsoft or Dell or Sony, which thank God I'm not, my every waking thought right now would be devoted to discovering how to make laptops and ultrabooks look *less* like tablets or coloured sweeties and more like things that a professional might take into the jungle or a war zone (HINT: just painting the case khaki won't do it...)

Wednesday 5 March 2014

EVERYONE'S AN EXPERT

Dick Pountain/PC Pro/Idealog 230 06/09/2013

Richard Dawkins' meme theory has always interested me as a metaphor, though I only partly accept it. Ideas do propagate from mind to mind and certainly they can mutate during that passage, certainly some do survive while others perish. It's the nature of the selection process I'm not sure about. Simple ideas like "the world is round" can be related directly to physical reality but big ideas like "Christianity" or "Islam" can't, and what's more they're too big, various and vaguely defined to even be treated as coherent entities. (Even so, the horrible fate of his concept - appropriated by giggling netizens to describe pictures of talking cats - seems a bit harsh). 

There is however a class of very simple meme that fascinates me and that is the cliche or verbal tic: a phrase like "back in the day" that appears from nowhere and enters ubiquitous usage for a few years before fading away again. It seems to me that such phrases can be analysed to reveal useful truths about peoples' attitudes to the world. My current favourite tic is "it's not  perfect". If I had a quid for every time I've read this phrase in reviews and online comments over the last two years, I'd perhaps have the deposit on a small hybrid automobile.

What does "it's not perfect" tell us about the way people are thinking? Two things really. Firstly they may believe that perfection exists and is worth seeking out (which is most likely untrue). Secondly, and far more importantly, that they not only know the object under review has flaws, but they're desperate to let everyone *know* that they know. That's because they will be judged and mercilessly ridiculed if they fail to mention a single flaw. They may be accused of being naive, sloppy, biassed, even a "fanboi" or a "shill", by the online masses, who are all experts and feel they could have written the review better themselves.   

I'm not just talking about reviews of consumer electronic items here. One of my more pathetic weaknesses is inhabiting the Comment Is Free forums on the Guardian website, mostly those on political topics (where I pick my way gingerly through the ferocious troll fights) but also those on science. For example the other day the paper carried an account of new research on a possible connection between the increasing incidence of Alzheimer's Disease and improved public hygiene (http://www.theguardian.com/society/2013/sep/04/alzheimers-disease-link-hygiene), all to do with reduced immune efficiency.

I don't want to discuss the quality of that research, but rather the quality of the debate in the attached CiF forum. A large proportion of the commenters had their own theory or critique of this research, 99% of which I'm guessing were based on no actual laboratory or library research, nor any medical qualifications, nor even having read the original paper. Typical first lines ran like: "Actually Alzheimers is linked to medical stupidity"; "The scientists might be onto something here"; "Yet another poorly thought out piece of research"; "I don't buy this theory at all".

Actually this theory isn't for sale: it's being put forward for peer review by people qualified in the field (after a decade or more of scientific training) but once it makes it onto the web, courtesy of the Guardian's Society section, it gets exposed to the bracing winds of our new anti-elitist, hyper-democratic intellectual virtual world. Or to mix metaphors more thoroughly, it's all becoming one big intellectual rugby game in which everyone feels entitled to run with the ball for a while, regardless of stature or agility.

The easy availability of information on any subject whatever - via Google, Wikipedia and the rest - fosters this delusion that we're all now entitled to criticise anything, anywhere and have our critiques (noun, not verb, please) treated as equally important. We second-guess the designers of computers, architects of buildings, medical researchers, based on no real evidence but a raging egoism inflated by promiscuous online reading. To be sure, at the moment it's all just harmless hot air confined to the sphere of online forums, but one can't help worrying what effect it might have in future if kids grow up believing that professional training, peer review and other such institutions designed to protect the quality of information are just oppressive elitism.

Worse still they may come to regard knowledge itself as a competitive game of "I know more than you do". We humans are a fundamentally social species, but this net-reinforced individualism tends to make us into an anti-social species who see life as a zero-sum game with everyone else as a competitor. Actually knowledge is a team game in which you have to learn the rules, collaborate with others and practise regularly.

I COMMENT, YOU TROLL, THEY HATE-SPEAK

Dick Pountain/PC Pro/Idealog 229  02/08/2013

I've been reading a lot of anthropology recently. Not sure why, something about the current state of the world makes me want to know more about the workings of the pre-civilised mind. David Graeber's excellent paper "Toward an Anthropological Theory of Value" has a fascinating section about the ancient Maori and their worldview, in which I found one  item particularly provocative. That Maori custom of sticking the tongue out during their haka war dance, so familiar to all Rugby fans, always strikes us as a gesture of cheekiness or insult, because that's what it now means in most European cultures. That isn't what it originally meant to the Maori though: when aimed at an enemy during a battle it meant "You are meat, and I'm going to eat you", and true to their word, if they defeated you they might well have done so. For some reason this put me in mind of internet trolls.

There's recently been a surge of outrage about trolling on Twitter, sparked initially by rape threats against Labour MP Stella Creasy and feminist campaigner Caroline Criado-Perez, then amplified by bomb threats against various female journalists including the Guardian's Hadley Freeman. This stuff plays directly into those debates about internet censorship (Cameron's anti-porn filters) and freedom of speech, all of which constitute such a moral quagmire that one enters it very cautiously indeed. I've always been largely in favour of the freedom to robustly criticise, in any medium at all, since to take the opposite view would mean to shut up and toe the line, to accept things the way they are.

However in recent years this issue has become very much more complicated after various laws against "hate-speech" have been enacted. These laws make certain kinds of speech, most often racial insults, into prosecutable crimes, and that raises two very difficult points: firstly is it permissible to ban any form of speech, as opposed to action, at all (the pure freedom-of-speech argument); secondly, how do you gauge the degree of offensiveness of a speech act (necessary in order to decide whether it's prosecutable or not)? 

The pro freedom-of-speech argument can be defended in abstract philosophical terms, but in effect it always depends upon the old adage that "sticks and stones may break my bones, but words can never hurt me": that is, that verbal threats are not the same as the actions which they threaten, and do not cause the same damage. That's certainly true: the threat of rape is not as harmful as the act of rape and the threat to bomb doesn't kill or demolish buildings. However that's not to say that they cause no damage at all. One result of the recent revolution in neuroscience is confirmation that fear and anxiety do indeed cause physical damage to people. These primitive emotions are very useful from an evolutionary point of view: fear keeps you from stepping off cliffs or picking up rattlesnakes, while anxiety forms part of the necessary binding force between mammals and their highly-dependent offspring. However both operate by releasing corticosteroid hormones that have all kinds of nasty long-term effects if repeated too often: high blood pressure, hardening of the arteries and much more. Like fire-extinguishers they're necessary and welcome during an emergency (putting out a fire) but they make a mess of the furniture and are not to be played with.

Trolling is precisely playing with the fire extinguishers. It's meant to induce anxiety, fear or confusion in order to dissuade the victim from some attitude or action of which the troll disapproves. To that extent it's a form of politics and to that same extent is of a kind with terrorism, since both seek to achieve political ends by inducing fear. The crucial difference is that terrorists don't just speak but act: they don't just stick out their tongues but really do eat you. None of this is news of course, because bandits, tyrants, robber barons and military officers have known for millennia that you can bend a population to your will by terrorising them.

In fact there's now a whole new discipline that views our efforts to manipulate each other's emotions as the driving force of history. We manipulate our own emotions with music, dance, art and drugs: why else would alcohol, tea, coffee, sugar, tobacco, opium figure so highly in the history of trade? We manipulate others' emotions with scary stories (religion), clever rhetoric and the threat of violence. Democratic governments insist that we delegate the actual use of force to our police and army - and whether or not they can demand we also give up the threat of force remains a very fraught question - but never believe that threats do *no* harm.

Tuesday 14 January 2014

SOCIAL UNEASE

Dick Pountain/PC Pro/Idealog 228 07/07/2013

There's just been what one participant called a "food fight" on Facebook over which is better, Facebook or Google+, a debate in which I became involved against my better judgment. It was a civilised enough debate with no name-calling or bad language and our conclusion (if it deserves such a description) after 122 comments was that everyone stays on Facebook because everyone stays on Facebook. Put less tautologically, people use FB because all their friends also use it rather than Google+, which is an example of what economists call a "network effect".

A network effect is seen whenever the value of a product or service depends upon the number of others using it. The textbook example is always the telephone: when first invented there was no-one else with a phone to talk to, so its value was zero, but the more people who acquired phones the more useful it became. Network effects generate huge inertia against change. Once it had become established, any technically superior competitor to the telephone would have been out of luck all the way up to the advent of SMS and email (over 100 years after Bell's patent).
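
One common way to formalise this (Metcalfe's law, my gloss rather than anything from the debate) values a network by the number of possible connections between its users, which grows roughly as the square of the user count. A short sketch shows why the inertia is so brutal:

def network_value(users, value_per_link=1.0):
    """Value proportional to the number of distinct user-to-user links."""
    return value_per_link * users * (users - 1) / 2

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} users -> relative value {network_value(n):,.0f}")
# A technically better newcomer with 1,000 users is up against an incumbent
# whose million users give it roughly a million times the value.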

The main part of our debate though was about whether Google+ is in fact technically superior to Facebook, and on that point I was agnostic because I dislike them both almost equally intensely. Facebook's user interface drives me crazy, with important menus scattered at random among its different sections, or hidden behind tiny, unlabelled greyed-out arrows. Google+ certainly looks a lot cleaner, but that's partly because there's far less comment activity there - its two-column display would get pretty messy given a 122-comment debate. I do like the fact that it doesn't have all the useless crap that FB puts in its left and right-hand columns, and which I never use.

For me the thrill is gone from both networks, and social networks just aren't delivering anything much any longer. It's not for lack of trying. I currently have open (not necessarily active) accounts on Flickr, Facebook, Google+, Twitter, OpenDemocracy, Reddit, Tumblr, SoundCloud, the Guardian's CiF and LibraryThing, and terminated accounts at GoodReads and Quora. I still like to publish my pictures on Flickr because it's the only way anyone else ever gets to see them (ditto with music on SoundCloud) but there's no discussion, just people liking them. On Facebook you might get discussions, but often at a pretty unfocussed level because any passer-by can join in with irrelevancies or outright trolling. Robert Scoble, the guy whose post started that food fight, argues in https://plus.google.com/+Scobleizer/posts/RLrG3vCTjkD that being less popular is a Google plus:

"Second, Google+ has a better interest-based community. Since most of our high-school friends, family members, etc, disdain Google+ we've built new social graphs that usually have more to do with our interests than with our social connections. "

Well perhaps he has, but most of the communities I've looked at have been pretty random or just IT company PR. In the food fight Jack Schofield shrewdly observed that perhaps I'm pining for the focus of 1990s' Cix conferences and Ameol's threaded discussions (Jack and I go back a long way) and he may be right. Perhaps a mailing list is still the best way to pursue anything that *really* interests you, with invited people who're close enough for sensible discussion but not necessarily agreement. OpenDemocracy is the nearest I get to that.

I think the problem is one of boundaries. A social network like Facebook isn't a real community because you don't work or eat or sleep together, which leaves it as what anthropologists would call a ritual space where you perform merely symbolic acts to placate other people (or gods). I've noticed Facebook recently becoming dominated by three different kinds of such ritual behaviour: "preening", where you show off to other people how well you're doing; "grooming" where you wish your friends happy birthday or tell them how much you lurve them; and "preaching" to convert them to some cause or other. Sharing stuff - pictures, videos or articles - simply because you think it's good and that everyone might enjoy it appears to be on the wane as Facebook fragments into millions of smaller personal networks with few tastes in common.

To see whether the grass is greener elsewhere I posted a few items to my hitherto unused Tumblr account. A few reblogs of my ruined Scottish castle pix, but nothing much until I posted a photo of a spectacularly gnarled tree in Kew Gardens. That attracted a swarm of admirers with names beginning with moth-, dream-, meadow- or nymph-, and checking out some of their own blogs transported me straight back to the Oz era. Flowers, nudity and psychedelic squirly stuff. Fair enough, what goes around, comes around. 
