Dick Pountain/Idealog 258/06 January 2016 14:02
Regular readers might have gathered by now that music ranks equal first - alongside photography and programming - among my favourite recreations. In fact I combine all three in various ways, for example by applying filters to process my pictures, and by writing code to generate musical compositions. It's the latter that concerns me in this column. Around 14 years ago I first became interested in computer composition, and was inspired to write my own MIDI interface in Turbo Pascal v4 (my language squeeze of the time).
Rather than generating real-time music, this unit let me output MIDI files from Pascal programs, which could then be played in my sequencer of choice. I messed around for a while trying to do US-style minimalism (think Adams, Glass, Reich, Riley), constructing complex fugues and phase-change tunes that no human could play. The results never really satisfied me, partly because General MIDI instruments sounded pretty crap through the sound-cards of that era, and partly because I regularly ran up against memory shortages, using what remained a more or less 16-bit development system. I put the project aside for around 10 years until another spurt of enthusiasm arrived (still using Turbo, but now running in a DOS box on a Pentium/Windows XP system). That time around I made some tunes that were sufficiently convincing to put up on SoundCloud, but there were still nagging problems.
Basically the structure of compiled Pascal confined me to writing quite short tunes. Using fixed length strings and arrays as my main data structures made long-range structure, like successive movements on a varying theme, just too cumbersome to achieve, but creating separate short movements and splicing them together by hand was cheating. My whole intention was to write single programs that generated pieces of recognizable music, interesting if not necessarily pleasant.
Around this time I finally shucked off my (increasingly anachronistic) addiction to Turbo Pascal and fell wildly for Ruby, as documented in previous columns, but never did quite get around to rewriting my composing system in it. There things rested again, until as described a couple of columns ago I remade acquaintance with the hitherto spurned Python language. In order to get up to speed in it I rewrote my venerable Poker program - first effort in Basic on Commodore Pet circa 1980; next in Delphi under Windows; last in Ruby circa 2002 - which translated with surprising ease into Python. Brimming with confidence I thought, it's now or never, and got stuck into rewriting my music system.
I struggled at first because the kind of bare-metal bit-twiddling (curse you MIDI Variable-Length Quantity!) that's so easy in Turbo Pascal is far from obvious using Python's arbitrary-precision integers. Scanning the forums I soon found a GNU-licensed library by Mark Conway Wirt that does exactly what my old TP one did, though, and I was away. Writing the higher-level parts proved a revelation. Python's powerful dynamic sequence types - the tuple, the list and the dictionary - enabled me to do away with fixed-length arrays and memory allocation altogether, and let me completely redesign the system.
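For anyone who's never met one, a variable-length quantity packs an integer into seven-bit chunks, most significant first, with the top bit set on every byte except the last; a rough Python sketch of the idea (mine, not the library's code) runs as follows:

    def encode_vlq(value):
        """Pack a non-negative integer as a MIDI variable-length quantity:
        seven bits per byte, most significant group first, continuation bit
        (0x80) set on every byte except the last."""
        if value < 0:
            raise ValueError("VLQ values must be non-negative")
        chunks = [value & 0x7F]                    # least significant 7 bits, top bit clear
        value >>= 7
        while value:
            chunks.append((value & 0x7F) | 0x80)   # earlier groups carry the continuation bit
            value >>= 7
        return bytes(reversed(chunks))

    # A delta-time of 200 ticks encodes as the two bytes 0x81 0x48
    assert encode_vlq(200) == bytes([0x81, 0x48])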
The raw materials of my music remain strings representing sequences of pitch, time, duration and volume values, but now my top-level primitive called MIDIseq.phrase sucks in four such strings, like a ribosome chewing RNA, and chops them up into 4-tuples which are far more efficient and flexible for further processing. All of a sudden, thanks merely to a different set of data structures, my long-range structure problems went away: both horizontal (melody) and vertical (harmony) structures are now essentially without limit. I can write functions to generate random strings, reverse them, invert them, mix and combine them, even evolve them. Python's lambda functions let me generate novel musical scales and apply them on the fly, while iterators offer a fabulously compact way to encode long stretches of melody.
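To give the flavour (a much-simplified sketch, not my actual MIDIseq code, with lists of numbers standing in for my value strings), the core move is just to zip four parallel sequences into note tuples, which then become trivially easy to transform:

    def phrase(pitches, times, durations, volumes):
        """Zip four parallel value sequences into (pitch, time, duration, volume) tuples."""
        return list(zip(pitches, times, durations, volumes))

    def retrograde(notes):
        """Play a phrase backwards by reversing the note order."""
        return list(reversed(notes))

    def invert(notes, axis=60):
        """Mirror each pitch about an axis note (middle C = 60 by default)."""
        return [(2 * axis - p, t, d, v) for (p, t, d, v) in notes]

    theme = phrase([60, 62, 64, 65], [0, 1, 2, 3], [1, 1, 1, 2], [100] * 4)
    answer = invert(retrograde(theme))   # a mirrored, reversed statement of the theme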
I could hardly be more impressed by this text-book example of what the more savvy computer scientists have been telling us for decades, namely that programs equal algorithms plus data structures, not just algorithms. This deep truth is in serious danger of being lost nowadays, partly thanks to some of the truly awful languages the market has foisted upon us, and partly due to the TED generation's rather naive awe about algorithms. In popular journalism algorithms are all we ever hear about: Google's new search algorithm, the latest AI algorithms, what's the algorithm for a conscious robot, and worse inanities. What neuroscience actually teaches us about the way the brain works is that it's hardly an algorithmic engine at all, and depends rather little on sequential processing. It's really more like a big, soft, fatty mass of fabulously clever data structures. But enough of that, back to my "Contracerto in Z Flat Minor"...
[Dick Pountain is sorely tempted to enter an Internet Of Things fridge for the 2016 Eurovision Song Contest]
Thursday, 9 June 2016
COMMAND AND CONTROL
Dick Pountain/ Idealog 257/ 04 December 2015 09:34
During those awful last weeks of November 2015, with the bombing of a Russian passenger jet, the Paris shootings and the acrimonious debate over UK airstrikes in Syria, it was very easy to overlook a small but important news story, an Indonesian report into the loss of AirAsia flight QZ8501. In December 2014 that plane crashed into the Java Sea with loss of all 162 passengers and crew, and recovered "black box" data has enabled investigators to come to a firm, but disturbing, conclusion about the cause. It was what you might call software-assisted human error.
A broken solder joint in the Airbus A320-200's rudder travel limit sensor (probably frost damage) sent a series of error signals that caused the autopilot to turn itself off and forced the crew to take back manual control. Flustered by this series of messages the flight crew made a fatal error: the captain ordered the co-pilot, who had the controls, to "pull down", intending to reduce altitude. It was an ambiguous command which the co-pilot misinterpreted by *pulling* back on the stick, sending QZ8501 soaring up to its maximum height of 38,000 feet followed by a fatally irretrievable stall. The report recommends that international aviation authorities issue a new terminology guide to regularise commands in such emergencies, but I reckon this was a problem of more than just the words. Like all modern jets the A320 is fly-by-wire, with only electronic links from stick to control surfaces, and in current designs there's little mechanical feedback through the stick ("haptic" feedback is planned for the next generation, due from 2017). I'd guess the co-pilot received few cues to the enormity of his error by way of his hand on the stick. It's not enough to *be* in control, you have to *feel* in control.
I've been intrigued by the psychology of man-machine interaction ever since I saw ergonomic research by IBM, back in the '80s, that showed that any computer process which takes longer than four seconds to complete without visual feedback makes a user fear that it's broken. That insight led to all the progress-bars, hour-glasses and spinning hoops we've become so tediously familiar with since. An obscure and controversial branch of psychology called Perceptual Control Theory (PCT) can explain such phenomena, by contesting the principal dogma of modern psychology, namely that we control our behaviour by responding directly to external stimuli (fundamental to both old-style Behaviourism and newer Cognitive versions). PCT says we don't directly control our behaviour at all: we modify our *perceptions* of external stimuli through subconscious negative feedback loops that then indirectly modify our behaviour.
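In code terms the flavour of such a loop is easy to sketch (a toy model of my own, not anything from the PCT literature): the system never plans its behaviour, it just keeps nudging its action until the perceived value matches the reference.

    def pct_loop(reference, world, gain=0.1, steps=200):
        """Toy perceptual-control loop: act only to cancel the error between a
        reference value and the *perceived* state of the world."""
        action = 0.0
        for _ in range(steps):
            perception = world(action)         # what the senses report right now
            error = reference - perception     # how far perception is from the goal
            action += gain * error             # negative feedback: nudge, don't plan
        return action

    # A disturbance pushes the perceived tilt to 5 units and our action adds to it;
    # the loop settles with action near -5.0, leaving the perceived tilt near zero.
    correction = pct_loop(reference=0.0, world=lambda a: 5.0 + a)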
A classic example might be riding a bike: you don't estimate the visual angle of the frame from vertical and then adjust your posture to fit; you minimise your feeling of falling by continuously and unconsciously adjusting posture. Similar mechanisms apply to all kinds of actions and learning processes, and I was easy to convince because from childhood I've always hated skating (roller, ice, skiing), but I still love riding motorbikes at speed. There's no contradiction: when skating my feet feel out of control, whereas when biking they don't. Just some quirk of my inner ear. However, this all has some fairly important consequences for current debates about robotics, and driverless cars in particular.
The recent spate of celebrity doom-warnings about AI and robot domination is all directed against current assumptions that fully autonomous machines and vehicles are both desirable and inevitable. But maybe they're neither? The sad fate of AirAsia QZ8501 suggests both that over-reliance on the autopilot is severely reducing the ability of human crews to respond to emergencies, and also that it would be good to simulate the sort of mechanical feedback that pilots of old received through the stick, so they instinctively feel when they're steering into danger. All autonomous machines should be fitted, by law, with a full manual override that permits actual (or, grudgingly, simulated) mechanical control. Boosters of driverless cars will retort that computers react far faster than humans and can be programmed to drive more responsibly, which is quite true until they go wrong, which they will. Perhaps we need at last to augment Isaac Asimov's three laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
4. A robot must, if instructed, drop dead, even where such an order conflicts with the First, Second, and Third Laws.
[Dick Pountain believes that any application vendor whose progress-bar sticks at 99% for more than four seconds should suffer videoed beheading]
Friday, 22 April 2016
THE PYTHON HAS SPOKEN
Dick Pountain/Idealog 256/06 November 2015 13:09
One of the more frequent topics of this column has been my interest in programming, both as an important economic activity and, for me, a pleasing pastime: I refuse to call it a "hobby". My last such column was about Scratch, an innovative Lego-like visual programming language for teaching children (and adults) invented at MIT, and since then Kevin Partner has written an excellent PC Pro feature about Scratch (issue 253, p58, Nov 2015). In that column I confessed how quickly I'd become hooked by Scratch's blocky metaphor, despite some major limitations, and at the very end mentioned that some Scratchers in Berkeley had extended the language to remove these limitations, and called it Snap! It was more or less inevitable that some wet weekend would arrive when I'd take a deeper, non-cursory look at Snap!, and when it did I was re-hooked several times over.
Snap! looks pretty much like Scratch, similar enough to execute most Scratch programs with little or no alteration, but it adds several missing features that make it a more grown-up language. In place of Scratch's table-like arrays it has dynamic lists as first-class, named objects; it has local variables, not merely globals like Scratch; and it supports "continuations" that enable you to pass one block as a parameter to another block, a feature from advanced functional languages like Scheme and Haskell. Given its similarity to Scratch, learning Snap! was, er, a snap and within a day I was looking for serious programs to convert. I settled on several that I'd written some years ago in Ruby, including a visual matrix calculator and a visual simulation of a simple eco-system called "Critters". Both went easily into Snap! and worked well, "Critters" ending up as one single smallish block, thanks to the Scratch/Snap! concept of "sprites" (animated screen objects) which took care of all the graphics with no coding on my part.
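To put that block-passing idea into textual terms (a Python analogy of my own, nothing to do with Snap!'s internals), passing one block to another is just passing a function as a parameter:

    def twice(block, value):
        """Apply a passed-in 'block' (any one-argument function) two times over."""
        return block(block(value))

    def add_three(n):
        return n + 3

    print(twice(add_three, 10))         # 16: a named block applied twice
    print(twice(lambda n: n * n, 3))    # 81: an anonymous block works just as well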
Anyhow, the point I'm working up to is that this conversion job made a deep impression on me. It may not have escaped you that programming languages have something in common with religions, in that they can attract fanatical adherents who become impervious to criticism. My casual flip from Scratch to Snap! was a sort of apostasy and I'd enjoyed it, so I began sifting through my other Ruby projects, but in the process discovered the wreckage of an abandoned Poker program in Python. I'd flirted with Python back in 2002 but quickly abandoned it in favour of Ruby, on almost entirely aesthetic grounds. In those days I was a fanatical object-orientation nut and I hated Python's OOP syntax which involves prefixing just about everything with "self", something that Ruby managed to do without.
I've become more ecumenical since, and that weekend of Snap! coding had revived my enthusiasm for Lisp-style list programming, so to my own surprise I decided to finish that Poker program in Python rather than convert it. I downloaded a more recent version, WinPython 64-bit 3.4.3, and set to work. Instead of pursuing the pure OOP architecture I'd started with - card as a class, hand as a class, player as a class and so on - I rewrote the lower levels using Python's "tuples" instead. Each card is just a tuple (facevalue, suit), while a deck is a 52-piece tuple of cards. Soon I'd rewritten the whole thing using only tuples and lists, and never mentioned a class until I reached the player level. My code shrank to half its size, ran many times faster, and more importantly it worked, which the original never had properly. (Crucial to this success was Python's "list comprehension" construct, a fiendishly clever one-line trick for building lists by pattern matching).
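The gist of that lower level, much condensed, goes something like this (a sketch of the approach rather than the program itself):

    import random
    from collections import Counter

    SUITS = ('clubs', 'diamonds', 'hearts', 'spades')
    FACES = tuple(range(2, 15))          # 11 to 14 stand for J, Q, K and A

    # A card is just a (facevalue, suit) tuple; the whole deck is one list comprehension
    DECK = tuple([(face, suit) for suit in SUITS for face in FACES])

    def deal(deck, n=5):
        """Shuffle a copy of the deck and return a hand of n cards plus the remainder."""
        shuffled = random.sample(deck, len(deck))
        return shuffled[:n], shuffled[n:]

    def has_pair(hand):
        """True if any face value occurs at least twice in the hand."""
        counts = Counter(face for face, _suit in hand)
        return any(c >= 2 for c in counts.values())

    hand, rest = deal(DECK)
    print(hand, has_pair(hand))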
This taught me a serious lesson, one that might help you too. Rigidly adhering to a single methodology can be counterproductive: recognise which tools are right for each job and use them, regardless of dogma. Cards, decks and hands didn't need or deserve to be objects (no inheritance required) while players did because they have many attributes like a hand, a bankroll, a strategy, a personality. When I wrote the original program in OOP style I'd actually obscured its structure, in favour of a class hierarchy the language imposed on me. I'll never stop claiming that good programming is an art, one in which an eye for structure is even more important than a head for logic. Factoring - that is, breaking down your code into smaller chunks in the best way - and choosing the right data structures are far closer to musical composition than they are to science. What a pity that, unlike mathematics which occasionally spawns a Hollywood movie, beauty in programming can only ever be recognised by a handful of fellow programmers.
Biog: Dick = [Idealog for Idealog in PCPro if editor_approval == True]
Wednesday, 2 March 2016
FREE AT LAST?
Dick Pountain/ Idealog 255/ 07 October 2015 14:14
This column has always respected, but never wholly believed in, the proposition that "information wants to be free". My reservations stem from realism, namely that at present information bears costs for creating, storing and distributing it, costs that have to be met somehow. Exactly how they're met is a matter not for technology but politics (or political economy, to be pedantic). For example I do believe that all digitised archives of science and out-of-copyright literature should be available online for free, with the costs paid by the taxpayer as part of the education budget. Of course free-marketeers will dispute this and claim that it should all be charged for. On the other hand I also believe that those who want to write, make music, act or whatever for a living should be able to be paid to do so, and that right now the market is better at doing that than the state. This magazine continues to exist because you thought it worth paying six quid for, and advertisers thought it worth advertising in, and hence I get paid to write this column.
We can imagine other ways to get paid for creating new information. In the old, defunct Soviet Union the state did indeed subsidise many of the arts. More recently it's become possible to make serious money from YouTube videos, or writing a blog that takes adverts (even in rare cases by charging a subscription). What's certainly true though is that until food, housing, clothing, transport and bandwidth also become free, it's not going to be possible to perform such information-producing activities full-time *without* getting paid, but doing them only in one's spare time will in many cases - say novelists, concert pianists, film actors and directors - lead to a drastic loss of quality in the information produced.
There is a problem with markets, namely that they don't *truly* enjoy competition but aspire to become monopolies. What's more, the massive selling-power of our mass media amplifies a tendency toward "winner-take-all" in many markets, whereby a tiny handful of authors, games, pop and movie stars attract almost all of the huge revenues, leaving bare survival for the rest. A third fact about markets is that many people take the attitude that not only does information *not* want to be free but it actually wants to charge rent: sufficient rent in fact for them to live merely by owning and hoarding it rather than producing it. This tendency toward rent-seeking is most obvious in those film corporations who lobbied for the extension of copyright to 75 years to protect their revenues from back-catalogue, but less blatantly it also lies behind a trend in the software business toward rental rather than outright sale of programs. Microsoft Office 365, Adobe CS and even an increasing number of Android apps are choosing this route of monthly or annual subscription only (I recently discovered that both my favoured Android office suites, OfficeSuite and Polaris, have gone this way).
I've just finished reading "PostCapitalism", a remarkable book by Paul Mason, the economics editor of Channel 4 News, who argues that not merely does information want to be free, but that it will inexorably cause everything else to become free too. His analysis is surprisingly plausible (even if dense and hard work in places). Digital information is a substance unprecedented in human history as it can be reproduced for nothing, and this fact has effects far beyond the digital realm, subverting the mechanism by which prices are set. Profitable markets depend upon inequality of information, and once everyone has the same, instant, price information margins get squeezed toward zero (for evidence, see Amazon). Digital automation and robotic technologies also make it possible to reduce the amount of human labour needed to produce material goods, threatening to do away with millions of jobs and wages. Information has become at the same time too valuable and too cheap, undermining our whole economic model based on private property.
In one possible future, finance rules: jobless people live on credit and all profit comes from rent and interest rather than from exploiting labour. In another, the state pays everyone a living wage to voluntarily perform self-organised tasks that used to be state services. If that sounds crazy, consider as Mason does a familiar present-day example: “The biggest information product in the world - Wikipedia - is made by 27,000 volunteers, for free. If it were run as a commercial site, Wikipedia’s revenue could be $2.8bn a year. Yet Wikipedia makes no profit. And in doing so it makes it almost impossible for anybody else to make a profit in the same space.” So which future road map is crazy?
[Dick Pountain feels inclined to paraphrase St Augustine, "Give me free information... but not yet"]
Saturday, 16 January 2016
GET OVER IT
Dick Pountain/ Idealog 254/11 September 2015 11:39
If you reach my advanced age you'll discover that there are some irritants it's best to learn to live with because they're too much trouble to fix. For me two such irritants are Facebook and Microsoft Windows. What high hopes we had for Facebook when it first launched in the UK: we hoped it would replace the increasingly cranky Cix as the place where we Real Worlders could meet and exchange copy, but it hasn't worked out that way. (To be sure we do maintain a group on FB, but it's mostly confined to simple announcements and no copy gets posted there).
Facebook turned out to be less like a senior common room and more like a bustling, screeching market-square that drowns out all serious intent. It has the almost magical property of instantly turning everyone who enters into a moraliser or preener rather than an information provider: "look how well I'm doing", "I defy you not to weep over this baby dolphin/kitten/meerkat", "how dare you <blah> this <blah>", "how many <blahs> have *you* <blahed>?". A conduit for outrage and opinion rather than fact, as you can see for yourself by contrasting the tone of FB comments with those on any proper tech forum: the Greek philosophers would have said it's all about doxa (belief) rather than episteme (knowledge).
Many's the time over the past years that my finger hovered over the "Delete account" button, but that impulse passed once I discovered how to switch people's feeds off without offending them by defriending (despite FB constantly changing the way you do it, as a deterrent). I now have friends running into three figures but see only two figures-worth of posts. And recently I realised that FB makes a great "doxometer": post some nascent column idea and see how much flak it attracts (the more the better). When I recently mentioned that my Windows 8.1 indexing service had run wild and filled up my entire 500GB hard disk, I received mostly Harry Enfield style "that's not how you do it" point-scoring (having already fixed the problem using real advice gleaned from tech forums). Ditto when I posted, ironically, that what I'm hearing about the Windows 10 upgrade process is turning me into an IT "anti-vaxer". And so on to the second irritant I've learned to live with, Windows 8.1.
To look at my desktop now you'd never even guess I'm running it. The tiles are gone along with all those hooky apps. My desktop is plastered with (highly-deprecated) icons, some pointing to folders full of vital utilities, while the tools I use most are all on the taskbar, Mac style. Neither you nor I would ever know this isn't Windows 7, and it works well enough to forget about (until a minor hiccup like that full disk). Automatic updates are turned off and I pick which ones to install manually from time to time, so haven't yet had 10 stuffed onto me. Will I eventually upgrade to 10? Haven't decided. Anti-vaxer jokes aside, I worry my Lenovo is old enough (2013) to be in the danger-zone for driver SNAFUs, but also a recent article on The Register (http://www.theregister.co.uk/2015/07/31/rising_and_ongoing_cost_of_windows/) makes me wonder whether Windows 10 is intended to tie us into an Adobe-style monthly subscription, software-as-service model whereby I lose control over future upgrades.
If that does prove to be the case I'll definitely defect, not to a Mac as so often recommended by kind friends on Facebook, but to some variety of Linux. You see, I've also come to understand that I actually *enjoy* wrestling with operating systems: it's a far more fun way to keep my mental muscles exercised than solving word puzzles on a Nintendo Gameboy, in a Pringle cardigan, on the sofa. I don't object to paying for software per se - I paid for Windows 8.1 in the original cost of my Lenovo - but what I do oppose is the ongoing campaign by big software vendors to extend their monopoly status by extracting a rental, rather than sale, price from their customers. This tendency toward rent-seeking runs counter to an opposite tendency of networked digital technologies to make software ever cheaper, even free, and thereby reduce profits (which are needed to pay for R&D, not only to distribute to shareholders). We're getting into quite profound questions here, recently the subject of Paul Mason's intriguing book "Postcapitalism" which I'm currently reading. Mason believes, as do I, that the fact that digital products can be copied effectively for free tends to undermine the ability to set rational prices which lies at the heart of current market economics. But that, illuminated by the madness that is MadBid, is a subject for next month's column...
TEN COMMANDMENTS OF SAFETY
Dick Pountain/Idealog 253/06 August 2015 14:58
Perhaps you were as disturbed as I was by that report, back in May, that a US hacker travelling on a Boeing airliner claimed to have penetrated its flight control system via the entertainment system's Wi-Fi, and made the plane climb and turn from his laptop. Aviation experts have since rubbished his claim (but then Mandy Rice-Davies would definitely apply). It did however concentrate my mind, in a most unwelcome fashion, on the fact that all the planes we fly in nowadays employ fly-by-wire under software control, and that my confidence in software engineers falls some way short of my confidence in mechanical engineers.
This nagging anxiety was rubbed in further by the fatal crash of a military Airbus A400M in July, after its engine control software shut down three of its four engines just after take-off. It appears that accidental erasure of some config files during installation had deprived the software of certain torque calibration parameters that it needed to monitor engine power. These (literally) vital numbers were being loaded from an external file, so in other words the safety of this aircraft was being governed by a programming practice on a par with installing Windows updates. Nice.
To me safety-critical software is about more than just fear for my own ass: it's been of concern for many years. I started out as a Forth programmer, at a time when that language was widely used in embedded control systems, and attended conferences on the subject of safety in both software and hardware architecture. Then I graduated, via Pascal and Modula-2, to becoming an admirer of Niklaus Wirth's ideas on good programming practice, and finally on to object-oriented programming as the way to wrest control over the structure of really large programs. Object-orientation is now of course the rule, supported by every language from JavaScript to Scratch, but I sometimes wonder whether it still means anything very much, or whether it has become a mere style to which lip-service is paid. Loading critical data from unreliable external files violates the principles of encapsulation in more ways than I can count.
I did a bit of Googling and found lots of papers about safety-critical architectures and redundant hardware systems. Redundancy is a key safety concept: you build three separate computers, with CPUs from different manufacturers running different software written by different teams - which have been demonstrated to produce the same outputs from the same inputs - then go with the majority verdict, the idea being that the *same* software or hardware bug is very, very unlikely to arise in all three. Interestingly enough, the latest of these papers seemed to be dated around 2008.
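The voting logic itself is almost trivially simple, which is part of its appeal; here's a bare-bones sketch of the idea (mine, not anything from an avionics standard), with three stand-in channels in place of real, independently written implementations:

    def majority_vote(a, b, c):
        """Return the value that at least two of three independent channels agree on;
        raise an error if all three disagree, since no safe answer exists."""
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("no two channels agree: enter fail-safe mode")

    # Three hypothetical channels standing in for independently developed implementations
    channels = (lambda: 35000, lambda: 35000, lambda: 34998)
    altitude = majority_vote(*(ch() for ch in channels))
    print(altitude)   # 35000: the odd channel out is outvoted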
Surely it can't be that, as in so many other spheres (like, say, banking), the optimists have taken over the farm and started trusting too much? Then I stumbled across NASA's 10 rules for developing safety critical code. Now NASA tends to work with computer hardware that's several decades behind state-of-the-art but - give or take a Hubble or two - it's had fairly few disasters that were down to software. Here are its rules, severely abbreviated by me:
1: All code to have simple control flow constructs: no GOTO, direct or indirect recursion.
2: All loops to have a fixed upper bound, which can be statically proved never to be exceeded.
3: No dynamic memory allocation after initialization.
4: No function longer than 60 lines of code.
5: The assertion density of the code to average a minimum of two assertions per function.
6: Data objects must be declared at the smallest possible level of scope.
7: Calling functions must check non-void function return values and the validity of all parameters.
8: Preprocessor to be restricted to headers and simple macros. Token pasting, variable argument lists (ellipses) and recursive macro calls all forbidden.
9: Pointers to be restricted to one level of dereferencing, which mustn't be hidden inside macros or typedefs. Function pointers forbidden.
10: All code must be compiled with all warnings enabled at their most pedantic setting. All code must compile at these settings without any warnings. All code to be checked daily with at least one — preferably several — state-of-the-art static source code analyzer, and pass with zero warnings.
These rules are, realistically enough, aimed at plain old C programmers, not at trendy new languages, but they impose a degree of rigour comparable to most object-oriented languages. Their recommended heavy use of Assertions is interesting. Assertions are supported directly in Eiffel, Ada and some other languages, and can be added to C via the header "assert.h". They can specify the desired value range of some variable at some point in program execution and raise a runtime error when not met: an example might be "assert( TorqueCalibrationParameter > 0)".
Sunday, 13 December 2015
LOSING THE PLOT?
Dick Pountain/Idealog 252/27 June 2015 16:27
Eagle-eyed readers may have noticed that I haven't mentioned my Nexus 7 tablet in recent months, which is because, until a couple of days ago, it languished in a drawer, fatally wounded by the Android 5.0 Lollipop update. (Google should have broken with its confectionery-oriented naming convention and named it "GutShot"). Lollipop rendered it so slow as to be unusable - five minutes or more to display the home screen - and even after scores of reboots, cache clearances and a factory reset, it remained not just slow but its battery appeared to have died too, taking a day to recharge and barely an hour to discharge again.
I did investigate downgrading back to 4.4 KitKat, but the procedures involved are absolutely grisly, requiring not just rooting, but downloading huge ISO image files via a PC with the ever-present chance of a failure that bricks the tablet completely: all totally unacceptable for a consumer-oriented device. (It did set me wondering how the Sale of Goods Act might apply to destructive OTA upgrades that aren't reversible by normal human beings...) Instead I went to my local PC World and picked up an Asus Memo Pad 7 for £99, which I repopulated with all my apps and data within a morning, thanks to the brighter side of Google's Cloud; it has worked a treat ever since, and has a front camera and card-slot too. Then last week I discovered that Android 5.1.1 was now available for the Nexus and, with nothing to lose, installed it. A mere six months after its assassination my Nexus came back to life again, faster and slicker than the Asus, with its battery miraculously resurrected and lasting longer than it did originally.
There has to be a moral to this tale somewhere, but I'll be damned if I can identify it. Google's testing of 5.0 was clearly inadequate, and its lethargy in keeping us victims informed and releasing a fix was not far short of criminal. But stuff like this happens on the IT battlefield all the time. A bigger issue is that it destroys confidence in the whole Over-The-Air update model which I'd come to see as the way forward. If Google (or Apple, or Microsoft) wishes to mess directly with my machine, then at the very least they'll need to provide a simple, fool-proof mechanism to unwind any damage done. But that leads on to another, deeper issue: it feels to me as though all these new-generation, cloud-oriented firms are approaching some sort of crisis of manageability. The latest phones and tablets are marvels of hardware engineering, with their cameras and motion sensors and GPS and NFC and the rest, but all these services have to be driven from and integrated into operating system kernels that date back to the 1980s, using programming languages that fall some way short of state-of-the-art. The result is a spectacular cock-up like Lollipop, or those minor memory-leaks that cause your iPad to gradually clag up until you have to reboot it.
It is of course inconceivable to start from scratch at this point in history, but I was reminded last week of what might have been when I exchanged emails, after twenty years, with Cuno Pfister, a Swiss software engineer I knew back in Byte days who used to work on Oberon/F with Niklaus Wirth in Zurich. Oberon was Wirth's successor to Modula-2, the culmination of his programming vision, and Oberon/F was a cross-platform, object-oriented framework with the language compiler at its heart, complete with garbage collection to combat memory leakage, preconditions to assist debugging, and support for a Model-View-Controller architecture. Its basic philosophy was that an operating system should become a software layer that "imports hardware and exports applications". New hardware drivers and applications were written as insulated modules, usually by extending some existing module, and they clicked into place like Lego bricks. Strong modularity and strong typing enabled 90% of errors to be caught at compile time, while garbage collection and preconditions simplified debugging the rest. It was precisely the sort of system we need to program today's tablets, but of course it could make no headway at all against the sheer inertia of Unix and C++.
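Preconditions of that sort are easy enough to retrofit in a modern language; a rough Python sketch (my own, nothing to do with Oberon/F, and set_engine_torque is just a made-up example) might look like this:

    import functools

    def precondition(check, message="precondition violated"):
        """Decorator that refuses to run a function unless check(*args, **kwargs) holds,
        so bad inputs are caught at the call site rather than deep inside the code."""
        def decorate(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                if not check(*args, **kwargs):
                    raise ValueError(func.__name__ + ": " + message)
                return func(*args, **kwargs)
            return wrapper
        return decorate

    # Hypothetical example: refuse to accept a non-positive calibration value
    @precondition(lambda torque: torque > 0, "torque calibration must be positive")
    def set_engine_torque(torque):
        return "torque set to " + str(torque)

    set_engine_torque(42)      # fine
    # set_engine_torque(0)     # raises ValueError before any damage is done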
What I miss most about that concept is having the programming language compiler built right into the OS. I still occasionally get the urge to write little programs, but all the tools I have are either massive overkill like Visual Studio, or command-line austerity like Ruby, and the APIs you have to learn are hideous too. I did recently discover a quite usable Android JavaScript tool called DroidScript, and the first thing I wrote in it, as is my historical habit, was a button that when pressed says "bollox"...