Thursday, 12 January 2017

ALGORITHMOPHOBIA

Dick Pountain/Idealog 262/05 May 2016 11:48

The ability to explain algorithms has always been a badge of nerdhood, the sort of trick people would occasionally ask you to perform when conversation flagged at a party. Nowadays, however, everyone thinks they know what an algorithm is, and many people don't like them much. Algorithms seem to have achieved this new familiarity/notoriety because of their use by social media, especially Google, Facebook and Instagram. To many people an algorithm implies the computer getting a bit too smart, knowing who you are and hence treating you differently from everyone else - which is fair enough, as that's mostly what they are supposed to be for in this context. However, what kind of distinction we're talking about does matter: is it showing you a different advert for trainers than it shows your dad, or is it selecting you as a target for a Hellfire missile?

Some newspapers are having a ball with "algorithm" as a synonym for the inhumane objectivity of computers, liable to crush our privacy or worse. Here are two sample headlines from the Guardian over the last few weeks: "Do we want our children taught by humans or algorithms?", and "Has a rampaging AI algorithm really killed thousands in Pakistan?" Even the sober New York Times deemed it newsworthy when Instagram adopted an algorithm-based personalised feed in place of its previous reverse-chronological feed (a move made last year by its parent Facebook).

I'm not algorithmophobic myself, for the obvious reason that I've spent years using, analysing and even writing a column (for Byte) about the darned things, but this experience grants me a more-than-average awareness of what algorithms can and can't do, where they are appropriate and what the alternatives are. What algorithms can and can't do is the subject of Algorithmic Complexity Theory, and it's only at the most devastatingly boring party that one is likely to be asked to explain that. ACT can tell you about whole classes of problem for which no algorithm that runs in manageable time is available. As for alternatives to algorithms, the most important is permitting raw data to train a neural network, which is the way the human brain appears to work: the distinction being that writing an algorithm requires you to understand a phenomenon sufficiently to model it with algebraic functions, whereas a neural net sifts structures from the data stream in an ever-changing fashion, producing no human-understandable theory of how that phenomenon works.

Some of the more important "algorithms" that are coming to affect our lives are actually more akin to the latter, applying learning networks to big data sets like bank transactions and supermarket purchases to determine your credit rating or your special offers. However the algorithms that worry people most tend not to be of that sort, but are algebraically based, measuring multiple variables and applying multiple weightings to them to achieve an ever greater appearance of "intelligence". They might even contain a learning component that explicitly alters weightings on the fly, Google's famous PageRank algorithm being an example.
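By way of illustration only - not anyone's real ranking formula, just the shape of the thing - such an algorithm boils down to a weighted sum whose weights get nudged whenever feedback arrives:

# illustrative variable names; the weights drift towards whatever users respond to
weights = {"links_in": 0.5, "freshness": 0.3, "click_rate": 0.2}

def score(page):
    return sum(weights[k] * page[k] for k in weights)

def learn(page, feedback, rate=0.01):
    # nudge each weight up or down according to how users actually responded
    for k in weights:
        weights[k] += rate * feedback * page[k]

page = {"links_in": 0.8, "freshness": 0.4, "click_rate": 0.6}
print(score(page))        # about 0.64
learn(page, feedback=+1)  # users liked it, so the weights drift towards its profile
print(score(page))        # slightly higher next time round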

The advantage of such algorithms is that they can be tweaked by human programmers to improve them, though this too can be a source of unpopularity: every time Google modifies PageRank, a host of small online businesses get it in the neck. Another disadvantage of such algorithms is that they can "rot", decreasing rather than increasing in performance over time, prime examples being Amazon's you-might-also-like and Twitter's people-you-might-want-to-follow. A few years ago I was spooked by the accuracy of Amazon's recommendations, but that spooking ceased after it offered me a Jeffrey Archer novel; likewise when Twitter thought I might wish to follow Jimmy Carr, Fearne Cotton, Jeremy Clarkson and Chris Moyles.

Flickr too employs a secret algorithm to measure the "Interestingness" of my photographs: number of views is one small component, as is the status of the people who favourited it (not unlike PageRank's incoming links), but many more variables remain a source of speculation in the forums. I recently viewed my Top 200 pictures by Interestingness for the first time in ages and was pleasantly surprised to find the algorithm much improved. My Top 200 now contains more manipulated than straight-from-camera pictures; three of my top twenty are from recent months and most from the last year; all 200 are pix I'd have chosen myself; and their order is quite different from "Top 200 ranked by Views", that is, what other users prefer. As someone who takes snapshots mostly as raw material for manipulation, I read this as the algorithm suggesting that I'm improving rather than stagnating, and it now closely approximates my own taste, which I find both remarkable and encouraging. The lesson? Good algorithms in appropriate contexts are good, bad algorithms in inappropriate contexts are bad. But you already knew that, didn't you...

Monday, 19 September 2016

IDENTITY CRISIS

Dick Pountain/Idealog 261/04 April 2016 13:15

It's a cliche, but none the less true, that many IT problems are caused by the unholy rapidity of change in our industry. However I've just had an irritating lesson in the opposite case, where sometimes things that remain the same for too long can get you into trouble. It all started last week when a friend in Spain emailed to say that mail to my main dick@dickpountain.co.uk address was bouncing, and it soon turned into a tragicomedy.

A test mail showed that mail was indeed broken, and that my website at www.dickpountain.co.uk had become inaccessible too. Something nasty had happened to my domain. This wasn't without precedent, as I wrote here exactly a year ago in PC Pro 249 about the way Google's tightened security had busted my site. The first step, equivalent in plumbing terms to lifting the manhole lid, is to log into the website of my domain registrar Freeparking, which I duly attempted only to be rudely rebuffed. Their website had been completely redesigned and my password no longer worked - a message said it was too weak, so they'd automatically replaced it with a stronger one. A stronger one that unfortunately they'd omitted to tell me.

So, click on the "Forgot Password" button where it asks me to enter the email address my account was opened with. Trying all four addresses I've used over the last decade, one after the other, garners nothing but "password change request failed". Send an email to Freeparking support, who reply within the hour (my experience with them has always been good). Unfortunately their reply is that my domain has expired. Gobsmacked, because for the eight years I've had the domain they've always sent me a renewal reminder in plenty of time. Flurry of emails establishes that there's still time to renew before the domain name gets scrubbed or sold, but to do that I have to get into my account. Can they tell me the password? No they can't, but they can tell me the right email address is my BTinternet address. Go back to Forgot Password and use that, but still no password reset mail.

At this point I must explain my neo-Byzantine email architecture. Gmail is my main hub, where I read and write all my mail. It gathers POP mail from my old Aol and Cix addresses, and dickpountain.co.uk is simply redirected into it, but all mail redirected from dickpountain.co.uk also gets copied to my BTinternet account as a sort of backup. Unfortunately, with the domain expired, that feedback loop appears to have broken too.

By now I'm starting to gibber under my breath, and the word "nightmare" has cropped up in a couple of messages to support. I ask them to change the email address on my account to my Gmail address, but they can't do that without full identity verification, so I send a scan of my passport, they change the address, and password resets still don't arrive... Now desperate, I try once more entering each of the four old addresses and make this discovery: three of them say "password reset request failed", but Aol actually says nothing. Whirring of mental cogs, clatter of deductions: the account address is actually Aol and reset requests are going there, but Gmail isn't harvesting them. Go to Aol.com and try to access my email account (which has been dormant for the best part of a decade), only to be told that due to "irregular activities" it has been locked. I now know that a Lenovo Yoga doesn't show teeth marks...

Another whole email correspondence ensues with aolukverify@aol.com, with both a passport and a water bill required this time, and I get back into the Aol account, where sure enough are all the password reset mails, as well as Freeparking's renewal reminders. I get back into my Freeparking account and, after a bit of nonsense involving PayPal, VISA and HSBC, I renew my domain. Don't get me wrong, I'm not complaining about the serious way both companies treated my security, and their support people were both helpful and efficient. There's really no moral to this tale beyond the Second Law of Thermodynamics: ignore something for long enough and entropy will break it while your back is turned.

The only thing is, it sparked off a horrible fantasy. It's the year 2025 and President Trump is getting ratty as the end of his second term approaches. Vladimir Putin, who has married one of his ex-wives, makes one nasty jibe too many over the phone and Donald presses the Big Red button - then thinks better of it and goes to the Big Green Reset button. He can't remember the password, and on pressing Forgot Password the memorable question is the middle name of his second wife (case-and-accent-sensitive)...

Monday, 8 August 2016

OLOGIES AND ACIES

Dick Pountain/Idealog 260/09 March 2016 13:42

I imagine many readers are well old enough to remember BT's 1988 TV advert starring Maureen Lipman, in which she comforted her grandson over his bad exam results by pointing out that he'd passed an "ology" (even if it was just sociology). I've never obtained an ology myself, only an "istry" or two, but in any case I'm actually rather more interested in "acies": literacy, numeracy and a couple of others that have no "acy" name.

Not a day goes by without me being thankful for receiving an excellent scientific education. A couple of decades ago I'd have thought twice before admitting that, but no longer, because pop science has become a hugely important part of popular culture, from TED talks to sci-fi movies, via mile after bookshelf mile of explanatory books on cosmology, neuroscience, genetics, mathematics, particle physics, even a few ologies. Being a nerd is now a badge of honour. But my thankfulness has little to do with any of that, and more to do with the way basic numeracy, plus a knowledge of statistics ("riskacy"?) and energetics ("ergacy"?), helps me understand everything that life throws at me, from everyday accidents and illnesses, through politics, to my entire philosophical outlook.

Take for example relationships with doctors. As an ageing male I'm on the receiving end of a variety of government-sponsored preventive medicine initiatives, aimed at reducing the incidence of heart attack, stroke, diabetes and other disorders. After an annual battery of tests I'm encouraged to consider taking a variety of drugs, but before agreeing I ask my GP to show me the test results on his PC screen, both as annual historical graphs and as raw figures compared to recommended ranges. When shown my thyroid hormone level marginally out of range, I can argue about experimental error and standard deviations, and win, since my doctor's no statistician. This process has led me to take lisinopril for my blood pressure, but to refuse statins for my marginal cholesterol and ditto for thyroxine.

Numeracy, particularly concerning percentages and rates of change (i.e. calculus), is becoming essential to an understanding of just about everything. If some website tells you that eating hot dogs increases your risk of stomach cancer by 20%, you need to be able to ask from what base rate: 0.000103 rising to 0.000124 doesn't sound nearly so scary. Western citizens face a risk of death from terrorism way below that from being in a car crash, but those risks *feel* very different subjectively. We accept driving risk more readily than dying from an inexplicable act of violence; our politicians know this and so over-react to terrorism and under-react to road safety. But the "acy" that's most poorly distributed of all concerns energetics.
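To make the point concrete, here's a two-line sanity check (using the illustrative figures above, not real epidemiology): a "20% increased risk" only means anything once you know the base rate it multiplies.

base_rate = 0.000103                  # the background risk
relative_increase = 0.20              # the scary headline figure
new_rate = base_rate * (1 + relative_increase)
print(new_rate)                       # about 0.0001236, i.e. roughly 0.000124
print(new_rate - base_rate)           # the absolute extra risk: around 2 in 100,000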

Perhaps a minority of scientists, and almost no lay people, understand the laws of thermodynamics in theory, let alone have an intuitive grasp that could be usefully applied to everyday life. Thanks to the pop science boom everyone knows Einstein's formula E = mc², but that's only marginally relevant to everyday life since we don't ride nuclear-powered cars or buses, and our bodies run on chemical rather than nuclear reactions. Hence the confusion among would-be dieters over counting calories: does water have any calories? Do carrots have more than celery?

Some variables that really do matter for an energetic understanding of life are energy density and the rate at which energy gets converted from one form to another. You could place a bowl of porridge on the table alongside a piece of dynamite that contains the same number of calories. Dynamite has around 300 times the energy density of porridge so it will be a small piece. More important though, the calories in the porridge (as starches, sugars, protein) get converted to muscular effort and heat rather slowly by your digestive system, while a detonator turns the dynamite's calories into light, heat and sound very quickly indeed. But grind wheat or oats to a fine-enough powder, mix with air as a dust cloud, and deadly industrial explosions can occur.

Energy density calculations affect the mobile device business greatly, both when seeking new battery technologies and when considering the safety of existing ones like lithium-ion (just ask Boeing). As for transport, fossil fuels and climate change, they're equally crucial. Electric cars like Tesla's are just about doable now, but electric airliners aren't and may never be, because the energy density of batteries compared to hydrocarbons is nowhere near enough. And when people fantasise online about the possibility of transporting the human race to colonise Mars, energetics is seldom properly discussed. We all ultimately live off energy (and negative entropy) that we receive from sunlight, but Mars is much further away. Try working out the energetics of "terraforming" before you book a ticket...

GAME ON?

Dick Pountain/Idealog 259/08 February 2016 11:16

I wouldn't ever describe myself as a gamer, but that's not to say that I've never played any computer games. On the contrary, I was once so hooked on Microsoft's "FreeCell" version of solitaire that I would download lists of solutions and complexity analyses by maths nerds. I was almost relieved when those miserable sods at Redmond removed it from Windows 7, and have resisted buying any other version. Long, long before that I played text adventures like Zork (under CP/M), dungeon crawls like Wizardry (crude graphics on the Apple II, but highly addictive nevertheless), and graphic shooters like Doom. I even finished the hilariously grisly Duke Nukem. I still play a single game - the gorgeous French "stretchy" platform game Contre Jour - on my Android tablet.

So, with this rap-sheet, how can I claim not to be a gamer? Because I lost all interest in shoot-'em-ups after Duke Nukem, never got into the modern generation of super-realistic shooters like Call of Duty and Grand Theft Auto, and have never purchased a computer for its game-play performance. I do realise that this cuts me off from a major strand of popular culture among today's youth, and *the* major source of current entertainment industry revenues, but I had no idea just how far I'd cut myself off until I read an article in the Guardian last week.

Called "Why my dream of becoming a pro gamer ended in utter failure" (http://gu.com/p/4fzjt/sbl), this fascinating article by tech reporter Alex Hern came as a revelation to me. First of all, I had only the vaguest idea that computer games were being played for money, but secondly I was utterly clueless as to exactly *how* these games are being monetised. The games Hern played aren't GTA-style shooters but up-to-date versions of that mean old Wizardry I used to play, in which play proceeds by casting spells chosen from a range of zillions. The strategy of these games, played online against human opponents, lies in carefully choosing the deck of spell cards you'll deploy, and in how and when to deploy each one. In game-theoretic terms this is fairly close to Poker, revolving around forming a mental map of your opponents' minds and strategies. And like Poker, these games (for example Hearthstone, which Hern tried) are played in championship series with huge cash prizes of £100,000, but as he soon realised, only one person gets that pay-off and the rest get nothing for a huge expenditure of playing effort. Instead the way most pro gamers get a regular, but more modest, pay-off is by setting up a channel on the streaming network Twitch, on which people watch you play while being shown paid-for ads.

I'll say that again in case it hasn't sunk in. You're playing a computer simulation of imaginary spell casting, against invisible opponents via a comms link, and people are paying to watch. This intrigues me because it fits so beautifully into a new analysis of modern economies - one might call it the "Uberisation Of Everything" - that I, along with many others, am trying to explore. Everyone has recently been getting all worked up about robots stealing our jobs, but for many young people the miserable jobs on offer are no longer worth protecting, and they dream instead of getting rich quickly by exploiting what talents they were born with: a pretty face, a fine voice, a strong imagination, in football, in hip-hop, or... in streaming Hearthstone.

IT lies at the very heart of this phenomenon. Long before robots get smart enough to do all human jobs, computers are assisting humans to do jobs that once required enormous, sometimes lifelong, effort to learn. Uber lets you be a taxi-driver without doing "The Knowledge"; a synthesiser makes you into an instant keyboard player and auto-tune a viable singer; an iPhone can make you a movie director; and Twitch can make you a Poker, or Hearthstone, or Magic pro. The casino aspect of all this, that your luck might make you instantly rich so you don't have to work, merely mirrors the official morality of the finance sector, where young dealers can make billion dollar plays and end up driving Ferraris (and very rarely in jail, LIBOR-fiddling notwithstanding). This is the economics, not of the Wild West itself, but of Buffalo Bill's Wild West Show. Over recent decades the media have so thoroughly exposed us all to the lifestyles of billionaires that now everyone aspires to be a star at something, work is regarded as the curse that Oscar Wilde always told us it was, and money (lots of it) is seen as the primary means to purchase pleasure and self-esteem. The Protestant Work Ethic that motivated our parents or grandparents is being flushed spiralling down the pan...

Thursday, 9 June 2016

I GOT (ALGO)RHYTHM

Dick Pountain/Idealog 258/06 January 2016 14:02

Regular readers might have gathered by now that music ranks equal first - alongside photography and programming - among my favourite recreations. In fact I combine all three in various ways, for example by applying filters to process my pictures, and by writing code to generate musical compositions. It's the latter that concerns me in this column. Around 14 years ago I first became interested in computer composition, and was inspired to write my own MIDI interface in Turbo Pascal v4 (my language squeeze of the time).

Rather than generating real-time music, this unit let me output MIDI files from Pascal programs, which could then be played in my sequencer of choice. I messed around for a while trying to do US-style minimalism (think Adams, Glass, Reich, Riley), constructing complex fugues and phase-change tunes that no human could play. The results never really satisfied me, partly because General MIDI instruments sounded pretty crap through the sound-cards of that era, and partly because I regularly ran up against memory shortages, using what remained a more or less 16-bit development system. I put the project aside for around 10 years until another spurt of enthusiasm arrived (still using Turbo, but now running in a DOS box on a Pentium/Windows XP system). That time around I made some tunes that were sufficiently convincing to put up on SoundCloud, but there were still nagging problems.

Basically the structure of compiled Pascal confined me to writing quite short tunes. Using fixed-length strings and arrays as my main data structures made long-range structure, like successive movements on a varying theme, just too cumbersome to achieve, and creating separate short movements then splicing them together by hand was cheating. My whole intention was to write single programs that generated pieces of recognisable music, interesting if not necessarily pleasant.

Around this time I finally shucked off my (increasingly anachronistic) addiction to Turbo Pascal and fell wildly for Ruby, as documented in previous columns, but never did quite get around to rewriting my composing system in it. There things rested again, until as described a couple of columns ago I remade acquaintance with the hitherto spurned Python language. In order to get up to speed in it I rewrote my venerable Poker program - first effort in Basic on Commodore Pet circa 1980; next in Delphi under Windows; last in Ruby circa 2002 - which translated with surprising ease into Python. Brimming with confidence I thought, it's now or never, and got stuck into rewriting my music system.

I struggled at first because the kind of bare-metal bit-twiddling (curse you, MIDI Variable-Length Quantity!) that's so easy in Turbo Pascal is far from obvious using Python's arbitrary-precision integers. Scanning the forums I soon found a GNU-licensed library by Mark Conway Wirt that does exactly what my old TP one did, though, and I was away. Writing the higher-level parts proved a revelation. Python's powerful dynamic sequence types - the tuple, the list and the dictionary - enabled me to do away with fixed-length arrays and memory allocation altogether, and let me completely redesign the system.
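For anyone curious about what that bit-twiddling involves, here's a minimal sketch of the VLQ scheme itself (my own illustration, not code from Wirt's library): a delta-time gets chopped into 7-bit groups, most significant first, with the top bit set on every byte except the last.

def encode_vlq(value):
    # encode a non-negative integer as a MIDI variable-length quantity
    groups = [value & 0x7F]                    # the final byte keeps its top bit clear
    value >>= 7
    while value:
        groups.append((value & 0x7F) | 0x80)   # continuation bytes get the top bit set
        value >>= 7
    return bytes(reversed(groups))

print(encode_vlq(0x00).hex())      # '00'
print(encode_vlq(0x80).hex())      # '8100'
print(encode_vlq(0x0FFFFF).hex())  # 'bfff7f'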

The raw materials of my music remain strings representing sequences of pitch, time, duration and volume values, but now my top-level primitive called MIDIseq.phrase sucks in four such strings, like a ribosome chewing RNA, and chops them up into 4-tuples, which are far more efficient and flexible for further processing. All of a sudden, thanks merely to a different set of data structures, my long-range structure problems went away: both horizontal (melody) and vertical (harmony) structures are now essentially without limit. I can write functions to generate random strings, reverse them, invert them, mix and combine them, even evolve them. Python's lambda functions let me generate novel musical scales and apply them on the fly, while iterators offer a fabulously compact way to encode long stretches of melody.
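As a rough sketch of the shape of the thing (my real MIDIseq.phrase does rather more, and these note values are made up for illustration), the zipping, the lambda transforms and the iterators all reduce to a few lines of Python:

import itertools, random

def phrase(pitches, times, durations, volumes):
    # zip four parallel value sequences into (pitch, time, duration, volume) 4-tuples
    return list(zip(pitches, times, durations, volumes))

notes = phrase([60, 62, 64, 65], [0, 48, 96, 144], [48, 48, 48, 96], [80, 80, 90, 100])

# a lambda applies a scale-like transform on the fly (here plain transposition)
transpose = lambda tune, offset: [(p + offset, t, d, v) for p, t, d, v in tune]

# a generator yields an unbounded stretch of melody from a short motif
def meander(motif):
    for p in itertools.cycle(motif):
        yield p + random.choice((-12, 0, 0, 12))   # occasional octave jumps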

I could hardly be more impressed by this textbook example of what the more savvy computer scientists have been telling us for decades, namely that programs equal algorithms plus data structures, not just algorithms. This deep truth is in serious danger of being lost nowadays, partly thanks to some of the truly awful languages the market has foisted upon us, and partly due to the TED generation's rather naive awe of algorithms. In popular journalism algorithms are all we ever hear about: Google's new search algorithm, the latest AI algorithms, what's the algorithm for a conscious robot, and worse inanities. What neuroscience actually teaches us about the way the brain works is that it's hardly an algorithmic engine at all, and depends rather little on sequential processing. It's really more like a big, soft, fatty mass of fabulously clever data structures. But enough of that, back to my "Contracerto in Z Flat Minor"...

[Dick Pountain is sorely tempted to enter an Internet Of Things fridge for the 2016 Eurovision Song Contest]

COMMAND AND CONTROL

Dick Pountain/Idealog 257/04 December 2015 09:34

During those awful last weeks of November 2015, with the bombing of a Russian passenger jet, the Paris shootings and the acrimonious debate over UK airstrikes in Syria, it was very easy to overlook a small but important news story, an Indonesian report into the loss of AirAsia flight QZ8501. In December 2014 that plane crashed into the Java Sea with the loss of all 162 passengers and crew, and recovered "black box" data has enabled investigators to come to a firm, but disturbing, conclusion about the cause. It was what you might call software-assisted human error.

A broken solder joint in the Airbus A320-200's rudder travel limit sensor (probably frost damage) sent a series of error signals that caused the autopilot to turn itself off and forced the crew to take back manual control. Flustered by this series of messages, the flight crew made a fatal error: the captain ordered the co-pilot, who had the controls, to "pull down", intending to reduce altitude. It was an ambiguous command, which the co-pilot misinterpreted by *pulling* back on the stick, sending QZ8501 soaring up to its maximum height of 38,000 feet, followed by a fatally irretrievable stall. The report recommends that international aviation authorities issue a new terminology guide to regularise commands in such emergencies, but I reckon this was a problem of more than just the words. Like all modern jets the Airbus A320 is fly-by-wire, with only electronic links from stick to control surfaces, and in current designs there's little mechanical feedback through the stick ("haptic" feedback is planned for the next generation, due from 2017). I'd guess that the co-pilot received few cues to the enormity of his error by way of his hand on the stick. It's not enough to *be* in control, you have to *feel* in control.

I've been intrigued by the psychology of man-machine interaction ever since I saw ergonomic research by IBM, back in the '80s, which showed that any computer process that takes longer than four seconds to complete without visual feedback makes a user fear that it's broken. That insight led to all the progress-bars, hour-glasses and spinning hoops we've become so tediously familiar with since. An obscure and controversial branch of psychology called Perceptual Control Theory (PCT) can explain such phenomena, by contesting the principal dogma of modern psychology, namely that we control our behaviour by responding directly to external stimuli (fundamental to both old-style Behaviourism and newer Cognitive versions). PCT says we don't directly control our behaviour at all: we modify our *perceptions* of external stimuli through subconscious negative feedback loops that then indirectly modify our behaviour.
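A toy sketch of the kind of loop PCT has in mind (my own illustration, not anything from the PCT literature) might look like this: there's no model of the outside world at all, just an action that keeps the perceived value close to an internal reference.

def control_step(perception, reference=0.0, gain=0.5):
    error = reference - perception      # how wrong the perception feels
    action = gain * error               # act so as to reduce that feeling...
    return perception + action          # ...and the perception shifts in response

felt_lean = 5.0                         # say, a felt lean of five degrees on a bicycle
for _ in range(20):
    felt_lean = control_step(felt_lean)
print(felt_lean)                        # steered back towards zero, no geometry ever computed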

A classic example might be riding a bike: you don't estimate the visual angle of the frame from vertical and then adjust your posture to fit, you minimise your feeling of falling by continuously and unconsciously adjusting posture. Similar mechanisms apply to all kinds of actions and learning processes, and I was easily convinced because from childhood I've always hated skating (roller and ice) and skiing, but I still love riding motorbikes at speed. There's no contradiction: when skating my feet feel out of control, whereas when biking they don't. Just some quirk of my inner ear. However, this all has some fairly important consequences for current debates about robotics, and driverless cars in particular.

The recent spate of celebrity doom-warnings about AI and robot domination is all directed against the current assumption that fully autonomous machines and vehicles are both desirable and inevitable. But maybe they're neither? The sad fate of AirAsia QZ8501 suggests both that over-reliance on the autopilot is severely reducing the ability of human crews to respond to emergencies, and also that it would be good to simulate the sort of mechanical feedback that pilots of old received through the stick, so they instinctively feel when they're steering into danger. All autonomous machines should be fitted, by law, with a full manual override that permits actual (or, grudgingly, simulated) mechanical control. Boosters of driverless cars will retort that computers react far faster than humans and can be programmed to drive more responsibly, which is quite true until they go wrong, which they will. Perhaps we need at last to augment Isaac Asimov's three laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
4. A robot must if instructed drop dead, even where such an order conflicts with the First, Second, and Third Laws.

[Dick Pountain believes that any application vendor whose progress-bar sticks at 99% for more than four seconds should suffer videoed beheading]

Friday, 22 April 2016

THE PYTHON HAS SPOKEN

Dick Pountain/Idealog 256/06 November 2015 13:09

One of the more frequent topics of this column has been my interest in programming, both as an important economic activity and, for me, a pleasing pastime: I refuse to call it a "hobby". My last such column was about Scratch, an innovative Lego-like visual programming language for teaching children (and adults) invented at MIT, and since then Kevin Partner has written an excellent PC Pro feature about Scratch (issue 253, p58, Nov 2015). In that column I confessed how quickly I'd become hooked by Scratch's blocky metaphor, despite some major limitations, and at the very end mentioned that some Scratchers in Berkeley had extended the language to remove these limitations, and called it Snap! It was more or less inevitable that some wet weekend would arrive when I'd take a deeper, non-cursory look at Snap!, and when it did I was re-hooked several times over.

Snap! looks pretty much like Scratch, similar enough to execute most Scratch programs with little or no alteration, but it adds several missing features that make it a more grown-up language. In place of Scratch's table-like arrays it has dynamic lists as first-class, named objects; it has local variables, not merely globals like Scratch; and it supports "continuations" that enable you to pass one block as a parameter to another block, a feature from advanced functional languages like Scheme and Haskell. Given its similarity to Scratch, learning Snap! was, er, a snap, and within a day I was looking for serious programs to convert. I settled on several I'd written years ago in Ruby, including a visual matrix calculator and a visual simulation of a simple eco-system called "Critters". Both went easily into Snap! and worked well, "Critters" ending up as one single smallish block, thanks to the Scratch/Snap! concept of "sprites" (animated screen objects), which took care of all the graphics with no coding on my part.

Anyhow, the point I'm working up to is that this conversion job made a deep impression on me. It may not have escaped you that programming languages have something in common with religions, in that they can attract fanatical adherents who become impervious to criticism. My casual flip from Scratch to Snap! was a sort of apostasy and I'd enjoyed it, so I began sifting through my other Ruby projects, but in the process discovered the wreckage of an abandoned Poker program in Python. I'd flirted with Python back in 2002 but quickly abandoned it in favour of Ruby, on almost entirely aesthetic grounds. In those days I was a fanatical object-orientation nut and I hated Python's OOP syntax, which involves prefixing just about everything with "self", something that Ruby managed to do without.

I've become more ecumenical since, and that weekend of Snap! coding had revived my enthusiasm for Lisp-style list programming, so to my own surprise I decided to finish that Poker program in Python rather than convert it. I downloaded a more recent version, WinPython 64-bit 3.4.3, and set to work. Instead of pursuing the pure OOP architecture I'd started with - card as a class, hand as a class, player as a class and so on - I rewrote the lower levels using Python's "tuples" instead. Each card is just a tuple (facevalue, suit), while a deck is a 52-piece tuple of cards. Soon I'd rewritten the whole thing using only tuples and lists, and never mentioned a class until I reached the player level. My code shrank to half its size, ran many times faster and, more importantly, it worked, which the original never properly had. (Crucial to this success was Python's "list comprehension" construct, a fiendishly clever one-line trick for building lists by pattern matching.)
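By way of illustration - a minimal sketch of the approach rather than my actual program - the whole lower level collapses into a few tuples and comprehensions:

import random

SUITS = ('C', 'D', 'H', 'S')
FACES = range(2, 15)                 # 11 to 14 standing in for J, Q, K, A

# each card is just a (facevalue, suit) tuple; one comprehension builds the deck
deck = [(f, s) for s in SUITS for f in FACES]

hand = random.sample(deck, 5)

# another comprehension is enough to pick out the pairs in a hand
faces = [f for f, _ in hand]
pairs = [f for f in set(faces) if faces.count(f) == 2]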

This taught me a serious lesson, one that might help you too. Rigidly adhering to a single methodology can be counterproductive: recognise which tools are right for each job and use them, regardless of dogma. Cards, decks and hands didn't need or deserve to be objects (no inheritance required), while players did, because they have many attributes like a hand, a bankroll, a strategy, a personality. When I wrote the original program in OOP style I'd actually obscured its structure, in favour of a class hierarchy the language imposed on me. I'll never stop claiming that good programming is an art, one in which an eye for structure is even more important than a head for logic. Factoring - that is, breaking down your code into smaller chunks in the best way - and choosing the right data structures are far closer to musical composition than they are to science. What a pity that, unlike mathematics, which occasionally spawns a Hollywood movie, beauty in programming can only ever be recognised by a handful of fellow programmers.


Biog: Dick = [Idealog for Idealog in PCPro if editor_approval == True]

