Thursday 8 June 2017

PYTHON RHYTHM

Dick Pountain/ Idealog 270/10 January 2017 10:54

I may have mentioned here a few dozen times how deeply I'm into music. I play guitar, bass and dobro (for amusement rather than reward); I regularly attend chamber and orchestral concerts at South Bank and Wigmore Hall; I listen to classical, jazz, blues, rock, bluegrass, country, dubstep, reggae, EDM and much, much more on Spotify, both at home and on Bluetooth earbuds while walking on heath or park. But on top of all that, for the last 20-odd years I've been working on my own system of computerised composition.

It started in the mid-1990s, when I used Turbo Pascal to write my own music library that let me generate MIDI files from Pascal programs and play them on any General MIDI synth. Before long, though, the memory-management limitations of PC-DOS became an obstacle to writing big programs. I've picked up and dropped this project many times over the intervening years: planned and failed to rewrite it in Ruby, planned and succeeded in rewriting it in the blessed Python. And I recently cracked a couple of remaining knotty problems (both related to the grim unfriendliness of the MIDI protocol) to produce a system that automatically generates tunes which sometimes sound convincingly like human music. A sort of musical Turing Test: it isn't intended as yet another tool to help you write pop or dance music, because there are plenty of good ones already, like Ableton and Sibelius.

Instead my system lets me dissect music into its atomic parts, then reassemble them to generate tunes few humans could play. One of the few real regrets of my life is that as a child I turned down the offer of piano lessons, and therefore didn't learn to sight-read musical notation at the best age. Instead I took up guitar in my teens and taught myself blues and ragtime by listening to records. This has had a profound effect on my understanding of music because the guitar is a fundamentally chromatic instrument: each fret is one semitone so the difference between white and black notes means nothing. I did painfully teach myself to read notation eventually but I'm far from fluent, and more to the point it actively irritates me. I now think chromatic.

Hence when designing my system I decided not to make the 'note' its fundamental data structure, but rather - like MIDI itself - to treat pitch, time, duration and volume as separate musical 'atoms'. Of course it's trivial in Python to strap these atoms together as (pitch, time, duration, volume) tuples and then manipulate those as units, and doing so lets me translate ASCII strings into sequences of these atoms. The process of 'composing' a tune then becomes a matter of writing strings that define melody, rhythm and dynamics, each of which I can alter separately. More significantly, the *program* can manipulate them - splicing and chopping, applying functions, adding randomness - say to change the rhythm or the timing of the same melodic fragment.
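To make the idea concrete, here's a minimal sketch of the atoms-as-tuples approach - my own illustrative Python, not Dick's actual library, with invented function names:

```python
# Sketch of treating pitch, time, duration and volume as separate "atom"
# streams, zipped into (pitch, time, duration, volume) note tuples.

def make_notes(pitches, times, durations, volumes):
    """Strap the four atom streams together into note tuples."""
    return list(zip(pitches, times, durations, volumes))

def retime(notes, new_times):
    """Swap in a new rhythm while leaving melody, duration and dynamics alone."""
    return [(p, t, d, v) for (p, _, d, v), t in zip(notes, new_times)]

melody = make_notes([60, 62, 64, 65],      # MIDI pitches: C D E F
                    [0, 480, 960, 1440],   # onset times in MIDI ticks
                    [480, 480, 480, 480],  # durations in ticks
                    [90, 90, 100, 80])     # velocities (volume)

swung = retime(melody, [0, 360, 960, 1320])  # same tune, different rhythm
```

Because each atom stream is independent, the same `retime` trick applies equally to any of the four parameters: a second helper could swap velocities to change dynamics while the melody stands still.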

All the compositional tricks - fugues, canons, rondos, arpeggios - are easy to achieve and automate. Play the same fragment in a different scale - major, minor, cycles of thirds, fourths, fifths, wholetone, Bartok's Golden Section pentatony - by choosing from a tuple of scale functions. I can even create new scales on the fly using lambda function parameters. It's strictly a system for programmers, interfaced via a single Python function called 'phrase' that takes five ASCII strings as parameters. Each invocation, like:

S.phrase(1, 1, scale[mx], Key[2], Acoustic_Bass, S.nSeq(inv(p),t,d,v,m))

writes a sequence of notes, length defined by those strings, to one MIDI track. Programs tend to be short, forty or fifty lines with lots of loops. Using ASCII permits silly tricks like turning a name, let's say Donald Trump, into a tune (and yes, I have, as a sinister bass riff). MIDI is quite limiting of course - the available instruments aren't that great - so any tunes I like I whisk into Ableton and play them with proper samples into an MP3 file.
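The name-to-tune trick, combined with scales-as-functions, might look something like this sketch - the scale lambdas and the letter mapping are my own assumptions for illustration, not the column's actual code:

```python
# Map each letter of an ASCII string onto a degree of a scale supplied as a
# function, producing a sequence of MIDI pitch numbers.

C_MAJOR = lambda degree: 60 + [0, 2, 4, 5, 7, 9, 11][degree % 7]   # major on middle C
WHOLETONE = lambda degree: 60 + 2 * (degree % 6)                   # wholetone on C

def name_to_pitches(name, scale):
    """Turn a name into pitches: 'a'..'z' become scale degrees 0..25."""
    return [scale(ord(c) - ord('a')) for c in name.lower() if c.isalpha()]

riff = name_to_pitches("Trump", C_MAJOR)   # a sinister bass riff in embryo
```

Swapping `C_MAJOR` for `WHOLETONE` (or any lambda built on the fly) replays the same name in a different scale - the choose-from-a-tuple-of-scale-functions idea in miniature.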

Though I do like some electronic dance music (especially Brandt, Brauer, Frick) that's really not what this system is meant for. I'm more into the sort of jazz/classical fusions that mess with tricksy rhythms and harmonies that would be hard to play on real instruments. Everyone worries nowadays that robots are going to put them out of a job, but neither Adele nor Coldplay need have any fear on my account. In any case my competitors would more likely be software suites developed at France's IRCAM and INRIA, and they needn't panic either. And I don't intend to build a graphical user interface, because for me writing program code is just as much fun as playing with music.

[Dick Pountain would like to offer you a jaunty little tune based on the words BREXIT and TRUMP, by way of demonstration: file brexit bounce.mp3]

CTRL-ALT-DERIDE

Dick Pountain/Idealog 269/05 December 2016 10:47

I got my first inkling of the potential political power of social media back in 2007, on receiving a Facebook friend request from Barack Obama. I was momentarily surprised since I'd never met the man and at that time barely knew who he was. I quickly realised though that it wasn't from Barack himself but rather from a bunch of young IT smart-asses at his campaign headquarters who'd learned how to scrape the content of FB posts to compile a list of people who might be sympathetic to his cause (though scraping clearly couldn't tell them whether said people had a vote in the USA). It seemed to me then entirely predictable that it would be Democrats, young and "progressive", who first learned how to play the social media, rather than those old, rigid, luddite, perhaps religious Republicans. Oh boy, how things change...

I expended quite a few typing-finger joules during the autumn of 2016 on Facebook, pointing out to certain American friends that the news stories (almost all involving Hillary Clinton) they were commenting on were bogus - affiliated to a number of Far-Right websites that they could easily have checked on Google - and that commenting merely promoted this bogosity to the top of their friends' news feeds. I'd already noticed, over the previous year or so, while loitering among the Guardian's Comment is Free forums (another bad habit), that whenever the subject of Putin's Russia was mentioned, a small army of trolls with slightly odd English syntax would appear within minutes, as if from a hollow tree trunk. It began to feel as though something organised was going on.

Turns out my gut feeling was right. Since Donald Trump's surprise election win (not that surprising to me in fact) the UK newspapers have been frothing about the Post-Truth era, Macedonian teenage lie-mongers and the rise of Fake News sites, but embedded among all this froth was an alarming nugget of real information. On December 4th 2016 an Observer story by Carole Cadwalladr alerted me to a research project by Jonathan Albright, assistant professor of communications at Elon University in North Carolina, into the interconnectivity of Far-Right websites which he calls the "The #Election2016 Micro-Propaganda Machine" (https://medium.com/@d1gi/the-election2016-micro-propaganda-machine-383449cc1fba#.gp86cg9ns).

Albright described this phenomenon as an "influence network that can tailor people's opinions, emotional reactions, and create 'viral' sharing (LOL/haha/RAGE) episodes around what should be serious or contemplative issues". Or, more succinctly, "data-driven 'psyops'". He crawled and indexed 117 sites known to be associated with the propagation of fake news, and produced a mind-boggling diagram of the outgoing links from them, which create a vast alternative web that feeds back into the mainstream web via YouTube, Facebook, Twitter, Google and others (diagram at https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook#img-4 if the whole article is TLDNR for you). The builders of this network appear to have a masterly grasp of SEO and of tweaking site linkages, which to judge from my own experience is now way in excess of anything Democrats or Independents are able to muster.
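The raw material of that diagram is nothing exotic: pull the outgoing hyperlinks from each page and tally which external domains they point at. A toy stdlib-only sketch of that step (the HTML snippet and domain names are invented for illustration; Albright's actual tooling is not described in detail):

```python
# Extract anchor hrefs from a page and count the external domains they
# target - the edges of a site-to-site link graph.
from html.parser import HTMLParser
from urllib.parse import urlparse
from collections import Counter

class OutlinkParser(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def external_domains(html, own_domain):
    """Count links that leave own_domain; relative links have no netloc and drop out."""
    parser = OutlinkParser()
    parser.feed(html)
    domains = [urlparse(link).netloc for link in parser.links]
    return Counter(d for d in domains if d and d != own_domain)

page = ('<a href="http://fakenews.example/x">1</a>'
        '<a href="http://fakenews.example/y">2</a>'
        '<a href="/local">3</a>')
print(external_domains(page, "mysite.example"))  # Counter({'fakenews.example': 2})
```

Run over 117 seed sites, the resulting counters become the weighted edges of exactly the sort of graph Albright visualised.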

The size and effectiveness of this network raises some important questions for the future of our industry (and indeed whole society). Back in Idealog 267 I commented on the fact that the big five IT companies, through their sheer size and reluctance to pay taxes, may soon find themselves at odds with the federal government of the USA. That possibility is now magnified many times with the Republicans in command of both houses and well aware of the Democratic leanings of most Silicon Valley moguls. How much are you prepared to bet on whether the big five will resist all attempts to curb their independence, or will *reluctantly* bend the knee? And what happens if this new government with deep connections to the Alt Right gets a hold of that mighty untruth network?

Politicians have of course been telling lies ever since the Trojan War, but persistent and systematic lying is a relatively recent phenomenon which you might date from Karl Rove's famous quote about the Iraq War: "We're an empire now, and when we act, we create our own reality". In my own Naturalistic philosophy (read all about it at https://www.amazon.co.uk/dp/B00CORF62O) the world is made of atoms but we can't see them directly, just information about them sampled by our various sense organs. This information isn't always "correct": we can mistake things, imagine things, hallucinate, dream, play computer games. The internet is a perfect conduit for such information, for pictures of pizzas rather than pizzas you can actually eat. My definition of a Truth is information that actually corresponds to the action of some clumps of atoms in the real world at a certain time and place, and Albright's Micro-Propaganda Machine, once in malign hands, poses a real threat to the dissemination of such Truths.

[Dick Pountain is finding it very hard to imagine the lunchtime conversation between Donald Trump and Mark Zuckerberg]

BITS, ATOMS AND ELECTRONS

Dick Pountain/Idealog 268/08 November 2016 11:13

In a recent review of Amazon's Echo I read that "voice assistants have been on our phones for a long time, but they haven't really taken off" (until Alexa that is). The part of this claim that stopped me dead was that phrase "long time". Apple's Siri arrived with iPhone 4S in 2011 while Microsoft's Cortana was rolled out in 2015, so it would seem that five years is a "long time" nowadays. And it's true. Technology progresses, if perhaps not exponentially then according to some power law, so five years really can now be a whole generation. Leaps from CDs to downloads, downloads to streaming, keyboards to touch screens to voice: for millions of youngsters those "older" techs, if known at all, seem prehistoric.

Many commentators would proceed from there to discuss the effect of such rapid change on the human psyche, but I propose to skip all that (except for a throw-away aperçu that this crazy pace is partly to blame for that widespread disgruntlement displayed via Brexit and Trumpism). I'm actually more interested in how sustainable this pace is. There is of course one popular strand of opinion that answers this question with "for ever and ever", until we have robots smarter than us, we live in virtual worlds, and all our minds are connected together into some kind of singularity. I don't believe a word of that, perhaps because I bored myself with science fiction back in the '60s. I'd argue instead that although progress will continue, it's already diverting in a different direction, from bits back toward atoms.

I first put Bits v Atoms in a column title over 20 years ago (PC Pro issue 22), when observing that you can order a pizza via the web but you can't eat the picture of a pizza on the web. A platitude that remains true, though you can nowadays order a pizza and have it delivered to your door by Deliveroo, Just-Eat, Hungryhouse, GrubHub or a dozen similar sites. We're made of atoms so we need to eat atoms, and though we pay for those atoms nowadays in electrons (credit card transactions), those electrons only have value thanks to the atoms they can buy. Cars, houses, boats, yachts, private jets - these are the things the owners of our social media get out of the game, not pictures on screens (and most of us would like some too). This inexorable logic means that over the long term, information - bits, pictures on screens - can only decrease in value compared to atoms, and we're already beginning to see the effect of this on the internet giants. IT and the internet may have reduced the cost of distributing bits almost to zero, but they've barely started on reducing the cost of distributing atoms/things.

On the one hand Twitter - which is still losing *billions* rather than millions - just had to axe its video-sharing service Vine. On the other hand Netflix and Amazon have both started creating their own original digital content, which surely contradicts my thesis. But does it really? Already I'm reading in the business sections that both companies are getting nervous about the enormous cost of producing this content, and I'd guess that both are doing it only as a long-term strategic attack on Hollywood and the over-air and cable TV companies. Once (and if) they manage to slay those, pillage their audiences and archives, it will surely be more profitable to revert to recycling that vast archive, rather than pay for expensive new content. The makers of the great movies of the 20th century had no choice but to spend all that money: given a choice, a canny modern investor won't.

There is a way though, admittedly a wildly eccentric way, that I can see for the internet of the far future to remain economically viable. Some rather loud hiccups notwithstanding, BitCoin has demonstrated that it can fulfill the role of paying for stuff in the atom world. Also, the notion of a Universal Basic Income is starting to be taken seriously in some rather surprising political quarters. Now BitCoin imparts value to its electrons through scarcity, by the process of "mining" them using a horrendously intractable algorithm running on a very fast computer. Fine, let's invent a smallish computer that will be fitted into every household like a utility meter, which contains a whole stack of fast GPUs that are continuously mining something resembling BitCoin - the amount yours uncovers becomes your Universal Basic Income. What's more, these GPUs are water-cooled, so it's also your central heating boiler! Voilà, energy and money integrated into the same system that's distributed via existing infrastructure. OK, so the banks and electricity companies won't like it much. Boo hoo.  
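The scarcity mechanism behind that scheme - the "horrendously intractable algorithm" - is proof-of-work: keep hashing until a digest falls below a difficulty target. A stripped-down toy version (real Bitcoin mining uses double SHA-256 over block headers and vastly higher difficulty; this just shows the brute-force search):

```python
# Toy proof-of-work: find a nonce whose SHA-256 digest, read as a big
# integer, falls below a target. Halving the target doubles the expected work,
# which is how mining imparts scarcity.
import hashlib

def mine(data: str, difficulty_bits: int) -> int:
    """Search for a nonce whose hash has difficulty_bits leading zero bits."""
    target = 1 << (256 - difficulty_bits)    # smaller target = harder work
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine("household-meter-block-1", 12)  # ~4096 hash attempts on average
```

Every household "utility meter" grinding away at this search is doing real, measurable, heat-producing work - which is precisely what would let its output double as both currency and central heating.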


CLOUDS ON THE HORIZON?

Dick Pountain/Idealog 267/05 October 2016 11:05

Last month PC Pro ed Tim Danton asked us for masthead quotes on "Which technology brand have you recommended most over the years, and why?" My reply was "Google, since their cloudy goodness now serves most of my computing needs". At the time that felt like a rather dull answer, but ever since Google cluster-bombed the industry with new products on 5th October I feel pretty damned smug (and prepared to pretend I had inside knowledge). 

That answer was based on the fact that Google has been handling my email, calendar and contacts in its cloud for many years now, thus permitting me to access them from any device and platform. More recently I've found that the ever-increasing capabilities of Google Keep let me use it for much of my data storage too. Don't be fooled by its Tonka Toy appearance, its combination of full-text search, colour coding and labelling makes it the most flexible text database I've found, plus it's multiplatform and its voice recognition is effective enough to enable dictation of notes. When shopping in Sainsbury's my grocery list is a Keep widget on my Android phone's home screen, and all the outlines and notes for this column go into Keep via my Lenovo Yoga laptop. I don't use Google Docs much myself, preferring Dropbox, but I often receive long documents that way from others I work with (including PC Pro).  

Google could in theory let me realise the perennial dream of a single box that does everything, given a large enough phone (I refuse to use the ph**let word) or top-end Chromebook, but actually it's achieved the opposite. Since I can access my data from anywhere I use five different boxes: Lenovo Windows laptop, Asus Android tablet, HTC Android phone, Amazon Kindle 4 and a Sony WX350 compact camera (for me phones make lousy cameras, regardless of resolution). As for those other four boxes, it's all about screens. I use the tablet most, to read and answer email, run Citymapper when I need to plan a trip across London, but mostly for searching Wikipedia and Google while I'm reading. And my reading is done either in paper books or on Kindle - I deliberately keep a vintage, non-backlit, non-touchscreen version 4 because I prefer to read by natural light and only want pages to turn when I say so. Nowadays I tend to request review books in Kindle format so I can make notes and highlight quotes as I read, then cut-and-paste those from Windows Kindle Reader on my Yoga straight into Libre Office. 

Since so much of my work and personal data now resides in Google's ecosystem, doesn't that dependency worry me? In some ways yes, but perhaps not ways you'd expect. I feel safer with Google than I would with Apple for all kinds of reasons: Apple has an even worse record than Google for high-handed treatment of its users, and I'm no devotee of its cult of shiny things. Both Amazon (via Kindle and Fire tablets) and Facebook would love to create ecosystems as rich as Google's, but they will be a long time coming and I don't trust either company that much. So the question for me really is, what's the future for *all* of these giant IT empires? 

A snippet from The Week's US business pages caught my eye recently, reporting that for the first time ever the five most valuable companies in the world by market capitalization are precisely these US tech companies: Apple, Alphabet (i.e. Google), Microsoft, Amazon and Facebook. The amount of US corporation tax they avoid every year and the trillions of dollars they have stashed in overseas tax havens would transform the Federal budget if repatriated and taxed properly, providing, on an ongoing basis, enough to reform both the US health and educational systems. They all have turnovers comparable to the GDPs of many small countries. They possess competences that would be invaluable if applied to government. They are, in many respects, like mini-states themselves, the most notable missing component being that they don't have armies. This being the case, my extensive reading (especially of Machiavelli and Hobbes) suggests that the real state, that is the US Federal Government, cannot forever tolerate their current behaviour, nor resist the rich pickings that they flaunt. Sooner or later they'll cross some invisible line - Apple nearly got there by refusing the FBI's request to crack that terrorist iPhone - a serious confrontation will arise, and they will discover what all such aristocracies throughout history have eventually learned, namely that you really do need an army. I don't pretend to know when or how it will happen, or with what result. I'm 71, I'll just keep using Google anyway...

SOCIAL UNEASE

Dick Pountain /Idealog 350/ 07 Sep 2023 10:58

Ten years ago this column might have listed a handful of online apps that assist my everyday...