Sunday, 1 July 2012

BACK TO ANALOG?

Dick Pountain/02 October 1996/12:44 pm/Idealog 26

For anyone connected with the computer industry, it goes without saying that digital is better than analog. With a few notable exceptions digital technologies have been steadily displacing their analog predecessors for several decades now, to the extent that 'digital' has become almost a synonym for 'modern'. The few hold-outs like cinema film, TV and radio broadcasts are all poised to go digital very soon. Indeed the benefits of digital radio are so enormous, according to John Birt, that he's prepared to trash the world's best radio service to pay for them.

Few people ever question the innate superiority of digital technologies, which is just as well since if you do the answers are not at all clear cut. Before exploring the argument though I'd better establish exactly what we are talking about here. A digital technology is one which handles information divided into discrete chunks (eg. bits and bytes) whereas an analog technology works with continuously variable physical quantities (eg. voltages or pressures.) By and large the physical world we inhabit is analog. OK, quantum theory tells us that at the very smallest scale everything consists of indivisible quanta, but in everyday life we do not buy potatoes by the quantum but by the kilogram, and we might receive any quantity (say 1.98345 kg or 2.0345679 kg) depending upon the accuracy of the greengrocer's scales. Any digital representation of this analog world is just an approximation - you can represent a continuously variable quantity like weight or illumination only to an accuracy that depends on the length of the binary word you employ (eg. 16 bits, 32 bits.) The process of sampling or digitisation requires work and therefore costs energy.
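
As a rough illustration, here is a minimal Python sketch of that approximation; the 0-10 kg measuring range and the word lengths chosen are arbitrary assumptions, not anything from the column:

    # A value is snapped to the nearest of 2**bits levels spanning a fixed
    # range; the leftover gap is the quantisation error described above.

    def quantise(value, lo, hi, bits):
        levels = 2 ** bits
        step = (hi - lo) / (levels - 1)
        code = round((value - lo) / step)   # the integer that actually gets stored
        return lo + code * step             # the analog value it reconstructs

    for w in (1.98345, 2.0345679):          # the two potato weights above
        for bits in (8, 16, 32):
            q = quantise(w, 0.0, 10.0, bits)
            print(f"{w} kg at {bits:2} bits -> {q:.7f} kg (error {abs(w - q):.1e})")

Run it and the 8-bit column shows errors of a few grams while the 32-bit column is accurate to nanograms - the word length literally buys accuracy.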

The great advantage of digital representations over analog is that they are much more robust against noise. I can transmit the byte 01111010 with perfect accuracy so long as those ones remain just strong enough to peek above the background noise, but if I set 1.28934 volts on a line, noise will alter the value measured at the other end in unpredictable and cumulative ways. This vastly improved reliability fuelled a post-war revolution in digital electronics, and there could have been no computer industry without it. However the computer industry has now advanced to a point where we are using digital machines to mimic the real world more and more closely, and it's at this point that we at last start to feel the costs rather than just the benefits of digitisation. Some aspects of the real world are extremely costly to compute by digital means, and as a result analog computation is actually making a comeback. An analog computer is any machine that physically simulates the system whose behaviour you want to predict - a familiar example would be the old-fashioned clockwork timepiece, which is just a little mechanical simulation of the earth's rotation.
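
A small Python simulation of the contrast, with invented signal levels (0 V and 1 V for the bits) and an arbitrary 0.15 V noise spread:

    import random

    random.seed(1)                      # fixed seed so the run is repeatable

    def noisy(values, sigma=0.15):
        # additive Gaussian noise standing in for the channel
        return [v + random.gauss(0, sigma) for v in values]

    # Digital: the byte 01111010 sent as 0 V / 1 V levels and recovered by
    # thresholding at 0.5 V - the noise is simply discarded.
    sent = [0, 1, 1, 1, 1, 0, 1, 0]
    received = noisy([float(b) for b in sent])
    decoded = [1 if v > 0.5 else 0 for v in received]
    print("byte survived intact:", decoded == sent)

    # Analog: 1.28934 V arrives shifted by the same sort of noise, and there
    # is no threshold to snap it back - the error stays in the measurement.
    print("analog value measured as:", round(noisy([1.28934])[0], 5))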

If you can find a physical process that emulates your problem well enough, it can be dramatically more efficient than a digital algorithm, because you are not incurring the cost of digitising the data, computing on it and then converting back to an analog display. Here are a couple of silly examples. To divide by two, take a piece of paper whose length represents your number, fold it in half and measure it - your eyes and the paper's physical coherence perform the computation. To sort N numbers, cut rods of uncooked spaghetti to lengths proportional to the numbers, hold them all in your fist and press it onto the table top till their lower ends are flush - now extract them from the top in sorted order. The actual sorting (though not the cutting or extracting) takes place in parallel, requiring the same time regardless of N (if your fist is big enough.) This is better behaviour than any known sort algorithm for a sequential digital computer, the best of which take time proportional to N log N. Of course this spaghetti-based computer is awfully slow in absolute terms, but you can find equivalent properties of electrical circuits that are very fast indeed.
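
For the curious, a toy Python rendering of the spaghetti sort; note that this serial simulation of 'grab the tallest rod' is itself O(N²) - the constant-time magic lives entirely in the physics of the fist and the table:

    # One rod per number, length proportional to its value; the loop models
    # the hand repeatedly meeting whichever rod protrudes highest.

    def spaghetti_sort(numbers):
        rods = list(numbers)
        picked = []
        while rods:
            tallest = max(rods)      # in the real device this comparison is free
            rods.remove(tallest)
            picked.append(tallest)
        return picked[::-1]          # rods come off tallest-first, so reverse

    print(spaghetti_sort([5, 1, 4, 2, 3]))   # -> [1, 2, 3, 4, 5]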

A good friend of mine researches neural network chips at Caltech, alongside Carver Mead, the guru of the new analog electronics. Mead's famous dictum is that you should 'work with the physics', and his department uses this principle to build, among other things, silicon retina chips that mimic various capabilities of the human eye. A chip they designed a few years back solves the 'colour constancy' problem (eg. how does your eye see the same red apple under morning or evening light.) Edwin Land, inventor of Polaroid, devised the Retinex algorithm which solves this problem, but it involves subtracting a blurred version of an image from itself, which is horrendously expensive to compute digitally (of time order N⁴ for an N x N grid.) However it turns out that the currents flowing through an N x N grid of resistors will 'naturally' settle to the desired values, because of an analogy between Kirchhoff's Laws and retinal cells, and they do it in time ~N². So the Caltech team built an analog chip based on a resistive grid which works in real-time on the RGB video stream from a camera - there's no digitisation or clocking involved, just raw video in and colour-corrected video out.
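
To give a flavour of the Retinex step, here is a toy Python sketch that stands in iterative relaxation for the resistor mesh (which reaches its steady state physically, and effectively instantly); the grid size, iteration count and test image are all invented:

    # Each pass replaces every cell with the mean of its neighbours, a slow
    # software stand-in for the resistive grid settling to equilibrium.
    def grid_blur(img, iters=50):
        n = len(img)
        for _ in range(iters):
            nxt = [row[:] for row in img]
            for i in range(n):
                for j in range(n):
                    neigh = [img[x][y]
                             for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                             if 0 <= x < n and 0 <= y < n]
                    nxt[i][j] = sum(neigh) / len(neigh)
            img = nxt
        return img

    n = 8
    image = [[float((i + 2 * j) % 5) for j in range(n)] for i in range(n)]
    blurred = grid_blur(image)
    # The Retinex step itself: subtract the blurred image from the original.
    corrected = [[image[i][j] - blurred[i][j] for j in range(n)] for i in range(n)]
    print([round(v, 2) for v in corrected[3]])

Every software pass over the grid costs N² cell updates, and many passes are needed, which is exactly the cost the resistor mesh makes vanish.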

The human brain itself certainly works on similar analog principles, using physical effects like the diffusion of ions and the permeability of membranes to perform its computations. Despite having a signalling speed of just a few feet per second, it can still trounce a 300MHz digital CPU on anything except straightforward arithmetic speed. Carver Mead has calculated that human neurons consume on the order of 10⁻¹⁶ joules of energy per operation, but that no digital device will ever use less than 10⁻⁹ joules/operation, the difference being the price of digitisation.
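
For scale, Mead's two figures put that gap at seven orders of magnitude - a trivial Python check:

    neuron = 1e-16          # joules per operation, Mead's figure for a neuron
    digital_floor = 1e-9    # his claimed lower bound for any digital device
    print(f"energy ratio: {digital_floor / neuron:.0e}")   # -> 1e+07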

