Tuesday 14 April 2020

INTELLIGENCE ON A CHIP

Dick Pountain/ Idealog 301/ 4th August 2019 12:59:28

I used to write a lot about interesting hardware back in the 1980s. I won a prize for my explanation of HP’s PA-RISC pipeline, and a bottle of fine Scotch from Sir Robin Saxby for a piece on ARM’s object-oriented memory manager. Then along came the 486, the CPU world settled into a rut/groove, and Byte closed. I never could summon the enthusiasm to follow Intel’s cache and multi-core shenanigans.

Last month’s column was about the CogX AI festival in King’s Cross, but confined to software matters: this month I’m talking hardware, which to me looks as exciting as it did in that ‘80s RISC revolution. Various hardware vendors explained the AI problems their new designs are intended to solve, which are often about ‘edge computing’. The practical AI revolution that’s going on all around us is less about robots and self-driving cars than about phone apps.

You’ll have noticed how stunningly effective Google Translate has become nowadays, and you may also have Alexa (or one of its rivals) on your table to select your breakfast playlist. Google Translate performs the stupendous amounts of machine learning and computation it needs on remote servers, accessed via The Cloud. On the other hand, stuff like voice or face recognition on your phone is done by the local chipset. Neither running Google Translate on your phone nor running face recognition in the cloud makes any sense, because bandwidth isn’t free and latency is more crucial in some applications than in others. The problem domain has split into two – the centre and the edge of the network.
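To make the trade-off concrete, here’s a back-of-envelope comparison in Python; every figure is a made-up round number for illustration, not a measurement of any real service.

# Back-of-envelope illustration of the centre-versus-edge split; all
# numbers below are assumptions, not measurements.
cloud_round_trip_ms = 80.0    # assumed mobile-network round trip to a data centre
cloud_inference_ms = 5.0      # assumed compute time on a big server-side model
edge_inference_ms = 30.0      # assumed compute time on the phone's own chipset

print("via the cloud:", cloud_round_trip_ms + cloud_inference_ms, "ms")
print("on the phone: ", edge_inference_ms, "ms")
# For face unlock the network round trip (and the bandwidth it costs) dwarfs
# any saving from a faster server, so the edge wins; a translation model too
# big to fit on the phone has no choice but to live in the centre.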

Deep learning is mostly done using convolutional neural networks that require masses of small arithmetic operations to be done very fast in parallel. GPU chips are currently favoured for this job since, although originally designed for mashing pixels, they’re closer to what’s needed than conventional CPUs. Computational loads for centre and edge AI apps are similar in kind but differ enormously in data volumes and power availability, so it makes sense to design different processors for the two domains. While IBM, Intel and other big boys are working to this end, I heard two smaller outfits presenting innovatory solutions at CogX.
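As a rough sketch of what that workload looks like, here is a deliberately naive convolution in Python – nobody’s production code, just the textbook operation – which makes the flood of small multiply-accumulate operations visible:

# Minimal sketch: the multiply-accumulate arithmetic at the heart of a
# convolutional layer, written naively to show the volume of small operations.
import numpy as np

def conv2d_naive(image, kernel):
    """Slide a small kernel over an image, one multiply-add window at a time."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output pixel needs kh*kw multiplies and adds; a real
            # network does billions of these, which is why massively
            # parallel hardware (GPU, IPU) is used instead of a plain CPU.
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

image = np.random.rand(224, 224)           # one greyscale input plane
kernel = np.random.rand(3, 3)              # one 3x3 filter
print(conv2d_naive(image, kernel).shape)   # (222, 222)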

Nigel Toon, CEO of Cambridge-based Graphcore, aims at the centre with the IPU-Pod, a 42U rack-mounted system that delivers 16 PetaFLOPs of mixed-precision convolving power. It holds 128 of Graphcore’s Colossus GC2 IPUs (Intelligence Processing Units), each delivering 500TFlops and running up to 30,000 independent program threads in parallel in 1.2GB of on-chip memory. This is precisely what’s needed for huge knowledge models, keeping the data as close as possible to the compute power to reduce bus latency – sheer crunch rather than low power is the goal.

Mike Henry, CEO of US outfit Mythic, was instead focussed on edge processors, where low power consumption is absolutely crucial. Mythic has adopted the most radical solution imaginable, analog computing: their IPU chip contains a large analog compute array, local SRAM memory to stream data between the network’s nodes, and a single-instruction multiple-data (SIMD) unit for processing operations the analog array can’t handle. Analog processing is less precise than digital – which is why the computer revolution was digital – but for certain applications that doesn’t matter. Mythic’s IPU memory cells are tunable resistors in which computation happens in-place as input voltage turns to output current according to Ohm’s Law, in effect multiplying an input vector by a weight matrix. Chip size and power consumption are greatly reduced by keeping data and compute in the same place, hence wasting less energy and real-estate in A-to-D conversion. (This architecture may be more like the way biological nerves work too).
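The principle is easy to simulate digitally. The sketch below makes no claim to reflect Mythic’s actual circuit – it’s purely illustrative – but it shows how Ohm’s Law and summed column currents amount to a vector-matrix multiply:

# Illustrative simulation only: an analog crossbar computes a vector-matrix
# multiply because Ohm's Law (I = V * G) does the multiplications and the
# summing of currents down each column does the additions.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 1.0, size=(8, 4))   # tunable conductances G: the stored weight matrix
inputs = rng.uniform(0.0, 1.0, size=8)         # input voltages V: the activation vector

# Each cell passes current I = V * G; currents on a column add up, so the
# column totals are exactly the dot products of a multiply-accumulate array.
currents = inputs @ weights

# Analog parts drift and are noisy - the precision trade-off mentioned above -
# so model that crudely with a little Gaussian noise on the readout.
noisy = currents + rng.normal(0.0, 0.01, size=currents.shape)

print(currents)   # ideal result of the vector-matrix multiply
print(noisy)      # what an imprecise analog array might actually read out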

To effectively use the extra intelligence these chips promise we’ll need more transparent user interfaces, and the most exciting presentation I saw at CogX was ‘Neural Interfaces and the Future of Control’ by Thomas Reardon (who once developed Internet Explorer for Microsoft, but has since redeemed himself). Reardon is a pragmatist who understands that requiring cranial surgery to interface to a computer is a bit of a turn-off (we need it like, er, a hole in the head) and his firm Ctrl-Labs has found a less painful way.

Whenever you perform an action your brain sends signals via motor neurons to the requisite muscles, but when you merely think about that action your brain rehearses it by sending both the command and an inhibitory signal. Ctrl-Labs uses a non-invasive electromyographic wristband that captures these motor neuron signals from the outside and feeds them to a deep-learning network – their software then lets you control a computer by merely thinking the mouse or touchscreen actions without actually lifting a finger. Couch-potato-dom just moved up a level.
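As a toy illustration of that signal path – emphatically not Ctrl-Labs’ software, with made-up weights standing in for a trained network and a hypothetical set of commands – the flow looks something like this:

# Toy sketch of the wristband-to-command pipeline described above.
import numpy as np

rng = np.random.default_rng(1)

# Pretend wristband data: 16 electrodes sampled over a 200-sample window.
emg_window = rng.normal(0.0, 1.0, size=(16, 200))

# Crude per-channel feature: root-mean-square amplitude of each electrode.
features = np.sqrt((emg_window ** 2).mean(axis=1))    # shape (16,)

# Tiny two-layer network with made-up weights; in reality these would be
# learned from labelled recordings of real motor-neuron activity.
W1, b1 = rng.normal(size=(32, 16)), np.zeros(32)
W2, b2 = rng.normal(size=(4, 32)), np.zeros(4)

hidden = np.maximum(0.0, W1 @ features + b1)          # ReLU layer
scores = W2 @ hidden + b2                             # one score per action

actions = ["click", "scroll", "swipe", "rest"]        # hypothetical command set
print(actions[int(np.argmax(scores))])                # the 'thought' command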
