Tuesday 19 January 2021

END OF MOORE’S LAW (AGAIN)

 Dick Pountain/Idealog 312/15 Jul 2020 01:58

It’s now a good 11 years since I last wrote a column (Idealog 180) about the end of Moore’s Law, in which time the number of transistors on a chip must have grown at least 100-fold. As I said in that column (just as in the one four years earlier), predicting the end of Moore’s Law is a mug’s game. It’s a bit like predicting The Second Coming or the arrival of a Covid-19 vaccine. In a 1997 article for Byte I predicted that lithographic limits and quantum effects would flatten the curve below a 100 nanometre feature size, and I was only off by one order of magnitude, which almost counts as a win in this futile race. Intel’s latest fabrication plant, built to produce chips with a minimum 10 nanometre feature size, was very late indeed and only started delivering chips in 2019, five years after the previous 14nm generation.
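That 100-fold figure is easy to sanity-check (a sketch of my own, not a claim from the column): compound the classic Moore’s Law doubling period of 18 months, and the more conservative two-year version, over 11 years.

```python
# Back-of-envelope check: transistor count growth over 11 years,
# assuming doubling periods of 18 and 24 months.
years = 11

for months_per_doubling in (18, 24):
    doublings = years * 12 / months_per_doubling
    growth = 2 ** doublings
    print(f"{months_per_doubling}-month doubling: ~{growth:.0f}x")
```

The 18-month reading (roughly 161x) comfortably clears 100-fold; the strict two-year reading (about 45x) falls short, which is presumably why “at least 100-fold” implies the faster doubling rate.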

However, in the last few months a chorus of highly qualified commentators has been declaring that this time it’s for real: high-performance computing pioneer Charles Leiserson of MIT has remarked that “Moore’s Law was always about the rate of progress, and we’re no longer on that rate.” It’s not just those physical limits on feature size I was writing about, but economics too. The cost of building a new fab has been rising 13% year-on-year and is headed north of $16 billion, at precisely the time that Donald Trump, great tech entrepreneur that he is, is calling for US companies to bring chip fabrication back home as part of his trade war on the Far East. Only Intel, AMD and Nvidia can even contemplate the next lower level of feature size (and Nvidia’s not all that sure).

Of course reaching bottom in feature size doesn’t mean the end of all progress in computing power. The effect of Moore’s Law itself encourages software bloat - why bother writing efficient code if next year’s chip will speed up today’s crappy code? This is a problem waiting to be tackled: most of today’s commercial software could probably be sped up 100-fold by a decent rewrite. The catch is that rewriting code is almost as expensive as fab-building.

Parallelism looked like the solution for a long time, and it sort of was: even the cheapest mobile phones today use multi-core processors, and AMD is selling 16-core chips right now. The thing is, the more cores you add to a chip, the more of the silicon real estate gets eaten up by interconnect, and what’s worse, the model of parallelism employed for x86 family processors isn’t automatically exploitable by old software without a rewrite.

There are two hugely different groups of people who need the extra power of multiple cores: games vendors and AI developers. The former have the cash to rewrite their flagship games for each CPU generation. The latter need far more, and far more flexible, parallelism than these chips offer, and so are headed off along a path toward special-purpose processors (I wrote about some of their novel architectures in Idealog 301 after the 2019 CogX show). Such ‘IPUs’ can speed up the kind of massive matrix and convolution calculations performed during deep learning by several orders of magnitude - the problem is they can’t run Animal Crossing, or Google Chrome, or Microsoft Word. They are not general-purpose processors. They are however potentially incredibly lucrative, since they will eventually end up in every mobile phone, Alexa-style interface or self-driving vehicle. Venture capital is queuing up to invest in them just as mainstream processors begin to look less like a hot tip. Neil Thompson, an economist at MIT’s AI centre, has just written a paper called “The Decline of Computers as a General Purpose Technology” which gives you an idea of the drift.

Moore’s Law is underpinned by the scaling behaviour of CMOS fabrication technology, and it’s that scaling we’re approaching the end of. Professor Erica Fuchs, of the Department of Engineering and Public Policy at Carnegie Mellon University, worries that a successor technology with equally benign scaling properties, one that could maintain Moore’s Law for general-purpose chips, is as yet unknown and may take years of basic research and development to find, with no guarantee of success. Candidates might include carbon nanotubes, graphene transistors, spintronics or even the dreaded qubits, but none of these is an obvious replacement for CMOS scaling. She calls for a huge boost in public research funding to replace all the venture capital that’s being diverted into special-purpose AI chips. Unfortunately the colossal cost of the Covid-19 pandemic is likely to make that a very hard sell indeed, given that most politicians have little idea of what chips do at all, let alone the subtle distinctions between special- and general-purpose ones.

[Dick Pountain plans to leave it for 16 years before he writes another Moore’s Law column]
