We’re approaching the limits of computer power — we need new programmers now
Ever-faster processors led to bloated software, but physical limits may force a return to the concise code of the past
Way back in the 1960s, Gordon Moore, the co-founder of Intel, observed that the number of transistors that could be fitted on a silicon chip was doubling every two years. Since the transistor count is related to processing power, that meant that computing power was effectively doubling every two years. Thus was born Moore’s law, which for most people working in the computer industry — or at any rate those younger than 40 — has provided the kind of bedrock certainty that Newton’s laws of motion did for mechanical engineers.
There is, however, one difference. Moore's law is just a statement of an empirical correlation observed over a particular period in history, and we are reaching the limits of its application. In 2010, Moore himself predicted that the laws of physics would call a halt to the exponential increases. "In terms of size of transistor," he said, "you can see that we're approaching the size of atoms, which is a fundamental barrier, but it'll be two or three generations before we get that far – but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit."
We've now reached 2020, and so the certainty that we will always have sufficiently powerful computing hardware for our expanding needs is beginning to look complacent. Since this has been obvious for decades to those in the business, there's been lots of research into ingenious ways of packing more computing power into machines – for example, multi-core architectures in which a CPU has two or more separate processing units called "cores" – in the hope of postponing the awful day when the silicon chip finally runs out of road. (The new Apple Mac Pro, for example, is powered by a 28-core Intel Xeon processor.)
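To make the multi-core idea concrete, here is a minimal sketch (in Python, which the article itself does not use; all function names are illustrative, not from the article) of splitting one CPU-bound job across however many cores the machine has:

```python
# A minimal sketch of multi-core parallelism: one workload is split into
# chunks and the chunks are summed in parallel, one worker process per
# core, using Python's standard multiprocessing module.
import multiprocessing as mp

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- a stand-in for any CPU-bound task."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=None):
    """Split 0..n-1 into one chunk per core and sum the chunks in parallel."""
    workers = workers or mp.cpu_count()
    step = n // workers
    # The last chunk absorbs any remainder so every integer is counted once.
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with mp.Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    assert parallel_sum(1_000_000) == sum(range(1_000_000))
```

The point of the sketch is the limitation the article hints at: the extra cores only help if the programmer explicitly carves the work into independent chunks, which is harder than simply waiting for a faster chip.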
But computing involves a combination of hardware and software and one of the predictable consequences of Moore’s law is that it made programmers lazier. Writing software is a craft and some people are better at it than others. They write code that is more elegant and, more importantly, leaner, so that it executes faster. In the early days, when the hardware was relatively primitive, craftsmanship really mattered. When Bill Gates was a lad, for example, he wrote a Basic interpreter for one of the earliest microcomputers, the TRS-80. Because the machine had only a tiny read-only memory, Gates had to fit it into just 16 kilobytes. He wrote it in assembly language to increase efficiency and save space; there’s a legend that for years afterwards he could recite the entire program by heart.
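The claim that leaner code executes faster can be illustrated with a toy example (mine, not the article's, and in Python rather than the assembly language of the Gates story): two functions that produce identical output, one of which does quadratically more work than the other.

```python
# Two ways to build one string from many words. The wasteful version
# copies the whole accumulated string on every iteration (quadratic time);
# the lean version makes a single pass (linear time). Same answer, very
# different cost as the input grows.
def join_wasteful(words):
    out = ""
    for w in words:
        out = out + w + " "   # re-copies everything built so far
    return out.strip()

def join_lean(words):
    return " ".join(words)    # one pass over the words

words = ["lean", "code", "runs", "faster"]
assert join_wasteful(words) == join_lean(words) == "lean code runs faster"
```

On four words the difference is invisible; on a few million it is the difference between milliseconds and minutes, which is exactly the kind of craftsmanship the cheap-hardware era let programmers stop caring about.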
As Moore's law reaches the end of its dominion, the laws of software formulated by Nathan Myhrvold, Microsoft's former chief technology officer (most memorably, that software is a gas: it expands to fill its container), suggest that we basically have only two options. Either we moderate our ambitions or we go back to writing leaner, more efficient code. In other words, back to the future.
Is it time to stop writing and relying on sluggish frameworks and to start optimising our own code? Are we entering an era of thoughtful programming and highly skilled developers?