Hard(ware) Takeoff

Debates about AI takeoff often devolve into a weird sort of reference class tennis: is intelligence more like the weight of a boat or the distance a plane can fly? I don’t find these terribly informative. On the other hand, sometimes people draw comparisons between AI development and the evolution of primate brains. While this is better, it’s also unsatisfying because the precise details of how brains work are not well understood. What was supposed to be a source of evidence ends up being a way to rephrase pre-existing intuitions.

I think computers themselves are one of the more useful analogies here. They have the advantage of being built by people, so the precise details are better understood. And if all software is artificial intelligence, maybe we can learn something from the way application software has developed.

If you consider the state of software today, it doesn’t look much like a “hard takeoff” scenario. Capabilities are siloed between applications, and they advance gradually over time. When a new version comes out, it may be only slightly better than the old one—some people will probably complain that it’s worse. It’s not different in kind.

But if you look back to the early history of computers, something more discontinuous happened. For thousands of years, abacuses were the state of the art. Mechanical adding machines were an improvement over that. And then electronics came along; we got video games and spreadsheets and everything in between.

Abstractly, some sort of threshold was reached in computing hardware, beyond which many surface-level capabilities appeared in quick succession. And if you hit your computer with a hammer, all those capabilities would disappear at the exact same time. So perhaps different software capabilities are one and the same on some deeper level. In an important sense, they are. Video games and spreadsheets run on the same chips, with the same instruction sets, often written in the same programming languages compiled in the same way.

In another important sense, different pieces of software are entirely different. You’ll never install Microsoft Excel as an accidental side effect of installing Fortnite, and if you want to install it, your copy of Fortnite won’t help. A better description is that electronics provided a few key primitives that were necessary to construct a universal computer. Software is composed entirely of these primitives, but its nature depends on how they’re composed. A hobbyist with a spare afternoon can make a Turing-complete programming language, but not a triple-A video game, let alone the panoply of other software that makes computers interesting.
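To make the "spare afternoon" claim concrete, here is a minimal sketch: a complete interpreter for Brainfuck, a famously tiny language that is nonetheless Turing complete. (The interpreter below is my own illustration, not something from the original argument.) Eight single-character instructions over a tape of cells are enough for universal computation, and yet clearly nowhere near enough, on their own, to produce software anyone wants to use.

```python
def run(program: str, input_bytes: bytes = b"") -> bytes:
    """Interpret a Brainfuck program and return its output bytes."""
    tape = [0] * 30000          # working memory
    ptr = 0                     # data pointer
    pc = 0                      # program counter
    inp = iter(input_bytes)
    out = bytearray()

    # Pre-match brackets so loops can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    while pc < len(program):
        c = program[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(tape[ptr])
        elif c == ",":
            tape[ptr] = next(inp, 0)
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return bytes(out)

# The classic Brainfuck "Hello World!" program.
print(run("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---."
          "+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.").decode())
```

Universality is the easy part; everything people actually care about lives in what gets built on top of it.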

When hardware becomes versatile enough, complexity shifts to the software above it. This is a common dynamic underlying other “hard takeoffs”. In fact, it can happen multiple times to a system, such that what used to be software becomes the new hardware—like how the genetic code arose from the periodic table and in turn set the stage for cultural evolution. If you only look at the lower layers, you’ll miss the more complex ones, which are actually doing the work.

Deep learning has produced a slew of impressive results from systems that share simple and versatile ideas: layers, attention, gradient descent. That doesn’t mean that those ideas “are” intelligence. They’re more like the hardware for intelligence. What’s actually doing the work is the computations encoded in the weight matrices, which are diverse and complicated and messy. (In future systems, the computations might include ostensibly “general” concepts like tree search and Bayes’ theorem. If those were really the essence of AGI, we’d have had it 50 years ago.)
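As a rough illustration of that hardware/software split, here is single-head scaled dot-product attention in NumPy (my own sketch, not anything from the post). The recipe fits in a dozen lines and is the same generic scaffolding across models; what the model actually does is determined by the numbers inside `W_q`, `W_k`, and `W_v`, which here are just random placeholders standing in for trained weights.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, W_q, W_k, W_v):
    """x: (seq_len, d_model) -> (seq_len, d_k) mixture of value vectors."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(k.shape[-1])   # how strongly each token attends to each other token
    return softmax(scores) @ v                # weighted combination of values

rng = np.random.default_rng(0)
d_model, d_k = 16, 8
x = rng.normal(size=(5, d_model))             # five token embeddings
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(attention(x, W_q, W_k, W_v).shape)      # (5, 8)
```

Swap in different weight matrices and the same few lines compute something entirely different; the interesting structure is in the weights, not the recipe.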

General intelligence is not a single insight to be found with a blackboard and a laptop; it’s far more multifaceted. There’s no reason in principle that certain types of “intelligent” behavior can’t exist without others. But we can still expect to develop many of them at around the same time.