Why Intelligence Is So Hard to Define
It’s easy to come up with a definition of intelligence that works for everyday purposes. Intelligence is the factor that’s correlated with performance on a wide range of cognitive tasks. The thing Einstein had a lot of.
But for the purposes of AI research, an everyday definition is almost useless. For example, psychologists could tell you that working memory capacity predicts performance on a wide range of cognitive tasks. Does that mean working memory is the key to intelligence? No, not really. You can attach as much RAM as you want to your computer; it won’t wake up and develop a new theory of gravity. At best, we can say working memory enhances intelligence on the margin. It explains differences between individuals, but not why humans can do things that other animals and computers can’t.
If we want to build intelligence from scratch, we’ll need to stop being so anthropocentric. Here’s an attempt at a first-principles definition from Legg and Hutter (2007):
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
This definition applies to non-human minds, can be formalized mathematically, and mostly aligns with intuition. But it has a couple of major problems.
Perhaps the most obvious one, in the age of ChatGPT, is the focus on agentic behavior. By most accounts, LLM chatbots represent the cutting edge of AI. By an agent-centric definition, they’re hardly intelligent at all. The only goal they achieve (if you could even call it that) is writing good responses to natural language prompts.
This shortcoming has practical consequences: definitions like the one above have often been the basis for sloppy arguments about AI agents inevitably taking over the world. But agentic planning isn’t inevitable in AI, for the same reason infinitely long tapes aren’t inevitable in computer hardware: the Turing machine is the easiest model of computation to analyze, not a blueprint for the machines we actually build. Intelligence, like computation, can take many forms, and the form that’s easiest to formalize may not be the one that gets widely deployed.
Even if we are willing to restrict our domain to agents, there’s a bigger problem hiding in the phrase “wide range of environments”. Left informal, this phrase might have invited a counterargument from the no free lunch theorem: average uniformly over all possible environments, and every agent performs equally well. But Legg and Hutter weight environments by the negative exponential of their Kolmogorov complexity, so that’s not the problem here.
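For reference, the formalized measure in the paper looks roughly like this, where E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V^π_μ is the expected total reward an agent π earns in μ:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}$$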
The problem is that a “wide range of environments” isn’t what matters. What matters is real environments. Any AI system has to trade off between performance in different environments, so prioritization is key. A system that can conduct Nobel prize-winning research in a biology lab ought to be considered more intelligent than one that can win any number of simple but nonexistent computer games. The Kolmogorov complexity weighting doesn’t capture that; if anything, it points the other way, rewarding performance in the simplest environments most. To fix it, we’d have to begin our definition by specifying which environments are real, which amounts to describing all of human civilization. So much for being objective!
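To get a feel for the scale involved (with made-up complexity figures): suppose a simple arcade-style game can be specified in about a thousand bits, while any environment detailed enough to contain a working biology lab needs at least a billion. The ratio of their weights is then

$$\frac{2^{-K(\mu_{\text{game}})}}{2^{-K(\mu_{\text{lab}})}} \approx \frac{2^{-10^{3}}}{2^{-10^{9}}} = 2^{\,10^{9} - 10^{3}},$$

and since the value achievable in any single environment is bounded, the lab contributes essentially nothing to the sum. The score is dominated by the simplest environments.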
At this point we can see that a practical “definition” of intelligence will be more like a subjective judgment of whether a system can perform useful computational tasks. Is that even a scientific concept anymore?
Kind of. Useful tasks often do have something in common. But I’d argue the most important commonality isn’t about the tasks themselves, or the processes required to solve them. It’s that we perceive the processes as magic.
By this I mean we don’t understand what’s going on, so we reach for a catch-all explanation—just as a young child might not understand how an illusionist is performing their tricks, and conclude that it must be something about the wand. If the child got to look backstage, the magic would disappear; it would be replaced with new, distinct explanations.
As AI progresses, more and more types of human-like behavior can be precisely described. They stop being magical, so they stop being intelligent. In the ’60s one might have speculated that chess-playing and essay-writing abilities would emerge only from true AGI. That’s not what happened: they arrived decades apart, and the explanatory role of general intelligence disappeared along the way. Now we see a chess engine as tree search with some bells and whistles; we call language models stochastic parrots.
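To make “tree search with some bells and whistles” concrete, here’s a minimal minimax sketch in Python. The function names and the demo game are made up for illustration; a real engine adds alpha-beta pruning, transposition tables, opening books, and a heavily tuned evaluation function. The point is just that every step is mundane:

```python
# A toy minimax search: the skeleton of what a chess engine does, minus the
# bells and whistles. All names here are illustrative, not from any real engine.

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    """Best achievable evaluation from `state`, looking `depth` plies ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static evaluation at the search horizon
    child_values = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                evaluate, legal_moves, apply_move)
        for m in moves
    )
    return max(child_values) if maximizing else min(child_values)


if __name__ == "__main__":
    # Made-up two-player game: players alternately add 1 or 2 to a counter
    # for four plies; the first player wants the total high, the second low.
    best = minimax(
        state=0, depth=4, maximizing=True,
        evaluate=lambda s: s,
        legal_moves=lambda s: [1, 2],
        apply_move=lambda s, m: s + m,
    )
    print(best)  # 6: the maximizer adds 2 on its turns, the minimizer adds 1
```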
But for the quickly shrinking set of tasks where humans still have an advantage over machines, all we can see is ourselves performing them effortlessly, unaware of the distinct algorithms our brains are using. So they’re magic. For now.
AI makes sense as a field because “useful computational processes that we don’t understand” is actually a really good way to group together research projects: the techniques involved are often similar. But it’s a mistake—and a source of much confusion—to treat that as a group of fundamentally similar natural phenomena.