All Software Is Artificial Intelligence

Looking back on the history of computing always gives me a strange mix of feelings. First is amazement at how far we’ve come: from mainframes to PCs to smartphones, from command prompts to GUIs to speech recognition that actually works pretty well. Just about any graph you can draw shows incredible orders-of-magnitude growth.

At the same time, the vision of early computing pioneers is one we’ve fallen short of in many ways. There was plenty of inspiring talk about using computers to extend our minds, to think deeper thoughts and solve bigger problems. We’ve done some of that. But often, the experience of knowledge work on a computer is rather different: not so much extending the mind as wrestling with a mess of pre-defined abstractions.

By most accounts we have an abundance of software, addressing diverse needs and easier than ever to access. But with serious use, virtually any piece of software turns out not to be quite right. An important feature is missing, or buried in layers of menus. Data is forced into an unhelpful structure. Importing and exporting are cumbersome. Alas, software is distributed in a sealed package with little room for end-user customization. Take it or leave it.

In theory, programmers can create and modify software to meet their own needs. In practice, this is so complicated that it’s only rarely worth it. So even programmers are left waiting for Big Tech to build an entire software ecosystem around their workflow. Don’t hold your breath, though. Big Tech is busy with a new way to send 140-character messages to your friends. Forget flying cars—we didn’t even get a bicycle for the mind.

What happened? Why is it that, whether I’m running a minimalist UNIX environment or forking my data over to some billion-dollar startup, software feels so disappointingly constrained?

AI researchers have learned a lot about disappointment and constraints. If you set aside your hindsight and imagine going back to 1970, the field appears to be on the verge of a breakthrough. There are convincing demos of programs that can understand natural language and reason about its meaning. Human-level intelligence can't be far off, right? But we know how the story goes: this was an illusion. Early AI projects only worked in the simplified world of demos, unable to cope with the nuance of the real one. When people tried applying the same techniques to more difficult problems, the illusion fell apart.

It’s at first tempting to say that these efforts were on the right track, but missing a key piece. Perhaps we need to add a few more rules for this edge case. Perhaps some deep mathematical insight will better capture the essence of reasoning. This is just another part of the illusion. No matter how hard we look, we’re blind to the complexity of what goes on in our own minds. We can’t count all the facts or linguistic structures that a program will have to understand before it’ll pass as intelligent. In terms of concepts we’re familiar with, AI seems like it ought to be straightforward. But that doesn’t count for much. Of course the human mind is simple to describe in terms of itself. When it turns out to be objectively complex, we call this a paradox.

This problem isn’t specific to domains that are traditionally considered “AI”. It’s a fundamental mismatch between what computers can do and what people want. Making a computer useful is hard. If you want it to do X, you have to write code to do X. Then, if you want it to do Y, you have to write more code to do Y. Our lives are full of Xs and Ys, and no shortcut will take care of them all.

It’s easy to see the same principle at work in application software. Most applications don’t even try to support the long tail of things that users might conceivably want to do. Those that try fail. In a sense, all software exhibits the properties that doomed so many 20th century AI projects: it’s labor-intensive to create, it only does a fraction of what we want, and it’s very brittle. Even simple problems break the assumptions it’s built upon.

Considering how badly the meticulous case-by-case approach failed in AI, it’s actually remarkable how well it works in some areas. Often the requirements are narrow enough, and the appeal is broad enough, that it’s profitable to make software that’s really good at one or two things. Unfortunately, there’s also a lot of value in that long tail of use cases, in functionality that doesn’t make sense to manually implement. Here, you’ll have to make do with the tools other people have already built. These tools aren’t intelligent; they can’t adapt to new situations. Whenever you want to do something the designers didn’t anticipate—which is extremely common in our messy human world—you’ll be disappointed. As long as computers lack something like human-level intelligence, the problem is fundamental.

But wait. What if we invert that statement? In the past decade, there’s been a growing hope we can replicate human-level intelligence, not with manually encoded knowledge, but with massive amounts of data and computation. If that’s true, what becomes possible?

Unambitiously, one could imagine a computer that programs itself. The user would write a natural language description of their needs, and a few minutes later there would be a custom-made application to address them. It would be trivial to experiment with dozens of interfaces for any purpose, no matter how niche. And with software so cheap to create, the best ideas would be open-sourced before you know you need them. Well-designed native applications to do anything, for free. That’s a start.

From there, deeper changes make sense. Virtual assistants take over anything that doesn't require human interaction. Boundaries between applications blur as features are invented and deployed on the fly. Even source code may lose relevance if AI systems have a superior way to represent programs. Taking it to an impractical extreme, every pixel on the screen could be rendered by a neural network. Designers, either human or machine, will need to come up with radically new interfaces for this malleable and intelligent form of software.

The approach I just described has a certain irony to it. Alongside brittleness, complexity and inefficiency are among the most frequently cited problems with software. Can't we do better without resorting to neural networks and supercomputers?

Indeed, though fully general software is an illusion, there are ways things could be improved on the margin: crisper abstractions, simpler tools, affordances for interoperability and customization. But as the software industry has grown far beyond the agency of any one actor, so too has the difficulty of coordinating to do things differently. No one’s in charge here. There’s only a collection of companies and individuals quite reasonably following the incentives. As much as people like high-quality software, it’s often more profitable to solve the first 80% of a problem before competitors do.

Human-level AI doesn’t have this issue. It may be decades off, and it will certainly introduce a host of challenges more formidable than bad interface layouts. But if we navigate those well, a better kind of software becomes possible—almost inevitable. Adaptability is exactly what the profit gradients point to. So next time you have to put up with a slow and buggy Electron app, take consolation in the fact that the same forces trapping us in this equilibrium can eventually break us out of it. Only then will computing finally deliver on its promise, over half a century old.