Crypto had an adoption problem that few people addressed honestly. The technology was real. The infrastructure was improving. The economics were compelling on paper. But the primary interface was a set of tools built by experts, for experts. You'd hand someone a wallet, point them to a DEX, explain gas fees, and tell them to go explore. Most people didn't. The ones who did were self-selecting: technically literate, risk-tolerant, and motivated enough to push through friction that would stop a normal person cold.
AI has a milder version of the same problem. The default interface for the most capable technology in a generation is a chat box. An empty text field asking "How can I help you today?" For the 1-5% of power users who treat that cursor like a creative instrument, this is enough. They write elaborate prompts, chain together workflows, and build custom agents. They're the early adopters who would have figured it out regardless of the interface.
But if the goal is 50%+ of knowledge workers using AI as part of their daily workflow, the chat box won't get us there. Most people don't know what to ask. They don't know what's possible. They open ChatGPT, type "summarize this," get a decent result, and close the tab. Maybe they come back a few times. The distance between "this is impressive" and "this is part of how I work" is real, and a blinking cursor doesn't bridge it.
This isn't a model quality problem. GPT-4, Claude, and Gemini are all strikingly capable. The bottleneck is the interaction layer. And we've seen this exact pattern play out before, with higher stakes and a longer timeline.
The IBM era
In the 1960s and 70s, computers were powerful, expensive, and operated by specialists. If you wanted to use a computer, you needed to understand the computer. You'd write instructions in a programming language, submit them on punch cards or through a command line, and wait for results. The machine did what you told it to, assuming you knew how to tell it.
This was the IBM era of computing. The technology was transformative for the organizations that could afford it and staff it. Banks and large corporations, government agencies and research institutions. But the idea that a regular person would have a computer in their home, that they'd use it for writing and budgeting, would have been absurd. Computers were tools for computer people.
The user base was self-selecting in the same way AI's user base is today. You had to care enough about computing to learn computing. The value was there, but it was locked behind a competence barrier that most of the population had no reason to cross.
What Apple and Microsoft actually did
Two things happened in sequence. Apple made the computer personal, and Microsoft made it universal.
The Macintosh in 1984 introduced the graphical user interface to a mainstream audience. Instead of memorizing commands, you could point and click. Instead of thinking in the computer's language, the computer started meeting you in yours. Files looked like files. Folders looked like folders. The abstraction layer between "what the machine can do" and "what I want to do" got thin enough that normal people could cross it.
But the GUI alone wasn't enough. What Microsoft did through the late 80s and 90s was build the application ecosystem that turned personal computers into universal tools. Windows gave you the operating system. Office gave you the applications. Word for writing, Excel for numbers. Each application took a general-purpose machine and gave it a specific, well-defined job. You didn't need to understand computing to write a memo. You didn't need to know how a spreadsheet engine worked to build a budget. The application told you what it could do, gave you structured ways to do it, and handled the complexity underneath.
This is the part that matters. The shift from "here's a powerful machine, figure it out" to "here's a tool for your specific job" is what took computing from millions of users to billions. The general-purpose capability was necessary but not sufficient. The application layer is what made it real.
AI's IBM moment
Today's AI landscape maps cleanly onto computing in the late 70s. The foundation model companies are building the equivalent of mainframes and operating systems. Capable, general-purpose, improving quarter over quarter. The chat interface is the command line of this era: powerful for those who know how to use it, opaque for those who don't.
The power users are already here. Developers building with APIs and knowledge workers who've woven AI into their research and writing workflows. They represent the same slice of the population that early computer users did: self-motivated, technically curious, willing to experiment.
But the vast majority of knowledge workers are still in "summarize this" territory. They've tried AI. They've been impressed. They haven't changed how they work. The chat box gives them a taste of capability without a path to habitual use.
The first wave is already here
Companies are already building AI applications, and their success will hinge on where they meet the user.
The first attempt from most incumbents has been predictable: bolt an AI chat onto the existing product. Microsoft added Copilot to Office. Notion added Notion AI. The pattern is the same in both cases. Take the app people are already using, add a chat sidebar, and call it AI-powered.
This is understandable but insufficient. Bolting a chat interface onto an existing workflow doesn't change how people work. It adds a feature. And features don't drive adoption of a new paradigm. The spreadsheet succeeded because it was built from scratch around a new capability. That's the bar. An AI sidebar on a product designed for manual work is still a product designed for manual work, with a text field stapled to the side.
The companies that crack AI adoption will rethink applications from the ground up: starting from what AI can do and building the workflow and interface around that, rather than retrofitting intelligence into tools designed before intelligence was available.
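To make that concrete, here's a minimal sketch of what "a tool for a specific job" can look like: a hypothetical action-item extractor with a structured input and a defined output, where the model call sits underneath the tool instead of being exposed as a chat box. It assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name, prompt, and file format are illustrative, not a real product's design.

```python
# Hypothetical AI-native tool: one job, structured in, structured out.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY
# set in the environment. Model name and prompt are placeholders.
import sys

from openai import OpenAI

PROMPT = (
    "You are an action-item extractor. Given a meeting transcript, return "
    "one action item per line in the form: OWNER - TASK - DUE DATE "
    "(or 'no date'). Return nothing else."
)


def extract_action_items(transcript: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Usage: python action_items.py meeting.txt
    with open(sys.argv[1], encoding="utf-8") as f:
        print(extract_action_items(f.read()))
```

The point isn't the two dozen lines. It's that the user never sees a prompt or a blinking cursor; they see a command with one well-defined job, which is exactly the shift Word and Excel made for the general-purpose computer.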
Distribution is different this time
The computing analogy diverges in one important way. In the 70s and 80s, distribution was the bottleneck. Microsoft won in part because it controlled the OS and had distribution agreements with PC manufacturers. If you wanted to reach users, you needed OEM deals and physical distribution channels. The playing field favored incumbents by default.
That constraint doesn't exist anymore. A startup building from a garage in Bangalore or a dorm room in Austin can ship an AI application to the entire internet on day one. Distribution is solved. What isn't solved is the experience. The moat in the AI application era isn't who can reach the most devices. It's who can build the workflow that makes a knowledge worker say "I can't go back to doing this the old way."
This means the Word and Excel of the AI era might not come from Microsoft or Google. They could come from a two-person team that understands a specific workflow better than any incumbent and builds an AI-native tool around it. The barrier to entry is low. The barrier to getting the experience right is high. That's a healthy combination.
Who becomes the OS?
The foundation model companies are aware of this dynamic. They aren't content to be the engine underneath other people's applications. They want to be the platform.
Anthropic is already moving in this direction with products like Cowork, a desktop tool that lets non-developers automate file and task management. It's an early signal of what an AI operating system might feel like: an environment where AI-native workflows run, not just a model you query through a chat window.
Whether Anthropic or OpenAI becomes the operating system of the AI era is an open question. The path from "powerful chat interface" to "platform that hosts an ecosystem of applications" is long, and the eventual shape might look very different from what we'd predict based on the computing analogy. These companies have the intelligence layer. Whether they build the right application layer on top, or whether that gets built by thousands of startups on their APIs, is the defining question of the next five years.
The timeline is compressed
This transition will happen faster than the computing analogy suggests. The shift from IBM mainframes to Windows took roughly 20 years. AI's version will compress into something closer to five. The foundation models are already good enough. The APIs are accessible. The infrastructure is cloud-native. The missing piece is the application layer, and building applications on top of AI is orders of magnitude easier than building applications on top of early hardware.
The chat box was the right starting point. It proved the capability and attracted the early adopters. But it's the beginning of the story. The future looks like what happened after the command line: specific tools for specific jobs, and the quiet disappearance of the technology into the background of how work gets done.