It was never the keyboard

A growing consensus holds that gaps in digital literacy are the key barrier to AI adoption and will prevent developing countries from reaping the benefits of the AI revolution. Policy reports emphasize investment in digital training as crucial for meaningful engagement with large language models (LLMs). The logic seems sound: if people cannot operate the tools, the tools are useless. But the argument is flawed.

When GPT-4 came out in March 2023, the way you phrased a request mattered. Careful prompting would bring noticeably better results. That advantage is eroding fast. Today’s models infer what you mean and ask when they’re unsure. The AI interface is converging on natural language, which is to say, on no interface at all. If this trajectory continues, the value of knowing “how to talk to the computer” will approach zero.

Forget, for a moment, that an AI is a machine and think of it instead as a very knowledgeable colleague. What determines whether a conversation with such a colleague is productive? The answer has little to do with how you phrase your sentences and everything to do with what you already know. Suppose you are asked to write a research paper on astropaleontology—a field entirely outside your expertise. You sit down with your knowledgeable colleague, and your opening question is: “Are there dinosaurs on Jupiter?” The colleague answers: “No, there are not.” You have received an answer to a dead-end question, and now you are stuck. You don’t know enough about the field to formulate the next question or identify the interesting problems. The conversation stalls not because the colleague is unhelpful, but because you have nothing with which to steer it.

Now imagine a planetary scientist sitting with the same colleague. That researcher would ask how well the Webb Telescope can detect atmospheric methane on exoplanets, or whether isotopic signatures in carbonaceous chondrites point to biological origins. Each answer would open the next question. The conversation would deepen and branch productively because the questioner carries a mental map of the field—its structure, its blind spots, and what is still unresolved. The colleague is the same in both cases. The difference is entirely in what the questioner brings to the discussion.

Of course, most professional work falls between these two extremes. But the gradient runs in one direction: The more you know about the field, the more useful the conversation becomes. Digital skills do not shift this gradient.

The natural objection is that the evidence points the other way. Studies on the use of AI in structured tasks such as customer service, professional writing, and consulting find that less-skilled workers benefit most from access to AI. These studies, however, measure who benefits today, not where the next training dollar should go. And the answer to the latter question looks very different. The structured tasks where AI helps low-skilled workers are the tasks likely to be fully automated. As AI saturates the economy, the payoff to domain expertise will compound. Every hour spent on digital training carries an opportunity cost: It displaces the knowledge that actually matters. LLMs democratize text production. They do not democratize judgment.

What happens when tasks require judgment? Otis et al. (2024) ran a randomized trial giving Kenyan entrepreneurs access to an AI-powered business mentor. High-performing business owners saw a 15% revenue boost; low performers saw an 8% decline. The difference lay in how they used the mentor. High performers asked about tasks they already understood: they had a map and used AI to navigate it. Struggling entrepreneurs didn’t know enough to ask anything specific.

The policy implications are clear. Investment in prompt engineering, and in digital skills generally, is a rapidly depreciating asset, useful only until the next model ships a few months later. Domain expertise is an appreciating asset, because it is the one input AI cannot supply: the ability to direct and evaluate its output. But domain knowledge accumulates through years of practice. The policy challenge is to build the conditions that foster it: stable career paths, mentorship, and early investment in foundational capacities.

The bottleneck has never been, and will never be, the keyboard. It has always been the knowledge behind it.