Tech Explained: AI Agents and the Next Layer of India’s Digital Infrastructure in Simple Terms


Brazilian President Luiz Inácio Lula da Silva, Indian Prime Minister Narendra Modi and French President Emmanuel Macron at the opening ceremony of the 2026 India AI Impact Summit in New Delhi on February 19. (Prime Minister’s Office)

At a gathering of government officials, tech leaders and artificial intelligence researchers during the India AI Impact Summit last month, an MIT professor compressed an entire social theory of the technology’s future use into what was presented as a technical upgrade: that giving every citizen a personal AI agent could serve to decentralize AI.

This vision suggests not simply broad consumer access to AI tools but a prevalence of personal proxies that negotiate, coordinate, transact and interface on one’s behalf. Essentially, it pictures agents speaking to agents so that people do not have to.

The idea has gained traction in discussions around Doot, a whitepaper envisioning a citizen-owned AI agent built on India’s digital public infrastructure. The professor, Ramesh Raskar, offered an illustrative example of a 70-year-old woman in rural Bihar planning a visit to Kumbh Mela, a mass Hindu pilgrimage whose scale and administrative complexity make it a recurring test case for India’s infrastructural and governing capacities. Her agent, in this context, would organize travel, account for dietary constraints, coordinate accommodation and interact with vendors — provided, crucially, that the surrounding ecosystem was similarly agent-enabled. Vendors, platforms and institutions would also deploy agents, as the system would function most effectively when proxies interacted with one another.

The leap from thinking that AI can assist with tasks to believing that AI agents should form an ecosystem that mirrors society has been persistently framed as democratization through universal access and participation. The language leans heavily on contrast: we are leaving the factory phase of AI, characterized by centralized compute, enormous capital concentration and passive end-users, and moving into a bazaar phase in which individuals train their own agents and actively participate.

The same talk also envisioned an agent ecology: repair shops, insurance, whole new markets, justice systems and institutions all orbiting around agents. It follows, then, that the aforementioned bazaar would be one not for people, but for their proxies.

In the Indian context, this proposal cannot be understood in isolation. It builds directly upon a decade of digital public infrastructure (DPI) development that has already transformed the relationship between citizen and state. Aadhaar, a nationwide biometric ID system, has made identity legible and verifiable in machine-readable form at unprecedented scale; UPI, the public digital payments infrastructure, has transformed money into a seamless, interoperable data flow; DigiLocker has converted paper documents into state-recognized digital credentials; and ONDC seeks to reorganize online commerce itself by replacing platform monopolies with an open, state-backed network protocol.

This infrastructure has not merely digitized services but reconfigured how citizens are made legible to the state and how transactions are validated within public and private systems.

An AI agent layered onto this stack represents a qualitative shift. If DPI made citizens verifiable and transactable, a personal agent makes them delegable by allowing a digital proxy to complete tasks, manage interactions and make routine decisions on their behalf. The citizen is no longer only identified and connected, but is represented and acted for through computational systems embedded within the same infrastructural rails.

When hallucination becomes governance

In this context, the probabilistic and sometimes hallucinatory tendencies of contemporary AI systems become a feature of how actions are carried out, not just how outputs are generated. They generate outputs by predicting statistically plausible continuations rather than by grounding claims in stable referents. Hallucination, then, is not an anomaly, but a structural feature of large language models. As long as AI functions as an assistant, its unpredictability is treated as a tolerable technical flaw. The human user still authorizes the final action, preserving a clear line of accountability.

The framing changes when these systems become proxies. If agents built on Doot are expected to resolve eligibility questions, book services, negotiate transactions or mediate disputes, hallucination becomes embedded in action. We are contemplating embedding systems that generate plausible fictions by design into the core of everyday representation.

Addressing this is not simply a matter of improving accuracy rates. Rather, it raises deeper questions about who gets to define what counts as reliable knowledge and whose judgment we trust when decisions are made. On what basis does an agent decide what is relevant? What counts as a correct interpretation of a citizen’s intent? How are errors identified and contested when mediation is continuous and automated?

The most compelling rhetorical defense of Doot is its supposed universalism. Everyone gets an agent, after all — not just corporations with proprietary models or elites with access to compute, but ordinary citizens connected to DPI rails.

Yet, equal access to a proxy is not equal access to the conditions that shape that proxy’s operation. AI agents would depend on identity systems, payment rails, data standards and API frameworks that structure what can be known and acted upon. Training data reflects existing social hierarchies, and optimization objectives are embedded by designers.

What Doot distributes, then, is mediated access, not infrastructural authority. A citizen equipped with an agent may experience convenience and efficiency, but that does not mean they have gained power over the systems that interpret and act on their behalf. In this sense, distribution can look like participation, while still leaving the core epistemic and infrastructural authority untouched.

Why, then, should we want this? The justifying narrative accompanying the Doot project is familiar: if agents handle negotiation and coordination, people will have more time for meaningful human connection. Friction, in this telling, is inefficiency. But the features of interaction being outsourced — negotiation, disagreement, clarification and compromise — are deeply social practices. They are how individuals test claims, assert interests, contest decisions and generate shared understanding.

When such interactions are abstracted into optimization processes, decisions become outputs of probabilistic inference rather than outcomes of social exchange. What emerges is a parallel layer of social and economic activity conducted by algorithmic representatives.

The bazaar becomes an ecosystem of proxies such that the citizen participates through delegation. That transformation may increase efficiency and it may expand access, but it also changes the texture of participation in a democracy.

A political choice, not a technical phase

Efficiency is not neutral, and delegation is not equivalent to democracy. And hallucination, when scaled across millions of mediated interactions, ceases to be a minor technical flaw; it becomes a structural condition of governance.

If DPI restructured how citizens are known and serviced, AI agents could reconfigure how they are represented and acted for. We must understand this not as the next technical iteration in a linear story of digital progress but as a political choice about the architecture of mediation.

Before normalizing a society of proxies layered atop India’s digital public foundations, we need to ask what kind of political and epistemic subjects these architectures produce.

A citizen who is verifiable, transactable and delegable through code occupies a different institutional landscape than one who participates through direct engagement.

The question is not whether this is technically feasible, but whether embedding hallucinatory, probabilistic systems into everyday representation strengthens democratic participation or abstracts it further into infrastructure.