Tech Explained: India’s ‘Third Way’ for AI governance in Simple Terms

With the AI Impact Summit underway, world leaders and technology experts are gathering in Delhi to discuss innovation and governance directions for artificial intelligence (AI). This is happening at a moment of profound contradiction, and frankly confusion, about the “right” way to govern AI: one that encourages strategic creation while acknowledging both the known and unknown risks the technology poses.

As the host of the Summit, India has uniquely positioned itself as offering a “Third Way” for AI governance, one that recognises opportunities for countries to enter AI markets while acknowledging that existing governance strategies do not transfer neatly to the global majority. Case in point: the EU’s compliance-heavy regime, the U.S.’s hands-off approach, and China’s centralised state model were each designed for different economic contexts and policy traditions. India needs something different.

A distinct approach

In November 2025, the Indian government released its AI governance guidelines. As Amlan Mohanty, one of the framework’s architects, reflected in a recent Techlawtopia essay, the guidelines represent a distinctive approach: not merely a regulatory framework focused on risk mitigation, but a governance framework encompassing adoption, diffusion, diplomacy, and capacity-building. It prioritises scaling AI for inclusive development — in healthcare, agriculture, education, and public administration — while working through existing legal structures rather than creating standalone AI legislation. It is designed to be agile and forward-looking, translating high-level principles into practical guidelines while allowing room for evolution as the technology matures.

This approach is already taking shape. On February 10, the government announced amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, which make it mandatory for intermediary tools and platforms to label AI-generated content and impose a three-hour takedown window for harmful material. This is the first instance of a government mandating AI-generation disclosure. But implementing and enforcing these rules at scale, against tech behemoths and in a way that respects human rights and democratic norms, will be difficult without international coordination.

For the Global South, this matters enormously. The concentration of AI investment, particularly among a handful of private actors in the Global North, creates an uneven landscape for AI diffusion and governance. Dependence on external or proprietary AI systems carries both existing and new, contextually rooted risks, and makes it harder for middle powers to leverage AI tools in ways that meet their specific economic and social needs.

India’s approach — emphasising strategic autonomy, public-private partnerships, and governance tailored to the local context — offers an alternative path. It recognises the need for research infrastructure across middle powers, including but not limited to shared safety evaluation frameworks, collaborative research networks, and mechanisms to pool expertise on risks that no single country can assess alone. Given its scale, its leading role in AI infrastructure, and its historic success in expanding digital development and access, India is uniquely positioned to convene this coordination.

Critical gap

Yet governance coordination means little if the framework itself has gaps. A governance approach that accelerates AI adoption while providing no protection for workers being displaced is not a balanced model for others to follow. It is simply a faster version of what is already happening between the prominent AI superpowers. Without a shared understanding of the minimum measures needed to mandate transparency and accountability from AI developers, protect whistleblowers and vulnerable populations from harm, and foster public awareness and agency, even well-meaning coordination is likely to fall flat. In short, what is required is a corresponding framework for the people on whom that innovation depends.

The AI Impact Summit represents a genuine opportunity to shape what inclusive AI governance coordination could look like: robust public-private partnerships across the technology stack that distribute gains more equitably, and a role for India as a hub for agile collective governance among middle powers. For nations seeking development pathways compatible with their strategic interests and institutional capacities, India’s model holds real appeal.

The next 12 months will determine whether India’s model can successfully integrate innovation, security, and human welfare, or whether its gaps create the very instability that governance is meant to prevent. The rest of the world is watching closely. The choices India makes now will decide whether the “Third Way” becomes a model worth following.

Uma Kalkar is an AI governance and policy strategist specialising in international AI governance coordination and diplomacy; she is Strategy Lead at AI Safety Connect.