Tech Explained: India’s revised IT rules aimed at AI deepfakes are stiff, but will they be effective? Don’t bet on it. Here is what the update means and what it implies for users.
In the absence of a dedicated law to police the abuse of fast-evolving AI and Generative AI—systems that can impersonate humans, fabricate reality and operate on their own—India’s government had initially opted for calibrated restraint. Its AI Governance Guidelines and Digital Personal Data Protection Act were designed as ‘techno-legal’ frameworks to test corporate compliance, encourage innovation and nudge companies to embed safety into their systems.
That phase seems to be giving way to a firmer stance.
Under the amended IT Rules of 2026, online platforms must remove non-consensual sexual imagery (deepfakes included) within two hours of a complaint’s receipt. Other unlawful content must go within three hours of a government or court order. AI-generated content must be clearly labelled.
Platforms that offer AI tools must prevent the creation or spread of child sexual abuse material (CSAM), explosives-related content and fraudulent deepfakes. User complaints must be resolved within seven days.
India is not alone in tightening oversight. Germany’s NetzDG gives platforms 24 hours to remove ‘manifestly illegal’ content. The EU’s Digital Services Act demands expeditious action and immediate compliance with court or trusted-flagger orders, though without a fixed countdown clock. Australia’s eSafety regime allows 24-hour takedown notices in serious cases.
India’s deadlines are particularly strict. Their intent is laudable, but the challenge lies in execution. Large platforms do use automated systems to detect CSAM and some synthetic content, yet India’s linguistic diversity, cultural complexity and sheer content volume make contextual judgement of fraudulent posts difficult. With just two hours to act, platforms may take content down first and review it later, so false positives could rise even as restoration appeals lag removals.
Second, labelling AI-generated content sounds straightforward, but much content is edited and re-posted across platforms; tools mandated to ‘prevent misuse’ must be able to tell legitimate satire, art or political commentary apart from malicious posts.
Third, traceability is hard to achieve. Metadata can be stripped or altered. Basic ‘Exif’ data, like time-stamps and device IDs, can be snipped out or faked. Watermarks can be degraded through compression, cropping or pixel edits. Open-source models can be trained not to embed any detectable markers. That said, cryptographic provenance chains, platform monitoring and legal deterrence may raise the cost of deception. But as AI improves, these safeguards may not suffice.
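To see why cryptographic provenance is sturdier than metadata alone, consider a minimal Python sketch of a toy provenance chain. The record fields, app names and bare hash-chaining scheme here are invented for illustration (real systems such as C2PA rely on signed manifests): each record links back to the previous one, so stripping EXIF fields changes nothing, but altering the content after the fact breaks the hash match.

```python
import hashlib
import json

def provenance_record(content: bytes, prev_hash: str, creator: str) -> dict:
    """Toy provenance entry: hashes the content and links to the previous
    record, so any later edit to either breaks the chain (illustrative only)."""
    entry = {
        "creator": creator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev": prev_hash,
    }
    # Hash the entry itself so downstream records can chain to it.
    entry["record_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

# A two-step chain: original capture, then a declared edit (hypothetical apps).
original = b"frame-bytes-from-camera"
r1 = provenance_record(original, prev_hash="", creator="camera-app")

edited = original + b"-cropped"
r2 = provenance_record(edited, prev_hash=r1["record_hash"], creator="editing-app")

# A verifier recomputes the hashes: tampering with the file after the fact
# breaks the content match, and breaking the link to r1 is equally visible.
assert hashlib.sha256(edited).hexdigest() == r2["content_sha256"]
print("chain intact:", r2["prev"] == r1["record_hash"])
```

Even so, the caveat above stands: a bad actor can simply generate fresh content outside any chain, which is why provenance raises the cost of deception without eliminating it.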
Moreover, a traceability device meant to catch fraudsters could be misused as a surveillance tool to expose whistleblowers or citizens sharing lawful but socially sensitive content.
This is a big challenge in a country where what’s satire to one user may look like blasphemy to another. We need a crackdown only on what’s clearly unlawful—like CSAM, explicit non-consensual imagery, direct incitement to violence and outright fraud. Freedom of expression must not get squashed.
None of this argues against regulation, since AI harms are real. Deepfakes can be ruinous and CSAM is depraved, whether it’s AI-generated or real. Deepfake tools can clone faces and voices convincingly enough to perpetrate big frauds. No government can stand idle. But the success of the new rules will depend less on clock timers and more on the clarity of definitions, transparency of enforcement, independence of oversight and credibility of false-alarm redressal.
