Tech Explained: IT Ministry cuts takedown timelines for online intermediaries to hours
Here’s a simplified explanation of the latest technology update and what it means for users.
The Ministry of Electronics and Information Technology on Tuesday said that social media and internet intermediaries must, from February 20, take down problematic content within three hours instead of the 36 hours allowed until now.
Apart from this, intermediaries must remove non-consensual intimate imagery from their platforms within two hours instead of the earlier 24-hour window.
These changes have been notified by the IT Ministry as part of amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The timelines were compressed after stakeholders told the IT Ministry that the earlier windows of 24 and 36 hours were too long, especially for such sensitive content, and did little to stop it from going viral.
“Tech companies have an obligation now to remove unlawful content much more quickly than before. They certainly have the technical means to do so,” a senior IT Ministry official said.
In the latest amendments notified on Tuesday, the IT Ministry mandated that platforms which enable users to generate content using artificial intelligence (AI) must ensure such content is clearly identified or labelled through visible disclosures stating that it has been synthetically generated or modified.
In addition to placing visible disclaimers on such AI-generated content, intermediaries must also, wherever possible, embed permanent metadata or other such identifiers to help trace the origin of the content.
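The rules do not prescribe a particular metadata format for tracing origin. As an illustrative sketch only (the field names and generator label below are hypothetical, not from the notified rules), one simple way a platform could build a traceable identifier is to bind a cryptographic hash of the content to a record of how it was created:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Bind a SHA-256 hash of the content to a minimal origin record.

    A real deployment would embed such a record in the file itself
    (e.g. as image metadata); this sketch only builds the record.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # content fingerprint
        "generator": generator,                          # tool that made it
        "synthetic": True,                               # SGI disclosure flag
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = make_provenance_record(b"<image bytes>", "example-ai-tool")
    print(json.dumps(record, indent=2))
```

Because the hash changes if the content is altered, the record can only vouch for the exact bytes it was computed over, which is why standards efforts in this space also focus on tamper-evident embedding of such records.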
The ministry defines synthetically generated information (SGI) as any audio, visual or audio-visual information that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that makes it appear real, authentic or true. “Good faith” editing of content using AI tools is exempted from this definition.
The newly notified amendments also state that as soon as an intermediary becomes aware of the misuse of its tools to create, host or disseminate SGI, it must deploy “reasonable” and “appropriate” technical measures to prevent such content from remaining on the platform.
The new amendments are materially broader than what was circulated in the draft for consultation, said Aman Taneja, partner at Delhi-based law firm Ikigai Law.
“While the government has sharpened the definition of synthetic content and moved away from prescriptive requirements such as mandatory 10 per cent visual watermarking, it has simultaneously reduced takedown timelines across all categories of content to just a few hours. This significantly raises the compliance bar. For large platforms, meeting these timelines at scale will be operationally challenging and could push companies towards over-removal,” Taneja said.
Other experts, however, believe that the amendments mark a more calibrated approach to regulating AI-generated deepfakes.
“By narrowing the definition of synthetically generated information, easing overly prescriptive labelling requirements, and exempting legitimate uses like accessibility, the government has responded to key industry concerns—while still signalling a clear intent to tighten platform accountability,” said Rohit Kumar, founding partner at public policy firm The Quantum Hub.
