Tech Explained: India tightens its AI rules, requiring social media platforms to label deepfakes and remove harmful posts within three hours. Here is a simplified explanation of the update and what it means for users.
India is updating its AI rules to curb the rise and spread of "synthetically generated information", whether audio, visual or audio-visual, putting the onus on social media platforms to label such content properly and remove any objectionable content within three hours.
The Government of India has introduced stricter rules for AI-generated and deepfake content, placing greater responsibility on social media platforms.
The Indian government is tightening its grip on AI-generated and deepfake content, mandating that social media platforms take down objectionable material within three hours and making the labelling of AI-generated content compulsory. Under the amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, platforms will have to ensure that any content created using AI tools is clearly and prominently labelled, while also requiring users to declare whether the content they upload has been generated or altered using AI.
The new rules also require intermediaries to remove certain categories of unlawful or harmful content within three hours, and to ensure that AI-generated or manipulated content is clearly disclosed to users. The government is also mandating that social media platforms deploy tools and verification mechanisms to check user declarations, and holding them responsible if AI-generated content is published without proper disclosure. According to the government, the new rules are aimed at curbing the misuse of AI and deepfakes online, while pushing platforms to act faster on harmful or misleading content.
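To make the obligations more concrete, here is a minimal sketch of how a platform's compliance workflow could look under the requirements described above. Everything in it, the field names, the detection check and the three-hour deadline handling, is a hypothetical illustration, not code from the amended IT Rules or from any real platform's moderation system.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch: names, fields and checks are illustrative only.
REMOVAL_WINDOW = timedelta(hours=3)  # takedown window described in the rules

@dataclass
class Upload:
    content_id: str
    declared_ai_generated: bool      # user's own declaration at upload time
    detected_ai_generated: bool      # assumed output of platform verification tooling
    flagged_objectionable: bool      # assumed result of a complaint or moderation check
    received_at: datetime
    label: Optional[str] = None
    removal_deadline: Optional[datetime] = None

def apply_compliance_checks(upload: Upload) -> Upload:
    """Label synthetic content and set a takedown deadline for flagged posts."""
    # Label the post if the user declared it as AI-generated, or if the
    # platform's own verification tooling detects it.
    if upload.declared_ai_generated or upload.detected_ai_generated:
        upload.label = "Synthetically generated information"

    # Objectionable content must be actioned within the three-hour window.
    if upload.flagged_objectionable:
        upload.removal_deadline = upload.received_at + REMOVAL_WINDOW

    return upload

if __name__ == "__main__":
    post = Upload(
        content_id="demo-001",
        declared_ai_generated=True,
        detected_ai_generated=False,
        flagged_objectionable=True,
        received_at=datetime.now(),
    )
    post = apply_compliance_checks(post)
    print(post.label, post.removal_deadline)

The point of the sketch is simply that platforms would need both a labelling step tied to user declarations (and their own verification of those declarations) and a clock that starts when objectionable content is flagged.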
This is a developing story. It will be updated.
– Ends
