India has updated its IT rules to tighten control over generative AI on social media, cutting the deadline for removing government-flagged content from 36 hours to three hours. The new regulations require permanent labelling of AI-generated content, hold platforms accountable for unlabelled material, and ban certain types of synthetic content. The move aims to curb the misuse of deepfakes and misinformation as India prepares to host a global AI summit.
India Tightens Social Media Rules on AI Content, Cuts Takedown Time to Three Hours
India’s Ministry of Electronics and Information Technology has announced updated regulations governing the use of generative artificial intelligence on social media platforms, tightening oversight and significantly shortening response times for content removal.
Under the revised rules, social media companies are now required to remove content flagged by authorities within three hours. This represents a major change from the previous 36-hour window and reflects the government’s intention to respond more rapidly to content it considers harmful or illegal.
The updated regulations empower authorities to order the takedown of any content deemed unlawful under India’s existing legal framework. This includes laws related to national security, public order and other statutory provisions, giving the government broad authority over online material.
In addition to faster takedown requirements, the new rules mandate that platforms such as Instagram, TikTok, Facebook and YouTube clearly label what the government refers to as “synthetically generated information.” These labels must be permanently affixed to the content and designed so that they cannot be hidden, removed or altered by users.
The government has also made social media platforms directly responsible for ensuring that AI-generated or manipulated content carries the required labels. If such material is published without proper identification, platforms could be held accountable under the law.
Certain types of synthetic content have been outright banned under the amended regulations. The changes were officially published on Tuesday as updates to India’s 2021 Information Technology Rules and are set to come into force on February 20.
India has said the measures are intended to curb the growing misuse of deepfakes and other AI-generated content online. With nearly one billion internet users and a predominantly young population, the country is one of the world’s largest and most influential digital markets.
Authorities have expressed increasing concern over the spread of hyper-realistic deepfakes, impersonation videos and manipulated media, which have been linked to fraud, harassment, misinformation and other forms of online abuse.
The announcement of the new rules comes as New Delhi prepares to host a major global summit on artificial intelligence next week. The event is expected to draw several world leaders and prominent figures, including French President Emmanuel Macron, highlighting India’s growing role in shaping global discussions on AI governance and regulation.