Explained: What are government’s new IT rules that go into effect starting today and what they mean for you

The latest IT rules governing AI-generated content have come into effect in India today. The new rules require social media platforms to label deepfakes, synthetic audio, and altered visuals with visible markers that users can identify immediately. The amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, were notified earlier this month, on February 10, via Gazette Notification G.S.R. 120(E) and signed by the Ministry of Electronics and Information Technology Joint Secretary, Ajit Kumar.

In the new framework, platforms are required to embed metadata and unique identifiers in all synthetically generated content so that it can be traced back to its source. The metadata and unique identifiers cannot be altered, hidden, or deleted once they are applied. This is the first time that AI-generated content has been brought under a formal regulatory framework in India.
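The rules do not prescribe a specific technical mechanism for these identifiers, but the idea of a traceable, tamper-evident marker can be illustrated with a minimal sketch. Here, the unique identifier is simply a SHA-256 hash of the content itself, so any alteration to the file changes the hash and breaks the link to the provenance record; the function names and record fields are hypothetical, not drawn from the notification.

```python
import hashlib
import json

def make_provenance_record(content: bytes, source: str) -> dict:
    """Build an illustrative provenance record for synthetic content.

    The identifier is a SHA-256 hash of the content bytes, so editing
    the file changes the hash and makes tampering detectable.
    """
    return {
        "label": "AI-generated",                          # visible label
        "content_id": hashlib.sha256(content).hexdigest(),  # unique identifier
        "source": source,                                 # originating tool/platform
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content still matches its recorded identifier."""
    return hashlib.sha256(content).hexdigest() == record["content_id"]

clip = b"...synthetic audio bytes..."
record = make_provenance_record(clip, source="example-ai-tool")
print(json.dumps(record, indent=2))
print(verify_provenance(clip, record))          # True: content untouched
print(verify_provenance(clip + b"x", record))   # False: content was altered
```

Real provenance systems are considerably more elaborate (cryptographically signed manifests embedded in the file, as in the C2PA standard), but the principle is the same: the marker is bound to the content so it cannot be silently stripped or swapped.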

What is 'synthetically generated' content that the government wants to monitor

While notifying the new rules, the government formally defined "synthetically generated information" for the first time. It refers to any audio, visual, or audio-visual content created or altered using a computer that appears real and depicts people or events in a way that could be mistaken for genuine.
This includes deepfake videos, AI-generated voices, and face-swapped images: essentially anything where a machine has made something look or sound authentic. The definition also applies to AI-generated images of fictional scenarios involving real people.

However, not all digital editing falls under this category. Routine changes that do not affect the original message are exempt, including colour correction, noise reduction, file compression, text translation, and accessibility improvements. Also exempted is conceptual or illustrative content in documents, research papers, PDFs, and presentations.

The notification specifically excludes content created for "hypothetical, draft, template-based or conceptual" purposes. An office PowerPoint with a stock AI illustration does not qualify as synthetically generated information, while a deepfake of a politician delivering a speech they never gave does.

What these new IT rules for AI-generated content mean for social media users

If you use Instagram, YouTube, or any major platform, the most visible change will be the addition of labels. Any AI-generated post, reel, video, or audio clip will now carry a clear tag indicating it was machine-generated, visible before you like, share, or forward it.

When you upload content, platforms may also ask whether it was developed or modified using AI. Giving a false declaration is no longer simply a violation of the terms of service; it may also invite legal consequences under the Bharatiya Nyaya Sanhita or the POCSO Act, depending on the content. Social media platforms must also remind users of this requirement at least once every three months.

Any service that hosts or facilitates the distribution of AI-generated content must clearly mark it as such: not in small print, not in metadata, but right on the content itself. Services must also imprint permanent markers and unique identifiers on such content so that it can be traced back to its origin, and they must prohibit any attempt to delete or alter those markers. This closes a loophole that could have allowed the markers to be erased as soon as a user re-uploaded a file they had downloaded.

Large services such as Instagram, YouTube, and Facebook have further obligations. Before any file is posted, they must require the user to declare whether the content is AI-generated and then use automated software to verify that declaration. If a service is found to have knowingly hosted unmarked AI-generated content, it will forfeit its legal safe harbour protection.

One version of the rules specified that visual markers must cover at least 10% of the screen and that audio markers must play during the first 10% of a clip. This requirement has since been removed after pushback from industry groups; marking is still mandatory, but without a prescribed size.

Response timelines have been tightened sharply. For certain government orders, platforms now have three hours to act, down from 36. Other deadlines have been cut from 15 days to seven and from 24 hours to 12.

Platforms must also actively use automated tools to block AI-generated content that violates the law. This includes child sexual abuse material, obscene content, fake electronic records, content related to weapons or explosives, and deepfakes designed to misrepresent real people or events.

The rules also update legal references, replacing mentions of the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023. The draft rules were first published in October 2025, and platforms had until February 20 to comply with the final notification.
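The automated blocking obligation described above can be illustrated with a deliberately simplified sketch: an upload is fingerprinted and checked against a blocklist of known unlawful files. The blocklist contents and function names here are hypothetical; production systems rely on perceptual hashing and machine-learning classifiers rather than exact hash matching, which a single changed byte would defeat.

```python
import hashlib

# Hypothetical blocklist of SHA-256 fingerprints of known unlawful files.
# Real platforms use perceptual hashes and ML classifiers, not exact matches.
BLOCKED_HASHES = {
    hashlib.sha256(b"known-unlawful-sample").hexdigest(),
}

def should_block(upload: bytes) -> bool:
    """Return True if the upload matches a known-unlawful fingerprint."""
    return hashlib.sha256(upload).hexdigest() in BLOCKED_HASHES

print(should_block(b"known-unlawful-sample"))   # True: matches the blocklist
print(should_block(b"ordinary holiday photo"))  # False: no match
```

The gap between this sketch and a compliant system is exactly why the rules also demand permanent markers and declaration checks: exact-match screening alone is easy to evade.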
About the Author: TOI Tech Desk

