IT minister Ashwini Vaishnaw has called for a techno-legal approach to tackle AI-generated harmful content. Speaking to reporters at the ongoing IndiaAI Impact Summit 2026, the minister said that there is a consensus among global leaders on how AI should be used for good, and that India needs stronger regulation to curb deepfakes.
“A good consensus is emerging among the global leaders. In our interaction, so far, with many other countries, everybody believes that yes, AI should be used for good and all the harmful impact must be contained,” Vaishnaw told The Times of India when asked whether the government is working on any policy or regulations to take action against companies that fail to meet requirements to curb the use of AI technology for malicious purposes.
“It has to be done through a techno-legal approach and cannot be done through passing a law. It has to be done through a technological approach where technology can be used in a safe way,” he added, pointing out that India has the “IndiaAI Safety Institute (AISI) and we are working with many academic institutions for creating the technical solutions to prevent harmful impact of AI.”
AISI has been established to develop indigenous, secure and ethical AI frameworks, focusing on mitigating risks like deepfakes.
Stronger regulation needed to curb deepfakes: Vaishnaw
However, while talking about deepfakes, which are rampant online, Vaishnaw said that there is a need for much stronger regulation to curb such content.
“I think we need a stronger regulation on deepfakes. It is a problem growing day by day. We need to protect our society from this harm. We have initiated a dialogue with the industry on this,” he told the reporters.
“Certainly there is a need for protecting our children, protecting our society from these harms. We have already initiated a dialogue with industry on this. What we need is beyond what we have, already beyond the steps that we have already taken. Even our IT Committee of the Parliament has studied this issue, and they have made some recommendations,” he said.
“So, certainly, I believe that, yes, we need much stronger regulation of deepfakes, and we definitely must create that consensus within the parliament for creating those significantly stronger regulations,” the minister added.
When asked about age-based restrictions, he said that age-based regulation is needed, and that the government has created an “age-based differentiation on the content, which is accessible to students and young people”.
Government's new IT rules make AI content labelling mandatory
Vaishnaw’s comments came a week after the government brought AI-generated content – deepfake videos, synthetic audio, altered visuals – under a formal regulatory framework for the first time by amending India's IT intermediary rules.
Notified via gazette notification G.S.R. 120(E) and signed by Joint Secretary Ajit Kumar, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, take effect from February 20.
The new rules make it mandatory for platforms like YouTube, Meta-owned Instagram, Facebook and X (formerly Twitter), among others, to label all synthetically generated information (SGI) prominently enough for users to spot it instantly.
These platforms have also been asked to deploy automated tools to cross-verify the content's format, source and nature before it goes live.