TOI-NFSU Hacked! 2.0: Spotted A Deepfake? Save And Act In 48 Hours

Ahmedabad: A few seconds of audio, a short video clip, or an innocuous photograph is now enough to upend your life. At the Hacked 2.0 event in Ahmedabad, experts warned that while the legal framework struggles to keep pace with artificial intelligence (AI)-driven abuse, women and children remain exposed to what a senior National Forensic Sciences University (NFSU) multimedia forensic expert described as the "weaponisation of technology".

The session, "A Toolkit for Women and Children Against Deepfakes and AI Threats", hosted by the Institute of Chartered Accountants of India (ICAI) Ahmedabad, brought together multimedia forensic scientists and legal experts from the NFSU. Hacked 2.0 is a strategic partnership between The Times of India and NFSU.
Tracing the real-world impact of these offences, Dr Surbhi Mathur, associate dean at NFSU and a multimedia forensic expert, cited the case of Rajathi Kamalakannan, a Chennai food truck owner who fought a legal battle after deepfake images circulated online in 2025 and hurt her livelihood. Mathur explained that an active social media presence increases the risk, noting that many women influencers on Instagram have faced deepfake abuse. She cited figures suggesting the risk rises by 15.7% for every 10,000 followers, illustrating how digital visibility is being systematically weaponised by predators. She added that women are often coerced into silence through the threat of "social shame and defamation".
Detection of deepfakes and AI attacks, particularly within families, remains the first line of defence. Mathur pointed to common flaws in synthetic media and videos: machines often fail to replicate natural blinking patterns, which recur every five to 10 seconds, and there is frequently "latency" between audio and lip movement. To counter voice-cloning scams, she urged families to agree on a "safe password" — for example, "gulab jamun" — to verify identity during distress calls. She also recommended breach-checking services like Have I Been Pwned and free online deepfake-detection tools to determine whether a video is genuine. Mathur emphasised that parents must supervise gaming and video platforms, implement parental control settings, and teach children that personal identifiers like school names or birth dates are high-value targets for online predators.

When harm occurs, however, technology alone is insufficient. Dr Rajdeep Ghosh, assistant professor at NFSU's School of Law, Forensic Justice and Policy Studies, warned that unlike the United States, India has no standalone "take it down" law. Content removal, he said, operates through a patchwork of statutory provisions and procedural safeguards. Under Rule 3(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, as amended in 2025, intermediaries must remove unlawful content within 36 hours of receiving a valid court order or government notice. Photographs posted without consent must be taken down within 24 hours. Failure to do so results in the loss of "safe harbour" protection under Section 79 of the IT Act, exposing web platforms to criminal liability.
Ghosh stressed that after the abuse is reported, the first 48 hours are decisive. Preserving URLs, metadata and hash values is critical for investigations under the Bharatiya Sakshya Adhiniyam, 2023. Victims are advised to contact the 1930 cybercrime helpline or log on to the Sahyog portal immediately. "Delay is the biggest factor in weakening judicial remedies," he warned.
