AI scams are costing victims millions: Cybersecurity experts share tips to prevent you from becoming the next victim
Deepfake videos may once have seemed a quirky novelty of the internet, but they have become one of the most formidable weapons in the hands of cybercriminals. Deepfakes, AI-generated media that can fabricate convincing voices and images, are now polished enough that even attentive individuals fall for them. Criminals use the technique to impersonate company executives, fabricate celebrity endorsements and even pose as family members in distress.
IBM's data breach research cites AI-driven attacks, with phishing and deepfakes among the most commonly reported methods. As the technology becomes cheaper and easier to access, awareness of how these scams work has never been more important.
The 2026 Anti-Fraud Technology Benchmarking Report by the Association of Certified Fraud Examiners (ACFE) and SAS, an industry study widely cited in cybersecurity, found, “Deepfake social engineering saw the sharpest surge, with 77% of respondents reporting a slight-to-significant increase.” It added, “Only 7% of anti-fraud professionals say their organisations are more than moderately prepared to detect or prevent AI-fuelled fraud.”
These figures back up the trend: deepfake scams are rising sharply while most organisations remain unprepared, reinforcing awareness and scepticism as key defences.
Deepfake technology is increasingly being weaponised by fraudsters to impersonate executives, celebrities and even loved ones, costing victims millions.
Below, we break down how deepfake fraud works, the real-world cases that illustrate its scale and what people can do to protect themselves.
What deepfake fraud is and how it works
Deepfake technology uses artificial intelligence to synthesise realistic audio, video and images of real people, making it appear they said or did something they never did. For criminals, this has created a new category of fraud that is far more convincing than a traditional phishing email.
The most common forms include:
- impersonating senior executives to authorise fraudulent transfers
- fabricating celebrity endorsements to promote investment scams
- mimicking a family member's voice to claim they are in an emergency
Danny Mitchell, Cybersecurity Writer at Heimdal Security, a Copenhagen-based cybersecurity company whose AI-powered protection platform is used by enterprises and security teams worldwide, has spent considerable time studying how AI is being weaponised against everyday people and organisations. He shared, “What makes deepfake fraud particularly dangerous is how accessible the technology has become. A few years ago, creating a convincing deepfake required significant technical skill. Now, tools are widely available online that can generate fake audio or video in minutes.”
Modern deepfakes can replicate a person's voice, facial expressions and mannerisms with enough accuracy to bypass the instinctive checks most people rely on.
In a 2026 article in The EDP Audit, Control, and Security Newsletter (EDPACS), author Ahmet Yiğitalp Tulga noted, “The rapid proliferation of artificial intelligence (AI) and deepfake technologies has introduced new and complex risks to individuals, companies, financial systems and digital trust.”
The implication is clear: deepfake fraud is no longer theoretical; it is already affecting financial systems and individuals at scale.
“Traditional scams rely on urgency and anonymity,” Mitchell added. “Deepfake fraud goes further by borrowing someone's identity completely, which is why victims so often don't realise what has happened until it's too late.”
Real-world examples of deepfake scams
Several high-profile cases in recent years show just how far this type of crime has progressed.
- The $26 Million Video Call Scam: An employee at a large Hong Kong-based multinational was tricked into transferring nearly $26 million to criminals after joining what appeared to be a legitimate internal video conference. Every other participant on the call was a deepfake. The fraud only came to light after the employee contacted their head office.
- The Deepfake Romance Gang: A fraud network dismantled in Asia used AI-generated female profiles to lure men in India, Taiwan and Singapore into online relationships. Before law enforcement caught up with it, the group had extracted as much as $46 million from victims who trusted the people they believed they were talking to.
- Celebrities Used as Bait: In one recent case, a woman spent two years believing she was in an online relationship with actor Martin Henderson, known for his roles in Virgin River and Grey's Anatomy. Using AI-generated voice messages and deepfake video, perpetrators convinced her to send $375,000.
“Criminals use celebrities because the familiarity people feel towards them can override rational judgement,” said Mitchell. “When someone believes a famous person has singled them out, the emotional pull is powerful. That is exactly what these fraudsters are counting on.”
A 2026 systematic literature review in the Journal of Visual Communication and Image Representation concluded, “Deepfake… has taken synthetic media closest to the reality that the human eye cannot differentiate.” This underpins the core point: deepfakes are now convincing enough to deceive even careful individuals, which explains why victims fall for scams involving video calls, celebrity endorsements and family impersonation.
Warning signs a video or voice might be a deepfake
Despite how convincing deepfakes have become, they are not flawless. There are still signs to look for.
- Unnatural facial movements or blinking: Deepfake videos sometimes struggle to mimic the subtlest details of human expression. Look out for blurry edges around a face, out-of-sync blinking and smiles that do not match the emotion the person is supposedly feeling.
- Audio that sounds slightly off: AI-generated voices often carry a slight flatness or an unnatural rhythm, and background sounds may seem artificial.
- Mismatched lip movements: Synchronisation between speech and lips is often imperfect, particularly at faster talking speeds.
- Urgent requests for money or sensitive information: Any pressure to act quickly, transfer funds, or share personal details through an unusual channel should raise immediate concern.
“If you slow down and look carefully, there are often clues,” said Mitchell. “But the most practical warning sign isn't technical. If someone is pressuring you to act fast or transfer money through an unusual channel, that alone should give you pause, no matter how convincing the video or voice appears.”
An April 2026 arXiv paper modelling how generative AI manipulates human decisions in social engineering fraud observed, “AI has not invented a new crime… it has industrialised an ancient one: the manufacture of trust.” This captures why deepfake fraud works: it exploits trust, not just technology.
Protecting yourself from deepfake fraud comes down to one habit above all else: verify before you act. If you receive an unexpected request for money or sensitive information, even from someone who looks and sounds completely familiar, confirm it through a separate, trusted channel before doing anything. Call the person back on a number you already have. Check with a colleague. Take the time to question it.
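The verify-before-you-act habit can even be written down as a simple rule of thumb. The sketch below is a hypothetical illustration only: the function name, channel labels and threshold are invented for this example and are not drawn from any real product or standard.

```python
# Hypothetical sketch of the "verify before you act" habit as a policy check.
# All names and thresholds here are illustrative, not from any real system.

TRUSTED_CHANNELS = {"in_person", "call_back_on_known_number"}

def needs_out_of_band_check(amount: float, channel: str, urgent: bool) -> bool:
    """Return True when a request for money should be confirmed through a
    separate, pre-existing channel before anything is transferred."""
    if channel not in TRUSTED_CHANNELS:
        return True            # video calls, email and chat can all be faked
    if urgent:
        return True            # pressure to act fast is itself a warning sign
    return amount >= 1000      # large transfers always warrant a call-back

# The Hong Kong case above: an urgent request on a video call -> verify first.
print(needs_out_of_band_check(26_000_000, "video_call", urgent=True))
```

The point of the sketch is that the trusted set is defined in advance: a channel only counts as safe if it existed before the request arrived, such as a phone number you already had.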
Danny Mitchell asserted, “It is also worth staying informed about how these scams are developing. AI-enabled fraud is moving quickly and the tactics criminals use are becoming more sophisticated. The more people understand how deepfakes work, the harder it becomes for fraudsters to use them successfully. Awareness, paired with a healthy scepticism around unexpected requests, remains one of the most effective defences available.”