In Singapore, scams have consistently ranked among the top crimes reported each year, costing victims millions of dollars annually. Among the fastest-evolving forms of fraud are impersonation scams, where criminals pose as trusted individuals or organisations to deceive victims. What makes the situation in 2025 particularly alarming is the rapid advancement of artificial intelligence (AI). From deepfake voices to hyper-realistic fake videos, scammers are now equipped with powerful tools that make their impersonations harder to detect than ever.
This article explores how AI is reshaping impersonation scams, what makes them so dangerous in the Singaporean context, and whether it may eventually become nearly impossible to distinguish a real interaction from a fraudulent one. Most importantly, it provides practical steps for Singapore residents to protect themselves.
Impersonation scams occur when criminals pretend to be someone trustworthy. They may act as government officials, bankers, family members, or public figures to trick their victims into giving up money or sensitive details.
In 2025, AI has taken these scams to a new level:
AI voice cloning can replicate the voice of a trusted person using just a few seconds of audio.
Deepfake video technology allows criminals to insert someone’s face into a video, such as a fake testimonial or urgent plea.
AI-enabled chatbots are capable of carrying on long, human-like conversations to prolong the deception.
Multilingual capabilities mean scammers can operate in English, Mandarin, Malay, or Tamil, mirroring Singapore’s linguistic diversity.
In August 2025, actor Tay Ping Hui denounced an AI-generated video that falsely showed him promoting a “diabetes medication.” Scammers had taken footage from his appearance on the podcast Call Us Daddy (hosted by Allan Wu) and spliced it into repetitive AI-generated clips.
On 22 August, Tay issued an Instagram post warning his followers not to click on any links or buy the advertised product. He expressed both irritation and amusement, noting that “the AI shots looked nothing like me,” but emphasised that people should remain alert.
A 77-year-old woman almost fell victim after seeing the video on Facebook. She placed an order for two boxes of the “medication” at S$99. Fortunately, since the payment option was cash upon delivery, she did not lose any money.
This case demonstrates the real and local impact of AI impersonation scams and highlights the urgent need for awareness and safeguards in Singapore.
Several factors make Singapore a prime target for AI-powered impersonation scams:
High digital connectivity: With widespread use of smartphones, WhatsApp, Telegram, and social media, scam content spreads rapidly.
Trust in authority figures: Singaporeans often respond quickly to requests from perceived officials, such as police officers, MOM or IRAS representatives, and well-known individuals.
Prevalence of cashless payments: Instant transfers through PayNow and other digital wallets mean scammers can receive funds almost immediately.
Family-oriented culture: When scammers clone the voices of family members, especially targeting seniors, emotional trust can override caution.
Abundance of media content: Public lectures, podcasts, and social media posts give scammers plenty of material to create realistic impersonations.
Will AI make scams impossible to detect? The short answer is no. However, detection will become much more difficult, because AI gives scammers several advantages:
Realistic manipulation: Deepfake visuals and cloned voices appear increasingly authentic.
Large-scale targeting: AI can generate customised scams tailored to thousands of victims simultaneously.
Sustained deception: AI chatbots can interact continually without fatigue, gradually breaking down resistance.
Personalisation: By scraping social media data, scammers can add personal touches that increase credibility.
At the same time, several safeguards still work in defenders’ favour:
Human intuition: Emotional pressure, urgency, or odd phrasing can still trigger suspicion.
Verification culture: Singapore’s emphasis on “pause and verify” practices provides a valuable safeguard.
Technological defence: Banks and government agencies are building detection systems for deepfakes and voice fraud.
Law enforcement support: Singapore’s strict regulations and active policing make the environment less favourable to scammers compared with many countries.
If you receive a suspicious call, message, or video, verify it through trusted channels, even if it looks real, such as:
ScamShield Helpline: 1799
Scam.SG: Check a company’s credibility before engaging with it
Agree on a unique phrase known only within your household. Anyone requesting urgent help should be able to use this phrase to prove their identity.
AI scams often push victims with urgent requests for money or instructions to keep matters confidential. Treat both as warning signs.
Think carefully before posting voice or video clips online. These provide raw material for AI voice cloning and deepfakes.
The ScamShield app can filter out known scam calls and messages. Multi-factor authentication (MFA) on bank accounts, email, and messaging apps provides an additional security layer.
Older family members may be more trusting of videos or phone calls. Teach them to verify information and avoid making payments without confirmation.
Report fake content on social media platforms and contact the Singapore Police Force if you suspect an impersonation attempt.
Banks, telcos, and regulators are developing AI-powered detection systems for synthetic voices and deepfake media. The Infocomm Media Development Authority (IMDA) continues to strengthen caller ID authentication and block spoofed numbers.
The ScamAlert.sg website and initiatives like “Spot the Signs. Stop the Crime.” serve as regular reminders to remain cautious. High-profile examples, such as the Tay Ping Hui case, further raise public awareness.
Since many scams originate overseas, Singapore works closely with regional and global enforcement bodies to trace syndicates and disrupt their networks.
Experts warn of several developments on the horizon:
Hyper-realistic deepfakes may make full video conferences or live-streams appear genuine even when fabricated.
Automated personalisation could allow criminals to target thousands of Singaporeans with messages referencing their careers, families, or recent purchases.
Biometric deception may arise in future scams, attempting to bypass fingerprint or face recognition technology used in banking and digital services.
While AI has brought immense benefits to society, it has also empowered scammers with unprecedented tools. In Singapore, where digital payments and online communications dominate daily life, AI-powered impersonation scams pose a growing threat in 2025.
The question of whether AI will make scams impossible to detect does not have a simple answer. Scams will certainly become harder to identify, but vigilance, verification habits, and strong enforcement mean detection is still achievable.
Ultimately, the most powerful defence is awareness. By staying informed, applying safeguards, and relying on official channels, Singaporeans can continue to protect themselves even in a world where faces and voices can be convincingly faked.
Stay smart, stay safe, stay vigilant with Scam.SG.