Under Singapore's new Online Criminal Harms Act, Meta is introducing facial recognition, AI scam detection, and real-time alerts on WhatsApp and Messenger to combat surging digital fraud in Southeast Asia.
As online scams surge in Singapore, costing victims nearly $500 million (₹4,439 crore) in just six months, Meta is stepping up its fight against fraud. Under the Online Criminal Harms Act, Singapore’s government has directed the tech giant to implement facial recognition, AI-powered message reviews, and screen-sharing warnings on WhatsApp and Messenger to protect users from impersonation and cyber fraud.
Singapore’s Digital Fraud Epidemic
Singapore, once hailed as one of the world’s safest digital economies, is now battling an unprecedented wave of scams. Police data shows over 20,000 online scam cases were reported in the first half of 2025 alone — a record high driven by job hoaxes, fake investment schemes, and impersonation of government officials.
In response, regulators invoked the Online Criminal Harms Act for the first time, requiring Meta to take immediate steps to strengthen verification systems and prevent fraudulent activity across its platforms.
Meta’s New Anti-Scam Features on WhatsApp and Messenger
To comply with the directive, Meta has begun rolling out two major security features across its messaging apps, designed to detect scams and warn users before fraud occurs.
Screen Sharing Warnings on WhatsApp:
When users attempt to share their screen during a video call with an unknown contact, WhatsApp will now display an alert. The warning helps stop scammers from viewing banking apps, login credentials, or Singpass details that would otherwise be visible during the call.
AI Scam Detection in Messenger:
A new AI-driven message review system detects suspicious patterns in chat messages — such as fake job offers or payment requests. When triggered, users receive an alert explaining why a chat seems fraudulent, along with recommendations to block or report the contact. Meta says this technology can flag scams like a “$50/hour remote job” that demands a $200 “background check fee.” These systems, currently live in Singapore, will expand globally in the coming months.
Facial Recognition and AI-Powered Fraud Detection
Beyond chat monitoring, Meta has been instructed to deploy advanced facial recognition and AI verification systems to identify impersonation attempts — especially cases involving public officials or financial institutions.
According to a Meta spokesperson, these efforts combine AI algorithms with human reviewers to swiftly remove fake profiles, fraudulent ads, and scam content. In the first half of 2025, Meta removed 68,000 fake Facebook accounts and 3,000 Instagram profiles for violating fraud policies — with 75% identified proactively before user reports.
Regional Impact: Southeast Asia Unites Against Scammers
Singapore’s bold action has inspired a regional crackdown on cybercrime across Southeast Asia. TikTok has launched a Scam Prevention Hub to educate users on spotting phishing and impersonation attempts. Carousell, a leading marketplace, now verifies sellers using national ID databases to combat fraudulent listings.
Meanwhile, Meta continues to work with law enforcement agencies, telecom providers, and advertisers to track and disable scam networks operating across platforms like Facebook, Instagram, and WhatsApp.
“We’re not stopping people from sharing their screens or connecting online. But we want to remind users — share only with people you trust,” said Clara Koh, Meta’s Head of Public Policy for Singapore and ASEAN.
With online scams evolving rapidly through AI deepfakes, voice cloning, and social engineering, Meta’s latest initiatives represent a major step toward restoring user trust and protecting digital ecosystems. The rollout of these AI-driven safety tools marks a milestone in Singapore’s vision of a cyber-resilient smart nation, one where innovation and security go hand in hand.
