Meta adds anti-scam tools and expands AI-led enforcement with a 90% advertiser-verification goal
Original: Meta Launches New Anti-Scam Tools, Deploys AI Technology to Fight Scammers and Protect People
Meta announced a new anti-scam package on March 11, 2026, combining user-facing warnings with expanded AI-led detection. On WhatsApp, the company will warn users when behavioral signals suggest a device-linking request may be suspicious. On Facebook, it is testing alerts for suspicious friend requests. On Messenger, Meta is expanding advanced scam detection to more countries; when a conversation with a new contact matches known scam patterns, the flow asks whether the user wants to share recent chat content for an AI scam review.
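The Messenger flow described above is essentially a gated opt-in: a lightweight pattern check on new-contact conversations decides whether to surface the AI-review prompt. Meta has not published its actual signals or thresholds, so the sketch below is purely illustrative; the pattern list and the `should_offer_scam_review` helper are hypothetical.

```python
import re

# Hypothetical examples of scam-like phrasing; Meta's real detection
# signals (behavioral, network, and content-based) are not public.
SCAM_PATTERNS = [
    re.compile(r"verification code", re.IGNORECASE),
    re.compile(r"urgent.*(wire|transfer|gift card)", re.IGNORECASE),
    re.compile(r"guaranteed (returns|profit)", re.IGNORECASE),
]

def should_offer_scam_review(is_new_contact: bool, messages: list[str]) -> bool:
    """Return True when a new-contact conversation matches a known
    scam pattern, i.e. when the client would prompt the user to opt
    in to sharing recent chat content for an AI scam review."""
    if not is_new_contact:
        return False
    return any(p.search(m) for m in messages for p in SCAM_PATTERNS)

# Example: a new contact pushing an urgent gift-card payment trips the check.
print(should_offer_scam_review(True, ["It's urgent, send a gift card now"]))
```

The key design point mirrored here is consent ordering: the cheap pattern match runs client-side first, and chat content is only shared for deeper AI review after the user explicitly opts in.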
The release also stands out for the operating metrics Meta disclosed. The company said it wants verified advertisers to drive 90% of ad revenue by the end of 2026, up from 70% today. It also reported that it removed more than 159 million scam ads last year, with 92% taken down proactively before anyone reported them. In India, Meta said it banned more than 12.1 million pieces of ad content in 2025 for fraud, scam, and deceptive-practices violations, with more than 93% removed proactively. It also said it took down 10.9 million Facebook and Instagram accounts associated with criminal scam centers and disabled more than 150,000 accounts tied to scam-center networks in Southeast Asia through a global law-enforcement operation.
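A back-of-envelope pass over the company-reported global figures shows what the 92% proactive rate implies in absolute terms. This is simple arithmetic on Meta's own disclosed numbers, not independent data; the variable names are ours.

```python
# Company-reported figures from Meta's March 11, 2026 announcement.
SCAM_ADS_REMOVED = 159_000_000   # scam ads removed last year, globally
PROACTIVE_RATE = 0.92            # share taken down before any user report

proactive = SCAM_ADS_REMOVED * PROACTIVE_RATE
after_reports = SCAM_ADS_REMOVED - proactive

print(f"Removed proactively:       ~{proactive / 1e6:.1f}M")
print(f"Removed after user reports: ~{after_reports / 1e6:.1f}M")
```

Even at a 92% proactive rate, roughly 12.7 million scam ads were removed only after users encountered and reported them, which gives a sense of the residual exposure at this scale.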
Why it matters
- Platform trust and safety is increasingly being run as a full operating system that combines identity checks, user warnings, ad controls, AI classifiers, and external enforcement.
- The scale of the numbers suggests scam defense is now a core systems problem for consumer internet platforms, not just a moderation edge case.
- Meta’s advertiser-verification target shows trust controls moving closer to the revenue engine itself.
These figures are Meta’s own disclosures, so they should be read as company-reported enforcement data rather than an independent audit. Even so, the announcement is important because it shows how consumer platforms are applying AI in operational safety, especially against impersonation, link deception, and account-takeover patterns. As scam operations become more organized and cross-platform, AI-backed trust systems are becoming part of the core competitive infrastructure behind messaging, ads, and payments.