Meta rolls out AI support and stronger content enforcement across Facebook and Instagram

Original: Boosting Your Support and Safety on Meta's Apps With AI

AI · Mar 22, 2026 · By Insights AI

Meta said on March 19 that it is expanding AI-driven support and safety systems across Facebook and Instagram. The company is rolling out the Meta AI support assistant in countries and territories where Meta AI is available, placing it inside the mobile apps and desktop Help Center so users can handle account problems without leaving the product.

Meta says the assistant is meant to do more than answer FAQs. It can help with scam and impersonation reports, explain why content was removed, surface appeal options, manage privacy settings, reset passwords, and update profile settings. The company says the system typically replies in under five seconds. Meta is also starting to use the assistant for login help in select cases in the US and Canada, with broader expansion planned.

The larger part of the announcement is Meta's plan to move more content enforcement to advanced AI systems over the next few years. Early internal tests cited in the post say the systems found and mitigated 5,000 scam attempts per day that no review team had previously caught, cut user reports of the most impersonated celebrities by more than 80%, caught twice as much violating adult sexual solicitation content, and reduced mistakes by more than 60%. Meta also said broader testing lowered views of scam and serious-violation ads by 7%.

What Meta says changes next

  • The new enforcement systems can operate in languages spoken by 98% of people online, up from coverage of roughly 80 languages before.
  • Meta says it will reduce reliance on third-party vendors for content enforcement as internal AI systems mature.
  • The company also says humans will remain responsible for high-risk decisions such as account disablement appeals and reports to law enforcement.

The significance of the update is its scale. Meta is not describing an experimental chatbot feature; it is outlining how AI will be embedded into frontline support, fraud response, and moderation operations across apps with billions of users. That makes this a meaningful operational shift, even before the longer-term enforcement transition is complete.




© 2026 Insights. All rights reserved.