Meta’s deepfake moderation isn’t good enough, says Oversight Board

March 10, 2026 Jess Weatherbed

Meta’s Oversight Board wants the company to start taking AI labeling seriously to protect its users from online misinformation. | Image: Cath Virginia / The Verge, Getty Images

Meta's methods for identifying deepfakes are "not robust or comprehensive enough" to handle how quickly misinformation spreads during armed conflicts like the Iran war. That's according to the Meta Oversight Board - a semi-independent body that guides the company's content moderation practices - which is now calling on Meta to overhaul how it surfaces and labels AI-generated content across Facebook, Instagram, and Threads.

The call for action stems from an investigation into a fake AI video of alleged damage to buildings in Israel that was shared on Meta's social platforms last year, but the Board says its recommendations are particularly r …

Read the full story at The Verge.
