Meta Faces Criticism for Its Handling of Deepfakes in Conflicts

Published on March 11, 2026 | Translated from Spanish

Meta's Oversight Board has issued a harsh report criticizing the company's insufficient systems for moderating AI-generated content, especially deepfakes, during conflicts such as the war in Iran. After analyzing a fake video that purported to show damage in Israel, the board calls for urgent reform, arguing that the current model, which relies heavily on user self-reporting, cannot keep pace with the rapid spread of disinformation on platforms like Facebook, Instagram, and Threads.

[Image: Distorted Meta logo against a background of binary code and conflict flags.]

Technical Audit: From Forensic Detection to C2PA Labeling 🔍

The Board's recommendations cover the full deepfake audit cycle. On the technical side, they call for better proactive detection tools: advanced digital forensics algorithms capable of spotting inconsistencies in lighting, skin texture, or facial geometry, similar to those used in 3D analysis. In parallel, they promote the C2PA standard, which acts as a digital notary by embedding metadata about a piece of content's origin. The key point is that this labeling must be accessible and clear to ordinary users, not merely technical. All of this contrasts with the current reactive audit model, in which forensic investigators must reverse-engineer a manipulation with no prior clues.
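To make the contrast concrete, here is a minimal, hypothetical sketch in Python of what a provenance-first triage step could look like. It assumes a C2PA-style manifest has already been parsed into a dictionary; the field names (actions, digital_source_type, signature_valid), the triage function, and the label wording are simplified illustrations for this article, not Meta's actual pipeline or the exact C2PA specification. (The value "trainedAlgorithmicMedia" does exist in the IPTC digital source type vocabulary that C2PA draws on.)

```python
"""Sketch: turning C2PA-style provenance data into a clear user-facing label.

Assumes the asset's manifest was already extracted and parsed into a dict;
in reality, C2PA credentials are signed binary structures embedded in the
file. All field names below are simplified illustrations.
"""

from dataclasses import dataclass
from typing import Optional


@dataclass
class ProvenanceLabel:
    """What the platform would surface next to a post."""
    text: str          # plain-language label shown to users
    high_risk: bool    # whether to attach a "High Risk" context warning


def triage(manifest: Optional[dict], conflict_related: bool) -> ProvenanceLabel:
    """Map raw provenance data to an accessible label.

    manifest: parsed C2PA-style data, or None if the asset carries no
    credentials. conflict_related: whether an upstream classifier matched
    the post to an active conflict topic, which raises the stakes.
    """
    if manifest is None:
        # No provenance at all. In a conflict context, this is the case the
        # Board says should be escalated proactively (e.g., to a forensic
        # review queue) rather than waiting for user self-reporting.
        return ProvenanceLabel(
            text="No origin information available",
            high_risk=conflict_related,
        )

    # The real manifest records signed "actions" (created, edited, etc.)
    # from each tool in the chain; here we only look for an AI-generation
    # marker using the IPTC term C2PA builds on.
    ai_generated = any(
        action.get("digital_source_type") == "trainedAlgorithmicMedia"
        for action in manifest.get("actions", [])
    )
    signature_valid = manifest.get("signature_valid", False)

    if not signature_valid:
        # A broken signature is itself a red flag: the asset may have been
        # altered after it was credentialed.
        return ProvenanceLabel(
            text="Origin credentials present but could not be verified",
            high_risk=conflict_related,
        )
    if ai_generated:
        # Clear, non-technical wording: the Board's point is that the label
        # must be legible to ordinary users, not just to auditors.
        return ProvenanceLabel(text="Made with AI", high_risk=conflict_related)

    return ProvenanceLabel(text="Origin verified", high_risk=False)


# Example: an unsigned clip posted during an active conflict is flagged
# proactively instead of waiting for user reports.
print(triage(None, conflict_related=True))
```

The design choice this sketch tries to capture is the one the Board emphasizes: the expensive forensic machinery runs behind the scenes, while the output is reduced to a short sentence and a single risk flag that any user can act on.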

Towards a Community Standard for Digital Integrity 🤝

The call for a dedicated community standard for AI content is the report's reflective core. The point is not only to improve algorithms but to collectively define what constitutes a deceptive deepfake and what level of manipulation is acceptable. This shifts the discussion from the technical realm to the social one and demands transparency in how sanctions are applied. Auditing ceases to be Meta's task alone and becomes a framework of shared responsibility, in which the clarity of C2PA labels and of "High Risk" warnings is the critical interface with the community.

Would you use AI to detect AI?