Major AI Chatbots Fail to Detect Videos Created with Sora

Published on January 23, 2026 | Translated from Spanish
Image: Screenshot comparing a real video with one generated by OpenAI's Sora, showing how hard they are to tell apart at a glance.

A recent report from NewsGuard exposes a major flaw in the most well-known artificial intelligence assistants: they fail to recognize when a video was produced by OpenAI's Sora generator. Even ChatGPT, built by the same company, fails at the task. Researchers tested several models on a mix of authentic and fabricated footage, and the results point to a notable limitation in discerning the origin of visual content. 🤖

Evaluations Show a Gap in Recognition

The researchers showed the chatbots ten clips, half real and half made with Sora, and asked them to determine each clip's origin. None of the tested systems, which included versions of ChatGPT, Google Gemini, Microsoft Copilot, Meta AI, and Grok, exceeded eighty percent accuracy. On several occasions the models refused to analyze the material, or gave generic advice on how to identify synthetic content instead of applying that knowledge to the clip in front of them.

Evaluated Models and Their Performance:
  • ChatGPT (OpenAI): Failed to recognize videos generated by Sora, a product of its own maker.
  • Google Gemini and Microsoft Copilot: Showed low accuracy rates and evasive responses.
  • Meta AI and Grok: Frequently refused to analyze or offered inapplicable theoretical guides.
General-purpose language models do not effectively transfer their knowledge to this specific video-verification task; a simplified harness for this kind of test is sketched below.
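
To make the setup concrete, here is a minimal Python sketch of this kind of evaluation. It is not NewsGuard's actual protocol: it samples frames from each clip, sends them to a vision-capable chatbot API, and scores one-word verdicts. The file paths, labels, prompt, and model name are all illustrative assumptions.

```python
# Hypothetical re-creation of the test described above: sample frames from
# each video, ask a vision-capable model for a REAL/AI verdict, score it.
import base64

import cv2  # pip install opencv-python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed layout: (path, ground_truth) where True means "AI-generated".
CLIPS = [("clips/real_01.mp4", False), ("clips/sora_01.mp4", True)]

def sample_frames(path, n=4):
    """Grab n evenly spaced frames from a video as base64-encoded JPEGs."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(n):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // n)
        ok, frame = cap.read()
        if ok:
            ok, buf = cv2.imencode(".jpg", frame)
            frames.append(base64.b64encode(buf.tobytes()).decode())
    cap.release()
    return frames

def ask_model(frames):
    """Ask for a one-word verdict; return True if the model answers AI."""
    content = [{"type": "text",
                "text": "These frames come from one video. Answer with "
                        "exactly one word, REAL or AI, for its likely origin."}]
    content += [{"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{f}"}}
                for f in frames]
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model
        messages=[{"role": "user", "content": content}],
    )
    return "AI" in resp.choices[0].message.content.upper()

correct = sum(ask_model(sample_frames(p)) == label for p, label in CLIPS)
print(f"accuracy: {correct}/{len(CLIPS)}")
```

A real benchmark would need many more clips, repeated trials, and per-model adapters; the point of the sketch is only to show how little machinery such a test requires.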

The Risk to Online Information Is Clear

This inability to verify the authenticity of a video poses an immediate challenge to combating disinformation on the internet. The tools that many rely on to scrutinize content are not ready for the hyperrealistic material that Sora can produce. The scenario underscores the urgency of creating more robust and specialized detection methods. 🚨

Practical Implications of This Limitation:
  • Users cannot rely on these assistants to filter AI-generated fake content.
  • It makes deceptive material easier to create and distribute at scale.
  • The advice chatbots give about watermarks and frame anomalies remains theoretical; they do not apply it in practice (a minimal metadata check is sketched after this list).
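
As an illustration of the watermark point, here is a minimal sketch of the metadata check the chatbots describe but never run. OpenAI has said that Sora output embeds C2PA Content Credentials; this script uses exiftool (which must be installed separately) to look for traces of them. The field-name heuristic is an assumption about how exiftool labels JUMBF/C2PA data, and the approach fails on any file whose credentials were stripped by re-encoding, which is part of why the theory rarely helps in practice.

```python
# Sketch of a C2PA "Content Credentials" check via exiftool.
# Assumptions: exiftool is on PATH, and the file still carries its
# credentials (re-encoding by a social platform typically strips them).
import json
import subprocess
import sys

def has_content_credentials(path):
    out = subprocess.run(
        ["exiftool", "-json", "-G", path],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(out)[0]
    # C2PA manifests ride in JUMBF boxes; the key/value matching below is
    # a heuristic assumption about how exiftool names that metadata.
    return any("jumbf" in key.lower() or "c2pa" in str(value).lower()
               for key, value in tags.items())

if __name__ == "__main__":
    path = sys.argv[1]
    verdict = "has" if has_content_credentials(path) else "has no"
    print(f"{path} {verdict} embedded Content Credentials")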

A Paradox of Modern Artificial Intelligence

It is paradoxical that artificial intelligence, so often promoted as the solution to contemporary problems, cannot identify the output of its own most advanced creation. Chatbots give extensive explanations of how detection works, yet fail when faced with a practical case. The finding underscores the need for a different, more specialized approach to building tools that can truly protect the integrity of online visual information.