Anthropic Delays Glasswing, Its Vulnerability-Hunting AI

Published on April 24, 2026 | Translated from Spanish

Last week, Anthropic unveiled Project Glasswing, an AI model that discovers security flaws in software so effectively that the company chose to delay its public release. Instead, it has granted early access to giants like Apple, Microsoft, Google, and Amazon so they can fix the bugs before malicious actors exploit them. The Mythos Preview model, which underpins Glasswing, demonstrated an unprecedented ability to identify vulnerabilities.

[Image: a futuristic server room with a large, glowing cybernetic eye in the center, surrounded by the logos of Apple, Microsoft, Google, and Amazon.]

How Automated Bug Detection Works 🛡️

Glasswing employs deep learning techniques to analyze source code and binaries, identifying patterns that typically signal vulnerabilities such as buffer overflows or SQL injection. Unlike traditional tools, the model is not limited to known signatures; it can infer entirely new classes of flaws. Its success rate surpasses that of any commercial scanner, which has led Anthropic to be cautious: the company fears that, in the wrong hands, the tool could cause more harm than good.
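To make the contrast concrete, here is a minimal sketch of how a *traditional*, signature-based scanner works: it matches source lines against a fixed list of risky patterns. This is a toy illustration only; Glasswing's actual approach is a learned model, and the pattern names and `scan` function below are hypothetical.

```python
import re

# Toy signature list -- the kind of fixed patterns a conventional scanner
# relies on, and exactly what Glasswing reportedly is NOT limited to.
PATTERNS = {
    "possible SQL injection": re.compile(
        r"""execute\(\s*["'].*["']\s*[+%]"""   # query built via string concat/format
    ),
    "possible buffer overflow": re.compile(
        r"\b(strcpy|gets|sprintf)\s*\("        # classic unsafe C string functions
    ),
}

def scan(source: str):
    """Return (line_number, finding) pairs for lines matching a known signature."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings
```

A scanner like this only flags what its authors anticipated; the article's claim is that Glasswing goes further by inferring vulnerability classes that no such signature list contains.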

When AI Finds Bugs Better Than Your Boss 😅

So now it turns out that an artificial intelligence is better at finding security holes than your company's QA team. Anthropic, in an act of uncommon responsibility, has decided not to release Glasswing to the public. Instead, it has handed the tool to the big tech companies. The result? Apple, Microsoft, and Google will have to step up their game and fix their own bugs before the hackers find them. So if your code has more holes than a block of Swiss cheese, pray you don't get audited.