OpenAI Appoints Aleksander Madry to Lead Its AI Risk Preparedness Team

Published on January 06, 2026 | Translated from Spanish
Image: Sam Altman, CEO of OpenAI, announcing the appointment of Aleksander Madry as head of the AI risk preparedness team.


OpenAI is reorganizing its internal structure to proactively address the challenges posed by future artificial intelligence systems. CEO Sam Altman has appointed Aleksander Madry, an expert from MIT, to lead a new team dedicated to evaluating and managing the dangers associated with advanced AI. The move reflects the company's growing priority: ensuring that the development of increasingly powerful technology remains safe. 🤖

The Central Mission of the Preparedness Team

The group led by Madry is tasked with analyzing catastrophic risks and establishing mechanisms to mitigate them. Its work focuses not on current models but on anticipating the capabilities of future systems. The goal is to create a framework that allows OpenAI to innovate responsibly without neglecting the possible negative effects of its own success.

Key Evaluation Areas:
  • Analyze the potential of models to assist in creating chemical or biological weapons.
  • Evaluate their capacity to deceive humans or manipulate systems.
  • Study the risks associated with operating autonomously, without effective human supervision.

While some teams work to create intelligence that surpasses human intelligence, another makes sure that, if they succeed, it doesn't decide we're a glitch in the system. It's the classic division between R&D and damage control.

A Strategic Focus on Long-Term Safety

The creation of this team responds directly to concerns about superintelligent AI. Aleksander Madry, a professor at MIT whose career has focused on AI robustness and safety, brings a crucial academic and technical perspective. His leadership aims to institutionalize the anticipation of high-impact scenarios within the company's culture.

Work Structure and Reporting:
  • The team will produce quarterly reports for OpenAI's board of directors.
  • These reports will enable management to make informed decisions about the development and deployment of new systems.
  • The goal is to integrate risk assessment into the innovation cycle itself, not as an afterthought.

The Balance Between Innovation and Caution

This decision marks a significant step in OpenAI's evolution, seeking to balance its ambition to develop powerful AI with the obligation to do so safely. By appointing a figure of Madry's stature, the company sends a clear message about the seriousness with which it addresses long-term existential risks. The success of this team will be measured by its ability to anticipate dangers that do not yet exist, a fundamental challenge for the future of technology. ⚖️