EU postpones AI rules for high-risk systems until 2027

Published on May 07, 2026 | Translated from Spanish

The European Union has decided to postpone until December 2027 the application of the obligations for artificial intelligence systems classified as high-risk. The measure, adopted after pressure from various industrial sectors, is intended to give companies more time to adapt to the new regulation. The extension affects areas such as employment, healthcare, and banking.

Image: A digital hourglass on a table, sand falling toward 2027, next to a document titled 'AI Act'.

Technical implications of the postponement in AI development 🛠️

The delay allows developers to adjust their models without the pressure of an immediate deadline. This includes implementing transparency mechanisms, bias evaluation, and data traceability. Companies can now certify their systems under the new framework at a less hurried pace, although they must still meet documentation and human-oversight requirements. The extension does not exempt them from the bans on prohibited practices, in force since February 2025.
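To make "bias evaluation" concrete: one common check teams run on, say, a hiring model is the demographic parity difference, the gap in favourable-outcome rates between two groups. This is an illustrative sketch only; the AI Act does not mandate this particular metric, and all names below are hypothetical.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs (1 = favourable, e.g. "hire")
    groups: parallel list of group labels (exactly two distinct values)
    """
    labels = sorted(set(groups))
    if len(labels) != 2:
        raise ValueError("expected exactly two groups")
    rates = []
    for g in labels:
        # Positive-prediction rate within this group
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(outcomes) / len(outcomes))
    return abs(rates[0] - rates[1])

# Hypothetical example: the model favours group "a" 3/4 vs group "b" 1/4,
# giving a gap of 0.5 — the kind of figure an audit report would document.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)  # 0.5
```

A gap near 0 suggests similar treatment across groups; how large a gap is acceptable, and which metric to use at all, remains a policy choice rather than something the regulation specifies.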

High-risk AI waits on the bench 🎢

In the end, the EU has opted for the classic "we'll do it tomorrow." Meanwhile, hiring algorithms and diagnostic machines will keep operating without a danger label. It's like boarding a roller coaster and having the operator tell you: don't worry, we'll check the seatbelt in 2027. Everything under control. At least until then.