Who Oversees Artificial Intelligences?

Published on February 11, 2026 | Translated from Spanish
Infographic showing a traffic light with red, amber, and green lights superimposed on a digital brain, symbolizing the regulation of AI by risk levels.

Have you ever wondered who controls whether artificial intelligence systems act correctly? It's similar to having very powerful tools, but without clear guidance on how to handle them. Now, public institutions are beginning to define rules for this new digital environment. 🤖

The European regulation that classifies risk

The European Union has enacted one of the first comprehensive laws regulating AI. It takes a risk-based approach. Some uses, such as real-time biometric identification in public spaces, are banned with only narrow exceptions. Others, like conversational assistants, require companies to inform users that they are interacting with a machine. The goal is to anticipate and prevent negative consequences.

Key points of the regulation:
  • Prohibition of unacceptable-risk uses: Systems such as real-time facial identification for mass surveillance are banned.
  • Mandatory transparency: Chatbots and systems that generate content must disclose their automated nature.
  • Focus on application: The law evaluates the specific purpose, not the technical tool in the abstract.
The hammer itself is not prohibited; using it to break a window is. An algorithm can be benign when sorting images and risky when granting credit.

The focus is on use, not technology

A fundamental aspect of this regulation is that it does not attempt to regulate AI as a concept, but rather its practical implementation. The same machine learning system can be harmless when cataloging files and potentially harmful when it selects job candidates without human oversight to review its decisions. The distinction is crucial.

Examples of how the evaluation changes:
  • Classifying pet photos: Low-risk use, generally permitted.
  • Scoring social assistance applications: High-risk use, subject to strict auditing and control requirements.
  • Generating creative texts: Limited-risk use, requires labeling of automated content.
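The risk-tier logic described above can be sketched as a small lookup. This is an illustrative toy, not the legal text: the tier names, use-case strings, and obligations are simplified assumptions for the sake of the example.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified tiers loosely modeled on the EU's risk-based approach."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict auditing and control requirements"
    LIMITED = "must label automated content"
    MINIMAL = "generally permitted"

# Hypothetical mapping: the tier depends on the use case, not on the model.
USE_CASE_TIERS = {
    "real-time facial identification for mass surveillance": RiskTier.UNACCEPTABLE,
    "scoring social assistance applications": RiskTier.HIGH,
    "generating creative texts": RiskTier.LIMITED,
    "classifying pet photos": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Unknown uses default to the lowest tier in this toy example only.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{tier.name}: {tier.value}"

print(obligations("classifying pet photos"))
print(obligations("scoring social assistance applications"))
```

Note that the same function applied to the same underlying technology returns different obligations depending solely on the declared use, which is exactly the point the regulation makes.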

Looking to the future

The main challenge does not lie in machines rising up, but in society not knowing how to manage their power responsibly. These first laws represent the equivalent of putting on a seat belt before driving at high speed. Establishing clear limits now is essential to innovate with confidence and safety. 🔒