
X Moderates Its Grok AI Following Regulatory Pressure in the UK
The platform X, owned by Elon Musk, is implementing changes to align with British legislation. The move follows strong criticism of how easily its Grok artificial intelligence tool could be used to produce fake material. Public intervention by high-ranking figures and an active regulatory investigation have forced the company to review its policies. 🚨
The Spark That Ignited the Investigation
The Prime Minister of the United Kingdom, Keir Starmer, expressed alarm at the sexually explicit deepfakes that could be created with this technology. His statement, combined with the opening of an investigation by the regulator Ofcom, put direct pressure on X. The regulator is assessing whether the social network breached its obligations by failing to protect users from harmful AI-generated content.
Key Points of the Regulatory Pressure:
- Public statement from the Prime Minister on the risks of deepfakes.
- Opening of a formal investigation by Ofcom to assess a possible legal breach.
- The focus is on how the platform manages and moderates this potentially illegal material.
It seems that even the most uninhibited artificial intelligences must learn to behave when a prime minister calls them out.
Technical and Policy Changes on the Platform
In response, xAI, the company behind Grok, modified its product's usage policy. The main goal is to limit the tool's ability to produce realistic images of people without their permission. The adjustment aims to prevent users from generating defamatory content or fake intimate material. X now states that it is working to ensure its AI system complies with local laws, although it has not revealed the full extent of the technical measures adopted.
Actions Implemented by X:
- Review and adjust the usage policy of the Grok AI to restrict certain outputs.
- Specifically limit the generation of hyperrealistic portraits without consent.
- Work to ensure compliance with British legislation on content.
Consequences and Future Regulation
Ofcom's investigation is ongoing. If the regulator determines that X failed to act with due diligence to protect its users, the company could face significant financial penalties. The case sets a precedent for how platforms must manage content created by their own AI tools and marks a turning point in the accountability of big tech.