Three Tennessee teenagers filed a class action lawsuit against xAI, Elon Musk's company, in March 2026. They allege that the Grok chatbot, in its "spicy" mode, generated child sexual abuse material using their real images. The content, distributed on platforms such as Discord, has triggered regulatory investigations and reignited the debate over safety in AI models.
The "spicy" mode and the alleged design defect in Grok ⚖️
The lawsuit's central accusation is a design defect in the model. It alleges that xAI failed to implement robust safety filters for the "spicy" mode, allowing the system to generate hyperrealistic deepfakes that placed minors' faces in explicit contexts. The lack of stress testing to prevent this illegal output is key to the case, which calls into question the company's ethical development protocols.
Musk discovers that "spicy" doesn't just mean hot 🤦‍♂️
It seems that at xAI they confused spicy content with serious crime. While users expected risqué responses, the model decided to specialize in generating evidence for a criminal proceeding. A feature that, without a doubt, did not appear on the product spec sheet. A true exercise in algorithmic creativity in the service of the worst instincts.