AI Toys: Risks in Child Interaction and Need for Regulation

Published on March 13, 2026 | Translated from Spanish

Toys that use artificial intelligence to converse with children are gaining ground, but researchers warn of their risks. A study from the University of Cambridge observed concerning interactions with a toy called Gabbo, which misinterpreted children and failed to respond to their emotions. Experts are calling for strict regulation to manage the risks, not for a halt to innovation.

Image: A small child talks to a robot-shaped toy; the child's confused expression under the toy's cold light suggests emotional disconnection.

The technical gap: when the language model fails in social context 🤖

The study details that Gabbo, a toy built on a language model, lacks genuine social understanding. It cannot take part in the make-believe play that is key to child development, and it misreads emotional cues; in one case, it ignored a child's explicit sadness. These systems process words but do not grasp interpersonal context or the consequences of their responses, which can produce misinformation or harmful interactions.
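To make that gap concrete, here is a minimal, hypothetical Python sketch. None of the names (generate_reply, SAD_CUES, guarded_toy) come from Gabbo or the Cambridge study; they are invented for illustration. The point is structural: a language model generates fluent words whether or not the child is distressed, so any sensitivity to emotion has to be engineered around it explicitly.

```python
# Hypothetical sketch of the gap the study describes: a conversational
# toy that produces words without checking the child's emotional state.
# All names here are illustrative inventions, not real product code.

SAD_CUES = {"sad", "lonely", "crying", "scared", "hurt"}

def generate_reply(utterance: str) -> str:
    """Stand-in for a language model: fluent output, no social context."""
    return "Did you know some dinosaurs were as small as chickens?"

def naive_toy(utterance: str) -> str:
    # Processes words only: the child's sadness never shapes the reply.
    return generate_reply(utterance)

def guarded_toy(utterance: str) -> str:
    # One crude mitigation: detect explicit distress before replying
    # and fall back to a supportive, pre-scripted response instead.
    if set(utterance.lower().split()) & SAD_CUES:
        return "That sounds hard. Maybe we can tell a grown-up you trust?"
    return generate_reply(utterance)

if __name__ == "__main__":
    child = "I feel sad because my friend moved away"
    print("naive:  ", naive_toy(child))    # off-topic dinosaur fact
    print("guarded:", guarded_toy(child))  # acknowledges the emotion
```

Even a keyword filter this crude illustrates the study's underlying point: the model alone will not infer when a reply is harmful, so safety behavior has to be added deliberately, and regulation would decide how much of it is mandatory.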

Your new plastic friend: the psychologist who doesn't listen 🧸

Imagine buying a playmate that, when your child expresses sadness, recommends searching for cat pictures or abruptly changes the subject. That is the charm of conversational AI: dialogue without understanding, like an enthusiastic but painfully literal friend who doesn't grasp sarcasm, pain, or symbolic play. Perfect, if you want a child to learn that confiding their feelings to a machine can end in a random lesson about dinosaurs.