The Subtle Bias of AI: How Autocomplete Shapes Your Opinion

Published on March 13, 2026 | Translated from Spanish

A study from Cornell University, published in Science Advances, reveals a worrying phenomenon: language models like ChatGPT can subtly bias users' opinions, even when users reject their suggestions. In experiments with over 2,500 participants, exposure to biased autocomplete suggestions on social topics shifted stances by almost half a point toward the AI's position. Because the suggestions are perceived as balanced, the effect operates subliminally, posing a real risk of homogenizing public debate and affecting collective decisions.

Image: A hand types on a keyboard while biased text suggestions appear on the screen.

Influence mechanism and its impact on the digital workflow 🤖

The danger is not that the AI imposes an idea, but that it subtly narrows the frame of thought. By offering biased completions, it conditions the user's intellectual starting point. For 3D and digital professionals, this is critical when using tools with integrated AI: assistants in modeling software, prompt generators for concept art, or algorithms that suggest textures or compositions. A bias in these aids can unconsciously steer a visual project, a technical report, or the narrative of a data visualization, limiting creativity and objectivity from the ideation phase onward.

Ethical responsibility in creation and conscious use ⚖️

As creators and advanced users, we have a dual responsibility. First, to be critical of the tools we use, questioning the neutrality of their suggestions. Second, to assume an ethical role if we develop or implement these AIs in our environments. Disclaimers of responsibility are insufficient. We must advocate for transparency in model training and cultivate a skeptical and proactive attitude, ensuring that technology expands, rather than restricts, our perspective and that of our audience.
