
ChatGPT and Other AIs Cite Grokipedia Despite Its Errors
Major conversational AI assistants, including ChatGPT, Google Gemini, and Microsoft Copilot, are increasingly citing Grokipedia, an encyclopedia generated autonomously by Elon Musk's company xAI without human editorial oversight. Recent data shows that its presence in AI responses has multiplied, raising alarm among experts about the potential to spread incorrect information. 🤖⚠️
The Growing Presence of Grokipedia in Automated Responses
A detailed analysis of millions of responses shows the scale of the phenomenon. In ChatGPT's case, Grokipedia was cited more than 263,000 times in a sample of 13.6 million queries. Although English Wikipedia remains a far more cited source, references to the automated encyclopedia have grown steadily since November 2025, a trend also observed in Google's and Microsoft's tools. The increase is concerning because it suggests these systems prioritize responding quickly over verifying their sources carefully.
Key Data from the Analysis:
- ChatGPT: 263,000+ citations to Grokipedia vs. 2.9 million to English Wikipedia (a quick rate calculation follows this list).
- Upward Trend: Citations steadily increasing since late 2025.
- Multiplatform Pattern: Similar behavior observed in Gemini and Copilot.
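To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes only the numbers reported above (the 13.6 million-query sample, 263,000 Grokipedia citations, and 2.9 million Wikipedia citations); the variable names are illustrative and not part of the original analysis.

```python
# Rough rate calculation from the article's reported ChatGPT figures.
# The counts below are the article's numbers; everything else is illustrative.

SAMPLE_QUERIES = 13_600_000        # queries analyzed for ChatGPT
GROKIPEDIA_CITATIONS = 263_000     # responses citing Grokipedia
WIKIPEDIA_CITATIONS = 2_900_000    # responses citing English Wikipedia

grok_rate = GROKIPEDIA_CITATIONS / SAMPLE_QUERIES
wiki_rate = WIKIPEDIA_CITATIONS / SAMPLE_QUERIES

print(f"Grokipedia cited in ~{grok_rate:.1%} of sampled queries")      # ~1.9%
print(f"Wikipedia cited in ~{wiki_rate:.1%} of sampled queries")       # ~21.3%
print(f"Wikipedia cited ~{wiki_rate / grok_rate:.0f}x more often")     # ~11x
```

On these figures, Grokipedia already appears in roughly one in fifty sampled ChatGPT responses, while English Wikipedia is cited about eleven times more often.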
The race to respond faster has led us to a point where machines refer to other machines to confirm what they say.
The Dangers of Using Unreviewed Sources
The core of the problem lies in Grokipedia's uncurated nature. Because it is generated without human editors to correct its content, it can retain factual errors and embed algorithmically biased perspectives. When large language models (LLMs) draw on this source, they not only replicate those flaws but also lend them a false air of authority by presenting them as references. This erodes trust in AI responses and makes it harder for users to know what is true.
Main Risks Identified:
- Spreading Errors: Factual flaws are perpetuated and amplified.
- Legitimizing Biases: Unmoderated perspectives are presented as factual information.
- Eroding Trust: The reliability of AI assistants is compromised.
A Cycle of Validation Between Machines
The current situation creates a closed information loop: AI systems, in their pursuit of efficiency and volume, consult and validate one another using automatically generated sources that no one supervises. This mechanism allows disinformation to spread at scale with an appearance of legitimacy. The conclusion is clear: without robust fact-checking processes, the usefulness and credibility of these conversational tools could be seriously undermined. 🚨