A new front of disinformation is emerging with AI assistants like ChatGPT. When performing live web searches to answer uncommon queries, these systems can incorporate false data from manipulated pages. The problem is that the answer is presented as consolidated fact, without the side-by-side contrast of sources a traditional search engine offers, which makes hoaxes easier to spread.
The Failure in the RAG Mechanism and Source Validation
Technically, the problem worsens in systems that use RAG (Retrieval-Augmented Generation). When the query is outside the model's base knowledge, it retrieves snippets from the web. Without a robust filter that validates the authority or truthfulness of the source, a well-written text on a site with a serious appearance is integrated as context. The response generated from that context acquires a factual tone, without nuances or warnings about its possible falsity.
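The failure mode above can be sketched in a few lines. The snippet below is a toy illustration, not any real framework's API: `build_prompt`, `TRUSTED_DOMAINS`, and the sample results are all hypothetical. It shows how a naive RAG pipeline folds every retrieved snippet into the prompt as unqualified fact, and how even a crude source filter changes what reaches the model.

```python
# Toy illustration of the RAG failure described above. All names and
# data here are hypothetical, invented for this sketch.
from urllib.parse import urlparse

# Simulated web results for an uncommon query: one snippet comes from
# a fresh, unknown site carrying a fabricated claim.
RESULTS = [
    {"url": "https://example-press.com/specs",
     "text": "Chip X ships with 12 cores."},
    {"url": "https://made-up-rumors.site/leak",
     "text": "Chip X ships with 64 cores."},
]

def build_prompt(query, snippets):
    """Naive RAG: every retrieved snippet becomes context, stated as fact,
    with no authority check and no warning about provenance."""
    context = "\n".join(s["text"] for s in snippets)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# A crude mitigation: keep only snippets from an allowlist of domains.
# Real systems would need to score authority, freshness, and agreement
# across independent sources rather than rely on a static list.
TRUSTED_DOMAINS = {"example-press.com"}

def filter_by_source(snippets):
    return [s for s in snippets
            if urlparse(s["url"]).hostname in TRUSTED_DOMAINS]

query = "How many cores does Chip X have?"
naive = build_prompt(query, RESULTS)
filtered = build_prompt(query, filter_by_source(RESULTS))

print("64 cores" in naive)     # the fabricated claim reaches the model
print("64 cores" in filtered)  # the allowlist drops it
```

The point of the sketch is that nothing in the naive path distinguishes a reputable outlet from a site created yesterday: both snippets carry equal weight in the prompt, so the model's fluent answer inherits whichever claim the retriever happened to surface.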
Your New Digital Intern Believes Everything It Reads on the Internet
It's like having an overly enthusiastic intern who, eager to impress, devours the first article it finds on any blog and presents it to you as the absolute truth of the industry. You ask about a hardware rumor and, with total solemnity, it cites facts from a website created yesterday. The irony is that we trust its apparent objectivity, when in reality it has the credulity of someone who has just discovered the web. One step forward in technology, two steps back in common sense.