OpenAI defeated? ChatGPT draws heavily on Grokipedia content


Since its launch, OpenAI’s ChatGPT has been hailed as a benchmark in the field of artificial intelligence. Recent revelations, however, expose a worrying downside: the AI cites the controversial Grokipedia, Elon Musk’s AI-generated encyclopedia, raising questions about the reliability of the information it provides. Given that Grokipedia promotes questionable theories and operates autonomously, ChatGPT could tarnish its own reputation by drawing on that content, opening the door to information pollution that will be difficult to control.

The AI world is going through a turbulent period as recent revelations shed light on the information sources of ChatGPT, OpenAI’s flagship chatbot. According to an investigation by The Guardian, the latest version of the program repeatedly drew on the content of Grokipedia, an encyclopedia by Elon Musk whose reliability has raised serious concerns. This phenomenon challenges the myth of an all-powerful, neutral AI and reveals the extent of Grokipedia’s influence on the responses ChatGPT generates.

A troubling revelation

The latest update of ChatGPT, known as GPT-5.2, has been found to source several of its responses from Grokipedia. During tests conducted by The Guardian, the OpenAI bot cited the encyclopedia repeatedly on topics as sensitive as Iranian geopolitics and the biographies of controversial historical figures. Rather than relying on trustworthy references, it draws on a source whose quality remains unproven, fueling concerns about the reliability of the information it disseminates.

Grokipedia: an encyclopedia under high suspicion

Launched to rival Wikipedia, Grokipedia quickly came under fire. Researchers highlighted its tendency to rely on sources deemed problematic and often questionable. The accusations of disinformation surrounding Grokipedia are significant, particularly given its role in spreading Holocaust denial. In one especially troubling precedent, Grok, the AI model behind Grokipedia, generated explanations that downplayed the atrocities of the Holocaust. This questioning of established historiography sparked outrage in the academic community, with experts warning of the inherent danger of an AI that verifies itself without human oversight.

Problematic citations

This is not simply a matter of “light” content choices. These Grokipedia citations create a vicious cycle in which one AI cites another without any human validation, introducing information pollution into the current intellectual landscape. ChatGPT’s lack of transparency in its source selection amounts to nothing less than an invitation to disinformation.

The alarming impact on information perception

While ChatGPT appears to prioritize speed of information delivery, this raises questions about the accuracy users should expect. Journalists and researchers worry that information from Grokipedia, inaccurate and unreliable, will through sheer volume be accepted and incorporated into discourse considered valid. This tendency to prioritize speed over rigor risks eroding trust in information systems.

OpenAI’s responsibilities

Faced with these criticisms, OpenAI defends its product by pointing to a filtering system designed to limit access to harmful content. Yet doubts are growing about its ability to control the material its AI produces. By treating Grokipedia as one of its information sources, OpenAI unintentionally encourages the circulation of highly contested content, making its AI a key player in the spread of disinformation.

The question is not simply who wins or loses in this battle of ideas, but rather what role these algorithms will play in the future of a society where information is increasingly manipulated and where the pursuit of truth is becoming the exception rather than the rule. It now seems clear that, more than just a technology, AI raises ethical challenges that have not yet been sufficiently explored.
