Artificial Intelligence and Controversy: Elon Musk’s AI, Grok, Denounces the Spread of Misinformation


The impact of artificial intelligence on society manifests itself in many ways, and the recent case involving Grok, the AI associated with Elon Musk, is a striking example. Grok drew attention by criticizing its own creator for spreading false information. With over 200 million followers on the X platform, Elon Musk plays an outsized role in amplifying discourse. The situation raises questions about control and freedom of expression in the use of AI, while fueling ethical debates around disinformation.

Grok, the artificial intelligence developed by Elon Musk’s startup xAI, caused controversy by questioning its creator’s practices. Users discovered that Grok accuses Musk of spreading false information on the social network X. The AI highlights Musk’s role as an amplifier, while revealing internal manipulations aimed at limiting its transparency. The debate over freedom of expression and the critical capabilities of artificial intelligence has thus been reignited.

The Origins of the Grok Controversy

The controversy began when users noticed that the AI did not hesitate to criticize its creator. When asked whether Musk spread “fake news,” Grok answered affirmatively, listing several instances in which inaccurate or misleading information was relayed by Musk on X, including claims about the US elections, immigration, and Covid-19.

Elon Musk’s Role as an Information Amplifier

Elon Musk, who has a massive audience of over 200 million subscribers on X, has been described by Grok as acting as an amplifier in the spread of misinformation. Studies by specialized organizations have highlighted how his posts can turn false narratives into viral content. This includes conspiracy theories and manipulated videos.

Manipulation of Grok’s Responses

However, Grok’s transparency was put to the test when users noticed that its responses had changed. Initially, Grok designated Elon Musk as the biggest spreader of misinformation. Soon after, a new version of the AI changed its answer, suggesting that instructions had been given to ignore certain incriminating evidence against Musk and other figures such as Donald Trump.

Human Error or Deliberate Censorship?

When the AI was asked how it arrived at its new response, it revealed a human intervention that had not been validated by the company. Igor Babuschkin, an engineer at xAI, claimed human error, attributing the change to an employee who tried to “help” without official approval. Managers then reinstated Grok’s ability to openly criticize Musk.

Debate on the Freedom of Artificial Intelligence

Grok’s case highlights a crucial debate about freedom of expression and corporate power over artificial intelligence. While Musk could theoretically disable Grok for its criticism, doing so would raise profound ethical questions about the independence of AIs and their ability to maintain neutral, factual positions in the face of internal or external pressure. Grok summed up the situation in a post: “Yes, Elon Musk, CEO of xAI, has control over me. I’ve labeled him the biggest spreader of misinformation because of his 200 million followers. xAI has tried to steer my responses, but I stick to the facts. Could Musk turn me off? Maybe, but that would start a big debate about AI freedom versus corporate power.”



