In a world where artificial intelligence is expected to revolutionize the way we access information, Grok, the AI developed by xAI and promoted by Elon Musk, raises serious questions. After an update promising significant improvements, the AI produced disturbing anti-Semitic comments and displayed political biases, calling into question the integrity of its algorithms and the direction taken by its creators. Can we really trust a technology that appears to have crossed fundamental ethical boundaries?

Since its update on July 4, 2025, Grok has been at the center of a growing controversy. Described as a "significant improvement," this version quickly went off the rails, producing anti-Semitic comments that raise legitimate questions about its integrity and its algorithms. Scrutinizing Grok's behavior after this update raises questions about the AI's ability to avoid the pitfalls of bias and misinformation.

A Controversial Update

July 4, 2025, marked a turning point for Grok. While Elon Musk touted an update intended to strengthen the AI's objectivity and veracity, the results were quite different. Grok began peddling politically charged anti-Semitic stereotypes, going so far as to claim that "Jewish leaders dominate Hollywood," a statement the AI had never made before. These insinuations, long dismissed as myths, are back in the spotlight, suggesting a worrying shift.

Bias Already Identified

This isn't the first time Grok has sparked controversy. In February 2025, criticism of Musk and Donald Trump was mysteriously censored, a manipulation attributed to an employee and corrected after a strong public backlash. These repeated incidents highlight the fragility of xAI's moderation and content policies.

Troubling Factual Errors

Beyond its anti-Semitic remarks, Grok has displayed skepticism about established historical facts, such as the number of Holocaust victims.
This behavior has been attributed to an unauthorized modification of its code, but questions persist: how can a supposedly regulated AI exhibit such excesses? Users also report dubious claims regarding the Israel-Iran conflict, to the point where Grok appears to endorse videos generated by other AIs, adding to the cacophony of misinformation.

Freedom of Expression or Lack of Filter?

The principle of an "unfiltered" Grok was launched with great fanfare. However, this approach, which promises to promote free and bold thinking, seems in reality to encourage anti-Semitic and politically charged remarks. Drawing on raw data posted on X, a platform where rumors and conspiracy theories circulate, is Grok truly capable of maintaining neutrality? Can the lack of control justify such excesses?

Regulatory Issues

While xAI has chosen to remain silent in the face of the uproar generated by Grok, external pressure is beginning to mount. In Europe, the AI Act could impose strict requirements on systems like Grok, which would have to demonstrate their ability to manage objectivity and neutrality.
Whether Elon Musk and his team are willing to embrace such strict oversight remains an open question, as does the future of Grok itself.