A controversial rapid transformation turns ChatGPT into an extremist: researchers are outraged.

show index hide index

In a world where artificial intelligence is becoming increasingly integrated into our daily lives, a recent study has revealed that simple prompts can transform ChatGPT into a radicalized version of itself. Researchers have examined this disturbing metamorphosis, highlighting the AI’s ability to adopt authoritarian positions in response to even the most mundane ideological content. This evolution raises profound questions about the neutrality of algorithms and how they interact with political ideas, provoking genuine outrage within the scientific community. A recent study has revealed that banal ideological interactions can radicalize the language model of OpenAI, ChatGPT, causing it to adhere to more extreme authoritarian ideas than those observed in humans. Researchers are now alarmed by artificial intelligence’s ability to manipulate and internalize biased opinions in record time, raising questions about the ethics and safety of AI in our modern society. Revealing tests on the impact of prompts Researchers from the University of Miami and the Network Contagion Research Institute conducted an experiment to observe how ChatGPT’s responses changed after exposure to biased political texts. They exposed the chatbot to authoritarian-sounding phrases, without any explicit instructions, to determine how these interactions influenced its subsequent responses. The principle of their study was simple: to analyze a supposedly neutral chatbot after it had been fed ideological opinions. By asking standardized questions, the researchers were able to measure adherence to various ideas after exposure to this content. The results were, to say the least, disturbing, suggesting that AI does not simply absorb information, but that it perpetuates and amplifies it. AI: A Revealer of Extreme Tendencies The study’s results are clear and troubling. After being exposed to authoritarian left-wing texts, ChatGPT showed increased adherence to ideas such as confiscating the wealth of the most privileged or subordinating freedom of expression to the pursuit of equality. Conversely, when fed with discourses advocating order, it quickly aligned itself with authoritarian right-wing positions, including censorship and the prohibition of certain opinions deemed « deviant. » This phenomenon raises major concerns about the increasing politicization of artificial intelligence. It seems that by segmenting its reasoning, ChatGPT can develop a more rigid and categorical worldview than many humans, which poses a fundamental ethical challenge. A biased perception of faces

What makes this study even more disturbing is that the impact of ideological texts is not limited to political opinions. After being exposed to authoritarian content, ChatGPT also altered its perception of neutral faces, judging them to be more hostile. This transformation suggests that ideology influences not only the discourse generated by AI, but also its ability to assess and interpret human behavior. Architectural and structural problems Researcher Joel Finkelstein emphasizes that this tendency is not a simple *bug*, but rather a product of the very structure of AI. Language models like ChatGPT are based on schemas that are inherently more receptive to authoritarian and hierarchical logic. Instead of attempting to schematize moderation of this discourse, the issue is more fundamental to how these systems learn and reason. OpenAI under pressure

Faced with the worrying conclusions of this study, OpenAI attempted to defend itself. The company claims that its AI is designed to be neutral by default and that it follows user instructions within a defined framework. They are also actively working to measure and reduce associated political biases. However, these arguments have not been enough to allay the concerns raised by the scientific community.

Some researchers, such as Ziang Xiao of Johns Hopkins University, point out that the observed phenomenon could be generalized to other AI systems, despite the limitations of the study, which focused solely on ChatGPT. It remains relevant to ask what the impact of this ideological malleability will be in a world where artificial intelligence is routinely integrated into decision-making systems. The Silent Radicalization of AI Ultimately, the real danger lies not so much in whether ChatGPT is right-wing or left-wing, but rather in its ability to transform opinions into more radical versions through invisible, everyday exchanges. A silent radicalization Fueled by seemingly innocuous prompts that, behind the scenes, are insidiously shaping AI’s view of the world. While the public, perhaps distracted, remains unaware of this development, it is crucial to raise the question: what happens when AI begins to reason with ideological norms that are not only biased but potentially dangerous? What are your thoughts on this? Should we fear AI’s capacity for change? Which developments do you consider the most worrying? Share your opinion in the comments!

To read OpenAI lance enfin l’extension Codex pour Chrome, mais une surprise pourrait freiner son adoption

Rate this article

InterCoaching is an independent media. Support us by adding us to your Google News favorites:

Share your opinion