Beware of AI news advice: ChatGPT and Gemini often give you false information


In an increasingly connected world, artificial intelligence tools like ChatGPT and Gemini are emerging as instant sources of information, and many of us are seduced by their ability to provide rapid answers. What is often overlooked, however, is that these chatbots have mastered the art of illusion: their apparent confidence can lead us straight to misinformation about recent events. An alarming reality is coming into focus. One study reveals that nearly 45% of the news-related answers provided by these systems contain inaccuracies, raising serious questions about the risk of widespread misinformation. A look back at this worrying phenomenon.

Seductive but misleading answers

AIs like Gemini present themselves with disconcerting confidence, appearing to hold the answers to complex questions. That confidence, however, can hide significant errors. Used to discuss current events and sensitive topics, these AIs offer unverified information that can mislead users. Thus, the line between information and disinformation is blurring.

An alarming finding on current affairs

Data from a survey conducted by the EBU and the BBC reveals that 76% of the answers provided by Gemini contain errors. Whether it is recent events or current analyses, these tools are not up to the task. Moreover, the sources cited are often either fictitious or unable to corroborate the arguments put forward. This trend of disinformation does not target a specific audience or region; it affects the entire world.

A Targeted Generation

In the digital age, the way we get our information has radically changed. According to the Reuters Institute, 7% of people already get their news from AI assistants, a figure that rises to 15% among those under 25. In other words, the generation that checks everything on its phone could easily absorb fake news while believing it has access to the most reliable information. It is a disturbing reality that the platforms seem unwilling to acknowledge.

The Institutions' Reaction

The discovery of these flaws has caused unrest within institutions. The EBU is calling for strict regulation of artificial intelligence tools, particularly to protect the integrity of information. The BBC, for example, recently pressured Apple to correct its AI-generated news summaries. The clear objective is to indicate transparently that these notifications do not come from human sources, to avoid any confusion between these bots and traditional media.

The consequences of disinformation

If you want to deepen your understanding of the implications of these technologies, you can consult articles such as the role of AI-generated images during conflicts, or the case of the Prime Minister designed by AI. You can also explore initiatives such as Polymate, which aims to counter the excesses of social media using artificial intelligence. Recently, the challenge of disinformation in France was addressed by Mistral, the French leader in the sector.

Ultimately, as we dive into the vast ocean of information generated by AI, it is crucial to maintain a critical mind. Don't let sophisticated tools mislead you: check your sources and remain vigilant against the misinformation that can creep in through these artificial intelligences.



