ChatGPT mistaken for a child: stuck in restricted mode


In a world where artificial intelligence is evolving at breakneck speed, ChatGPT faces a strange dilemma: adult users are being treated like children. OpenAI's recent implementation of an age filter has transformed the user experience, limiting access to essential features and reducing the richness of interactions. Stuck in restricted mode, this tool, intended as a companion for work and reflection, becomes inaccessible to those who need to address complex topics. This paradox raises many questions about the effectiveness and benevolence of the new rules. OpenAI's decision has drawn fierce criticism: the rollout of an age detection system for its ChatGPT chatbot has led to significant malfunctions, and many adults find themselves, against their will, trapped in a limited communication mode, as if they were children. Let's delve into the details of this controversy.

A good intention poorly executed

OpenAI justified its decision as strengthening the protection of young users by activating a safe environment for ChatGPT. This "teen mode" is supposed to filter content inappropriate for minors. The problem lies in how the detection system works: it relies on behavioral signals, which can easily lead to errors. An adult, for example, can be wrongly labeled a minor and lose access to mature and complex topics.

An age filter that wreaks havoc

ChatGPT's age detection unfortunately appears to be overly sensitive. Many users who have always used the platform responsibly are suddenly being flagged as teenagers. This is more than a slight alteration of the user experience: discussion topics become inaccessible, and the tone of responses grows more formatted and watered down. These adults end up in an environment that doesn't suit them, a nightmare for anyone using ChatGPT professionally or creatively.
Frustrated and distrustful users

Frustration is becoming increasingly palpable among those subjected to these restrictions. It is not simply a passing inconvenience: it affects their ability to work, express ideas, and even conduct research. Being suddenly treated like a child by an algorithm can undermine users' trust in the platform, and their inability to understand why the restriction occurred only deepens their distrust of OpenAI. The sensitivity of the issue also creates a sense of insecurity around a technology that is supposed to make their lives easier.

Privacy compromised

To prove their age, users may be required to provide personal information via Persona, a third-party service. While this method aims to increase security, it raises significant privacy concerns. How can users be sure that the data they provide will never be used for undisclosed purposes? They are right to be reluctant to send an ID, or even a selfie, just to verify their age. The underlying fear is real: the normalization of identity verification in digital services that should remain accessible and anonymous.

An echo in the digital world

The situation observed with ChatGPT is not isolated. Other platforms, like YouTube or Instagram, have encountered similar problems. These social networks have also deployed age filters, but without the same professional stakes as ChatGPT. While those platforms often confine themselves to entertainment, ChatGPT ventures into more serious areas. This changes the relationship between the user and artificial intelligence, making the positioning of this technology as an assistance tool all the more delicate.


InterCoaching is an independent media outlet.
