Artificial intelligence expert highlights the limitations of modern language models


Recent developments in artificial intelligence have sparked growing interest in powerful language models. Artificial intelligence expert Gary Marcus, however, highlights the limitations of these technologies. While large language and reasoning models are at the heart of current advances, they face major obstacles that compromise their reliability and efficiency: they tend to overanalyze simple problems and waste computational resources. The questions Marcus raises about their ability to evolve into artificial general intelligence are crucial for the future of the field.

At a time when large language and reasoning models play a key role in the development of chatbots, Gary Marcus, co-founder of several AI startups, argues that these models will not lead to the emergence of artificial general intelligence. This article examines his arguments: his criticisms of current models, examples demonstrating their weaknesses, and the implications for future AI research.

Criticisms of Large Language Models

Gary Marcus, a cognitive psychologist well known in the field of artificial intelligence, recently shared his thoughts on the limitations of large language models (LLMs) and large reasoning models (LRMs). According to him, these models, while powerful at certain tasks, have numerous shortcomings that prevent them from competing with human intelligence. Indeed, for Marcus, the path to true artificial general intelligence (AGI) will not involve the evolution of LLMs and LRMs.

Limitations Demonstrated by Concrete Examples

One of the examples Marcus often cites is the Towers of Hanoi puzzle. Despite its simplicity, this game seems beyond the capabilities of current chatbots: when faced with the problem, they struggle to solve it correctly beyond 7 or 8 disks, even when provided with the relevant algorithm (a sketch of which appears below). The case illustrates the models' tendency to overthink simple problems while neglecting the effort required for more complex ones, resulting in notable inefficiency in their use of computing resources.
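For context, the algorithm in question is the classic recursive solution, short enough to state in a few lines. The following Python sketch is purely illustrative (the peg names and disk count are arbitrary choices, not drawn from any of the evaluations Marcus cites); it shows why the puzzle is considered algorithmically trivial, since moving n disks takes exactly 2**n - 1 moves, or 255 moves for 8 disks.

    # Classic recursive solution to the Towers of Hanoi.
    # Moving n disks takes 2**n - 1 moves, so 8 disks need only 255.
    def hanoi(n, source, target, spare, moves):
        """Append the moves that transfer n disks from source to target."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)   # clear the way
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # rebuild on top of it

    moves = []
    hanoi(8, "A", "C", "B", moves)
    print(len(moves))  # prints 255

A human following this procedure mechanically never errs, which is why the models' failures at 7 or 8 disks, even when handed the procedure, strike Marcus as telling.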
The Opacity and Inefficiency of Current Models

LLMs and LRMs often operate as black boxes, which complicates the analysis of their internal processes. This opacity is problematic because it makes their errors difficult to correct and their performance difficult to improve. Moreover, displaying a chain of thought does not guarantee a faithful representation of the model's actual computation, as illustrated by Anthropic's chatbot Claude, whose stated reasoning has been shown to diverge from the process that actually produced its answers.

Implications for Future Artificial Intelligence Research

Gary Marcus argues that overcoming these obstacles will require exploring new approaches to AI, because current models are hitting a wall. The idea that today's LLMs and LRMs can evolve into AGI is, for Marcus, a fantasy. In his view, the future of AI lies in innovative solutions that overcome the inherent limitations of current language models, rather than in simply extending their present capabilities.
