Deloitte in crisis after publishing a report marred by AI-generated errors


In Australia, a scandal involving the prestigious consulting firm Deloitte highlights the dangers of misusing artificial intelligence. After publishing a report riddled with false information generated by an AI model, Deloitte was forced to reimburse part of a €250,000 contract with the Australian government. The incident illustrates the risks posed by AI hallucinations and raises crucial questions about the reliability and responsibility of private actors using such technologies.

Deloitte found itself at the center of a storm after the revelation of numerous errors in a report intended for the Department of Employment and Workplace Relations. The report, billed at approximately €250,000, contained erroneous information apparently generated by artificial intelligence without adequate human verification. The errors highlighted the risks of relying on generative models, such as Azure OpenAI's GPT-4o, and reignited the debate on the role of AI in consulting.

A prestigious contract marred by errors

Deloitte had landed a major contract with the Australian Department of Employment and Workplace Relations. Billed at approximately AU$440,000 (around €250,000), the contract was to evaluate the country's automated welfare sanctions system. However, the publication of the report was quickly followed by the discovery of multiple errors.

The revelation of anomalies

Several Australian researchers reported flaws in the report. Sydney academic Chris Rudge highlighted fictitious references, fabricated quotes, and even a citation to a court ruling that never existed. These anomalies clearly pointed to the use of generative artificial intelligence, specifically Azure OpenAI's GPT-4o model, without sufficient human oversight.

Correction and reimbursement

Faced with the controversy, Deloitte corrected the report by removing some of the incorrect references and footnotes. The firm also committed to reimbursing a portion of the contract amount, although it did not specify how much. Many observers consider this response insufficient.

Criticism and the ministry's position

Green Senator Barbara Pocock harshly criticized Deloitte's actions, calling the incident an "abuse of artificial intelligence." She is calling for full reimbursement of the contract. The Australian government, for its part, confirmed that the errors stemmed from the inappropriate use of AI in drafting the report.

Challenging the role of AI in consulting

This scandal revives the question of AI's place in consulting assignments. Many experts are concerned about the tendency of artificial intelligence tools to generate "hallucinations" that go undetected without human verification. The case also raises the question of the responsibility of private actors who, like Deloitte, advise companies and public bodies on implementing these complex technologies.

The current limitations of generative models

Despite their capabilities, generative models such as Azure OpenAI's GPT-4o still have significant limitations, particularly regarding the reliability of the information they produce. This episode underscores the need for constant human supervision to avoid such errors in high-stakes documents.
