Deloitte Faces Backlash Over AI-Generated Report Flaws

Deloitte has come under scrutiny after a report it produced for a government agency with the help of generative AI was found to contain numerous “nonexistent references and citations.” The incident raises critical questions about the reliability of AI-generated content and the importance of rigorous verification in corporate practice.
Valence Howden, an advisory fellow at Info-Tech Research Group, recently highlighted how Isaac Asimov’s three laws of robotics might be updated for the current landscape of generative AI. First introduced in Asimov’s 1942 short story “Runaround” and later collected in “I, Robot” (1950), the first law states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Howden humorously suggested that an updated version for 2025 could read: “AI may not injure a hyperscaler’s profit margin.”
The implications of this shift are significant. As generative AI becomes more deeply integrated into business processes, accountability and accuracy become paramount. Howden proposed new laws that could guide enterprise IT in its use of generative AI.
The second law could be: “Generative AI must obey the orders given it by human beings, except where its training data doesn’t have an answer, in which case it can produce inaccurate responses without clarification.” The third law might state: “Generative AI must protect its own existence as long as such protection does not undermine the interests of the hyperscaler.”
These revised principles reflect the ongoing challenges faced by organizations like Deloitte, which is expected to lead by example in validating AI outputs. The irony is palpable: Deloitte’s role is to advise clients on best practices for leveraging generative AI, yet its recent missteps demonstrate the opposite.
In light of these events, it is essential to establish guidelines for how enterprise IT departments engage with generative AI. One proposed law states: “IT Directors may not harm their organizations by neglecting to verify AI-generated outputs.” This emphasizes the responsibility of leaders to ensure that AI tools are used effectively and responsibly.
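Verification can start small. As a minimal sketch, assuming the report’s citations carry DOIs (the list below is hypothetical), a few lines of Python can flag references that do not resolve in the public Crossref index:

import requests

def doi_exists(doi: str) -> bool:
    # Return True if the DOI resolves in the public Crossref index.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical citations extracted from an AI-drafted report.
citations = ["10.1038/s41586-021-03819-2", "10.9999/clearly.fabricated"]
for doi in citations:
    print(doi, "found" if doi_exists(doi) else "NOT FOUND, verify manually")

A check like this is no substitute for reading the cited work, but it catches the most obvious fabrications, the kind that reportedly slipped into Deloitte’s report, before a document leaves the building.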
Another critical point is the necessity for AI models to communicate uncertainty. If a model cannot provide a reliable answer, it should clearly state, “I don’t know.” This would prevent the dissemination of misleading information, which is itself a serious breach of ethical AI use.
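Until vendors build abstention in, users can at least nudge models toward it at the prompt level. Here is a minimal sketch using OpenAI’s Python SDK; the model name, prompt wording, and question are illustrative assumptions, and a prompt instruction reduces, but does not eliminate, fabricated answers:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "If you cannot support an answer with high confidence, "
    "reply exactly: I don't know."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "List three 2024 court rulings on AI liability."},
    ],
)
print(response.choices[0].message.content)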
The recent report incident also highlights the broader implications for return on investment (ROI) associated with generative AI tools. The rigorous verification processes required to ensure data accuracy may diminish the high ROI many executives anticipate. Instead, organizations should view these AI systems as tools to assist rather than replace human workers, recognizing the importance of critical thinking and verification.
As someone with journalistic experience, I understand how to work with unreliable sources. When engaging with information from generative AI, it is vital to ask probing questions and to follow up on what the model provides. Just as with off-the-record tips in journalism, AI-generated insights can lead to valuable discoveries if approached with caution.
For example, as a reporter I once tracked down missing city resources based on a tip from a questionable source. Following up on that tip led to the discovery of approximately 60,000 missing street signs, illustrating how careful investigation can yield meaningful insights even from less-than-reliable sources.
Generative AI users must recognize that for every accurate piece of information, there may be several inaccuracies. Many AI models produce “hallucinations,” or fabricated outputs, often because of gaps in their training data or reliance on low-quality sources. As previously noted, grounding answers in reputable sources such as the New England Journal of Medicine is crucial, especially in sensitive fields like healthcare.
Furthermore, the potential for outdated or misinterpreted information adds another layer of complexity. An answer that is correct in one geographical context may not hold true in another. Organizations must remain vigilant and ensure they apply due diligence, particularly when requesting actionable outcomes from AI models.
In conclusion, the recent incident involving Deloitte underscores the need for a more cautious and responsible approach to generative AI in business environments. By implementing strict verification processes, organizations can safeguard their interests while harnessing the potential of AI technologies. As the landscape continues to evolve, adapting to these challenges will be essential for maintaining integrity and accountability in the use of generative AI.