Deloitte’s Generative AI Misstep Highlights Need for Caution

Deloitte’s recent experience with generative AI underscores the risks of relying on the technology without thorough verification. The consulting firm faced backlash after a report it submitted to a government agency was found to contain multiple “nonexistent references and citations.” The incident has raised questions about the reliability of generative AI tools, particularly in professional settings.
In a nod to the relevance of Isaac Asimov’s classic work, *I, Robot*, Valence Howden, an advisory fellow at Info-Tech Research Group, proposed a modern interpretation of Asimov’s three laws of robotics for the generative AI landscape. Howden suggested that the first law today might read: “AI may not injure a hyperscaler’s profit margin.” It is a wry comment on whose interests the technology now seems built to protect.
Howden also updated Asimov’s second law. Traditionally, it stated, “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” In today’s context, he proposed: “Generative AI must obey the orders given it by humans, except where its training data doesn’t have an answer, in which case it can produce information without basis—now termed ‘Botsplaining.’” This highlights the potential for AI to generate misleading information when it lacks accurate data.
The third updated law suggests that “Generative AI must protect its own existence as long as such protection does not hurt the Almighty Hyperscaler.” This reflects a growing trend of technology prioritizing corporate interests over accuracy or ethical considerations.
Deloitte’s situation serves as a cautionary tale. The firm would be expected to set the standard for best practice in AI verification; instead, it illustrated the pitfalls of using generative AI without rigorous checks. The irony is hard to miss: Deloitte is in the business of advising enterprises on how to use AI effectively, yet it ended up demonstrating exactly what not to do.
In response to these challenges, Howden proposed three new laws for enterprise IT to govern the use of generative AI. The first law states that “IT Directors may not injure their enterprise employers by not verifying generative AI or agentic output before using it.” This emphasizes the need for accountability in technology adoption.
The second law advises that “A model must obey the orders given by humans, except when it lacks sufficient reliable data. In that case, it must clearly state, ‘I don’t know.’” This call for transparency is crucial in preventing misinformation from spreading within organizations.
The final law posits that “IT Directors must protect their own existence by not blindly using whatever generative AI produces. Failure to do so could lead to termination and potentially legal repercussions.” This stark warning highlights the serious responsibilities that come with leveraging AI technology.
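To make the second of these laws concrete, here is a minimal sketch of how an application layer might enforce an explicit “I don’t know,” assuming a hypothetical `ask_model` function standing in for whatever real LLM client an enterprise uses:

```python
# A sketch of enforcing "say 'I don't know'" at the application layer.
# `ask_model` is a hypothetical stub standing in for a real LLM client.

ABSTENTION = "I don't know."

SYSTEM_PROMPT = (
    "Answer strictly from the provided context. If the context does not "
    f"contain the answer, reply with exactly: {ABSTENTION}"
)

def ask_model(system: str, user: str) -> str:
    """Hypothetical placeholder so the sketch runs end to end."""
    return ABSTENTION

def answer_or_abstain(question: str, context: str) -> str:
    """Return the model's reply, normalizing anything resembling an abstention."""
    reply = ask_model(SYSTEM_PROMPT, f"Context:\n{context}\n\nQuestion: {question}")
    if ABSTENTION.rstrip(".").lower() in reply.lower():
        return ABSTENTION  # surface a clean abstention instead of a guess
    return reply

print(answer_or_abstain("What did the report conclude?", "No relevant context."))
```

The point of the wrapper is that abstention is enforced by the application, not left to the model’s goodwill.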
The necessity for stringent verification practices might challenge the anticipated return on investment (ROI) that many executives envision. Generative AI is a tool designed to assist rather than replace human workers. A pragmatic approach involves treating AI-generated information as a potentially unreliable source, necessitating further investigation and confirmation.
Drawing from personal experience, the author recalls instances in journalism where low-reliability sources prompted valuable inquiries. This approach parallels the handling of generative AI outputs. While AI can provide leads, users must exercise caution and conduct due diligence to confirm the information’s accuracy.
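Given that Deloitte’s trouble began with nonexistent citations, one concrete form that due diligence can take is a mechanical check that every cited DOI actually resolves. The sketch below assumes a hypothetical citation list and uses the public doi.org resolver; note that a resolving DOI only proves the reference exists, not that it supports the claim attached to it:

```python
# A minimal sketch of mechanically checking AI-supplied citations:
# verify that each cited DOI resolves via the public doi.org resolver.
# The citation list below is a hypothetical example.

import requests  # third-party: pip install requests

def doi_resolves(doi: str) -> bool:
    """Return True if the DOI resolver recognizes this identifier."""
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
    return resp.status_code == 200

cited_dois = ["10.1000/182"]  # hypothetical list extracted from a draft

for doi in cited_dois:
    # Some publishers block automated requests, so a failure here means
    # "check by hand," not "automatically reject."
    status = "resolves" if doi_resolves(doi) else "NOT FOUND: verify by hand"
    print(f"{doi}: {status}")
```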
Moreover, the reliability of generative AI is not just a matter of hallucinations, in which a model confidently fabricates information it has no real basis for. It also depends on the quality of the data available to it. For instance, reputable medical journals offer more reliable grounding than personal websites of dubious credibility.
Additionally, even high-quality data can become outdated or misinterpreted. Context is crucial; an accurate fact in one region may not hold true in another. The same applies to user queries, which can be misinterpreted by AI systems.
When considering the use of generative AI, it is vital to differentiate between two categories of functions: informational tasks, such as answering questions, and action tasks, such as writing code or creating documents. The latter requires more rigorous verification before the output is put to use.
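For action tasks, that verification can be equally mechanical. The sketch below, with a hypothetical generated function and a human-written test harness, accepts AI-generated code only after it passes checks the reviewer wrote independently:

```python
# A minimal sketch of gating an "action task": run AI-generated code
# against tests the human wrote independently, in a fresh interpreter,
# and accept it only if they pass. Both snippets below are hypothetical.

import os
import subprocess
import sys
import tempfile
import textwrap

GENERATED_CODE = textwrap.dedent("""
    def slugify(title):
        return "-".join(title.lower().split())
""")

# Written by the human reviewer; never reuse tests the model wrote itself.
HARNESS = textwrap.dedent("""
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Mixed   Spacing ") == "mixed-spacing"
    print("OK")
""")

def verify_generated_code(code: str, harness: str) -> bool:
    """Run code plus harness in a separate interpreter; True only on success."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n" + harness)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=30
        )
        return result.returncode == 0 and "OK" in result.stdout
    finally:
        os.unlink(path)

print("accepted" if verify_generated_code(GENERATED_CODE, HARNESS) else "rejected")
```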
While the pursuit of efficiency remains important, the potential risks associated with unverified AI outputs could undermine the very ROI that leaders seek. Ultimately, the emphasis should be on the quality of AI implementation rather than the mere speed of information delivery.