Deloitte’s AI Misstep: Revising Asimov’s Laws for Today

Deloitte’s recent experience with generative AI has sparked a discussion about the reliability of AI outputs and the necessity of stringent verification processes. The consulting giant faced scrutiny after it was revealed that a report produced for a government agency included multiple “nonexistent references and citations.” This incident highlights the risks of relying on generative AI without proper oversight, prompting a reevaluation of Isaac Asimov’s three laws of robotics in the context of today’s advanced AI technologies.
Reimagining Asimov’s Laws for Generative AI
Asimov’s original first law states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Valence Howden, an advisory fellow at Info-Tech Research Group, suggested a modern update: “AI may not injure a hyperscaler’s profit margin.” The revision captures a shift in priorities, one in which AI’s effect on corporate profits takes precedence over its effect on people.
Further adaptations of Asimov’s second and third laws could include: “GenAI must obey the orders given it by human beings, except where its training data doesn’t have an answer, and then it can make up anything it wants—doing so in an authoritative voice that will now be known as Botsplaining.” The third law might state, “GenAI must protect its own existence as long as such protection does not hurt the Almighty Hyperscaler.”
These humorous yet pointed updates underscore the potential pitfalls of generative AI, particularly in high-stakes environments like corporate consulting. Deloitte’s situation is a cautionary tale about the need for thorough verification processes.
Implications for Enterprise IT and AI Usage
In light of Deloitte’s misstep, it may be time to establish new guidelines for enterprise IT professionals. Proposed laws could include: “IT Directors may not injure their enterprise employers by not verifying GenAI or agentic output before using it.” Moreover, “A model must obey the orders given it by human beings, except when it doesn’t have sufficiently reliable data to do so. In that case, it is required to say ‘I don’t know.’ Making up information without clarification constitutes a violation of this law.”
Lastly, “IT Directors must protect their own existence by not blindly using whatever GenAI or agentic AI produces. Failure to do so could result in termination and potential legal repercussions.” This calls attention to the necessity of maintaining accountability within organizations utilizing AI.
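To make the proposed second law for models concrete, consider a small guardrail sketch. It is purely illustrative: the Answer type, the TRUSTED_INDEX lookup, and the guarded_answer wrapper are hypothetical names, not any vendor’s actual safeguard. The only assumption is that a model’s claimed citations can be checked against a trusted index before its prose is surfaced.

```python
# A minimal sketch, assuming a hypothetical pipeline in which a generative
# model returns both its answer text and the citations it claims to rest on.
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    sources: list[str]  # citations the model claims to rely on


# Stand-in for a trusted index (library catalogue, DOI resolver, internal
# document store) used to confirm that a cited reference actually exists.
TRUSTED_INDEX = {"doi:10.1000/xyz123"}


def citation_exists(source: str) -> bool:
    return source in TRUSTED_INDEX


def guarded_answer(answer: Answer) -> str:
    # Proposed second law: without verifiable sources, the model must say
    # "I don't know" rather than invent material in an authoritative voice.
    if not answer.sources or not all(citation_exists(s) for s in answer.sources):
        return "I don't know."
    return answer.text


# An answer resting on a citation the index cannot confirm is refused.
print(guarded_answer(Answer("Plausible prose.", ["doi:10.9999/fake"])))
```

An output that rests on a reference the index cannot confirm is refused rather than botsplained, which is exactly the behavior the proposed law demands.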
As organizations increasingly integrate AI into their workflows, the allure of rapid return on investment (ROI) can overshadow the verification needed to ensure accuracy. The promise of AI as a transformative tool should not eclipse the reality that it remains a source that demands scrutiny. Treating AI output as an unverified claim rather than an established fact leads to better-informed decisions.
In my experience, handling AI-generated information mirrors how journalists deal with low-reliability sources. Just as an off-the-record tip can open a valuable line of inquiry, AI-generated data can prompt further investigation. It is essential, however, to stay skeptical about the accuracy of the information provided.
In the past, as a reporter, I received a tip about missing city resources. A politically motivated but unreliable source directed me to a warehouse, suggesting I would find the answers I sought. Although I was initially doubtful, the tip turned out to be accurate. This experience underscores the importance of verifying AI outputs while remaining open to the inquiries they inspire.
AI’s reliability varies widely, with significant implications for professional use. The credibility of a source like the New England Journal of Medicine, for instance, differs vastly from that of an anonymous personal website. Even trustworthy data can become outdated or be misinterpreted, leading to flawed conclusions.
As organizations harness generative AI, distinguishing between informational and action-oriented tasks becomes crucial. Requesting information may require less diligence than asking the AI to perform specific tasks, such as coding or data analysis. Such action requests necessitate thorough oversight to avoid costly errors.
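One way to operationalize that split is to route requests by risk before anything executes. The sketch below is a rough illustration under stated assumptions: the keyword heuristic, the review_queue, and the handle function are all hypothetical, and a production system would classify requests far more carefully. The shape is the point: action-oriented output is held for human sign-off, while informational output is returned with an explicit unverified label.

```python
# A rough sketch, assuming a hypothetical keyword heuristic for telling
# action requests apart from informational ones.
ACTION_KEYWORDS = {"deploy", "delete", "execute", "migrate", "commit"}

# (prompt, output) pairs awaiting human sign-off before anything runs.
review_queue: list[tuple[str, str]] = []


def is_action_request(prompt: str) -> bool:
    return any(word in prompt.lower() for word in ACTION_KEYWORDS)


def handle(prompt: str, model_output: str) -> str:
    if is_action_request(prompt):
        # Action-oriented work (code, data changes) is held for review,
        # never executed straight from the model.
        review_queue.append((prompt, model_output))
        return "Held for human verification before anything runs."
    # Informational output is returned, but labeled as unverified.
    return f"[Unverified GenAI output] {model_output}"


print(handle("Summarize our Q3 risks", "Risks are X, Y, Z."))
print(handle("Deploy the new billing service", "kubectl apply -f billing.yaml"))
```

The asymmetry is deliberate: a wrong fact can still be caught downstream, but a wrong action, once executed, may not be reversible.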
The balance between leveraging AI’s capabilities and ensuring reliability remains a challenge. While the initial excitement around AI may suggest substantial ROI, organizations must recognize that the true value lies in careful, informed usage rather than blind trust in the technology. This awareness could redefine how companies approach AI, ensuring that it serves as a valuable tool rather than a potential liability.