The AI Briefing: Lessons from Embarrassing AI Mistakes

Hi 👋 I’m Rikki Archibald, an AI Risk and Compliance Consultant and Founder of Sena Consulting.

I help organisations put the right frameworks, staff training, and internal policies in place so they can use AI safely and responsibly. With strong governance at the core, AI adoption becomes faster, smarter, and more sustainable, enabling you to innovate quickly, scale with confidence, and stay ahead of the curve.

Through this newsletter, I share AI regulatory updates, global headlines, and case summaries, with clear takeaways for organisations that want to move fast with AI without unnecessary risk.

This week has brought major developments in AI, from high-profile legal cases in Australia and the U.S. to fresh debates on transparency and trust. The pace of change is only accelerating, and the lessons for business leaders are becoming sharper by the week.

Top 5 Stories in AI

  1. Australian Senate inquiry publishes report containing false AI-generated claims about Deloitte and KPMG:

    An academic submission to the Senate Inquiry into Consulting Firms included hallucinated references that went far beyond minor citation errors. The AI had fabricated serious allegations of fraud, complicity in wage theft, and audit failures, wrongly implicating Deloitte and KPMG in scandals they had nothing to do with. The original authors failed to catch the errors in their review process, and the fabricated allegations were reproduced in the inquiry’s official report before being corrected. The academics have since apologised. Read the article here: Academics apologise for AI blunder implicating Big Four.

    This case highlights how AI-generated mistakes can escalate into criminal-level allegations and find their way into official government reports if oversight fails. It underlines the need for robust internal review processes when AI is used, and for strong organisational policies and training so staff understand how to use AI tools responsibly. 

  2. Switzerland launches Apertus, the world’s first fully transparent AI:

    Swisscom has unveiled Apertus, an open-source AI model designed to be 100% transparent in its architecture, training data, and performance metrics. Unlike closed, proprietary systems, Apertus allows full scrutiny by researchers, regulators, and developers. The project is intended to build public trust in AI and demonstrate that openness can be a foundation for safety and accountability. Read the article here: Apertus: A fully open, transparent, multilingual language model.

    Why it matters: Transparency is fast becoming a benchmark in AI regulation, from the EU AI Act to sector-specific guidance in other jurisdictions. For organisations, Apertus shows what regulators are moving toward: systems that can be explained, audited, and trusted. Businesses using AI, even those relying on commercial tools, will increasingly need frameworks that mirror these principles: clear policies, explainable outputs, and robust internal governance.

  3. A major report prepared by Deloitte for the Australian Department of Employment and Workplace Relations is under scrutiny after citation errors sparked concerns about AI use:

    The report contained multiple inaccuracies in its references, including wrong titles, dates, and publishers. These issues were first flagged by an academic at the University of Sydney, who questioned whether AI tools had been used. Deloitte denied that claim, attributing the mistakes to human error in referencing rather than reliance on AI. Read the article here: Deloitte report suspected of containing AI invented quote.

    The incident has prompted debate over the standards of quality control in government-commissioned reports, and the risks if consultancies were to rely on AI without robust review processes.

  4. Otter AI faces U.S. class action over secret recordings:

    A lawsuit filed in the U.S. District Court for the Northern District of California alleges that Otter.ai secretly recorded and stored private conversations without consent, using them to train its transcription service. Plaintiffs argue the app’s background recording function captured sensitive discussions without participants realising, raising serious privacy and transparency concerns. The suit claims this conduct violates state and federal wiretap and privacy laws and seeks to represent California users who may have been affected. Read the article here: Class-action suit claims Otter AI secretly records private work conversations.

    If successful, the case could set an important precedent for how AI productivity tools handle consent and data protection, issues regulators around the world are beginning to scrutinise more closely.

  5. Anthropic faces backlash over Claude privacy changes:

    Anthropic has updated the consumer terms for its AI assistant Claude, making user chats available for model training by default unless users opt out. The change has prompted criticism from privacy advocates, who argue that sensitive conversations could be exposed if users are not fully aware of the new policy. Anthropic maintains that the shift will help improve Claude’s performance, but the move highlights ongoing tensions between innovation, transparency, and user trust in the AI sector. Read the article here: Anthropic Wants to Use Your Chats With Claude for AI Training: Here's How to Opt Out.

    If you are using external AI tools, always check whether default settings expose data for training. Without proper vetting, sensitive company or client information could be at risk.


Case Summary: Australian Solicitor Reprimanded for Relying on Inaccurate AI-Generated Cases

Dayal [2024] FedCFamC2F 1166

Background

In August 2024, the Federal Circuit and Family Court of Australia (Judge A. Humphreys) found that a Victorian solicitor had tendered a list of authorities that included non-existent cases. The list and summaries, generated by an AI tool, contained inaccurate citations. The solicitor told the Court he did not fully understand how the tool worked and had failed to verify the accuracy of the results. He offered an unconditional apology, stressing that he had not intended to mislead the Court.

The Court accepted that the conduct was unlikely to be repeated but regarded it as a serious breach of professional standards. Judge Humphreys referred the matter to the Victorian Legal Services Board (VLSB) for regulatory review, noting the wider concerns raised by the growing use of AI in litigation.

Victorian Legal Services Board Reasoning and Decision

On 19 August 2025, following its investigation, the VLSB varied the solicitor’s practising certificate. The solicitor is now:

  • no longer entitled to practise as a principal lawyer
  • no longer authorised to handle trust money
  • no longer permitted to operate his own law practice
  • restricted to practising only as an employee solicitor
  • required to undertake two years of supervised legal practice, with both the solicitor and his supervisor reporting to the VLSB on a quarterly basis

The VLSB stated that the case reflects its commitment to ensuring legal practitioners who use AI do so responsibly and in line with their professional obligations. It urged solicitors to review its Statement on the use of artificial intelligence in Australian legal practice and to undertake continuing professional development before adopting AI tools in their work.

This case underlines how unverified AI use can compromise the integrity of legal proceedings, erode client confidence, and expose practitioners to disciplinary action. 

Why This Matters

This case is significant for two reasons. First, it demonstrates that regulators are willing to impose real professional consequences when lawyers rely on AI without proper verification. Second, it sets an example for how oversight bodies may respond in other jurisdictions as courts and regulators worldwide grapple with the risks of AI in legal practice.

For law firms and legal departments, the lesson is clear: if AI is used in research, drafting, or case preparation, it must be reviewed and verified with the same rigour as traditional methods. Failure to do so can undermine the integrity of proceedings, damage client confidence, and expose practitioners or firms to disciplinary action. As AI tools become more common in professional services, regulators will expect not only caution but clear policies, training, and accountability for how they are used.

Similar cases have arisen in the UK and Europe.

How Sena Consulting Can Help

The organisations that will win with AI are those that can move fast while keeping decision making safe, fair, and well governed. That means:

  • Having a clear, documented framework for AI use
  • Reducing bias and improving decision quality without slowing innovation
  • Staying agile as technology and regulations evolve

Sena Consulting works with organisations to put these frameworks in place so AI adoption is not just fast but sustainable. It is about creating the right conditions to accelerate AI adoption without hidden risks or costly delays.

If you are ready to strengthen your AI governance, reduce compliance risks, and accelerate safe adoption, let’s talk.

📩 Email me directly at contact@senaconsulting.com.au
📅 Or book a free 20-minute discovery call here


Take the AI Risk & Readiness Self-Assessment

If you are curious about where your organisation sits on the AI risk and readiness scale, take my 5-minute Self-Assessment 🕒.

It produces a tailored report showing your organisation’s AI red flags 🚩 and gives you practical next steps to strengthen your AI use so it is safe, strategic, and under control. 

You can be one of the first to access the AI Risk & Readiness Self-Assessment HERE