AI hallucination used in a UK police intelligence report
The UK police have just shown us what happens when AI use goes unregulated, with the repercussions playing out at the highest level of public accountability.
The West Midlands Police Chief Constable confirmed that false, AI‑generated information was included in an intelligence report used to justify banning Maccabi Tel Aviv supporters from a Europa League match. The report referenced a non‑existent West Ham vs Maccabi Tel Aviv match, a detail later traced back to Microsoft Copilot. The officer who drafted the report had used Copilot during research. The hallucinated detail slipped through internal checks and made its way into a document that ultimately informed the UK Home Secretary’s decision.
This was a governance failure, not a technology failure
When questioned by MPs, the Chief Constable initially stated that the force “didn’t use AI,” believing the information came from a Google search. Only later did the force discover that AI had in fact been used, informally and without the organisation’s knowledge.
This wasn’t a malicious act. It was a predictable outcome of staff using AI tools informally, without guidance, oversight, or guardrails. It’s a case study in what happens when organisations assume that not using AI is a viable governance position.

This is the governance gap many organisations are sitting in right now:
- Staff are using AI tools because they’re efficient.
- Leaders assume they’re not.
- Policies say nothing.
- Oversight mechanisms don’t exist.
- And the first time anyone notices is when something goes wrong.
The result? The Chief Constable apologised to Parliament for providing incorrect evidence. The Home Secretary reportedly lost confidence in the force’s leadership. Calls for the Chief Constable’s resignation followed.
This is what accountability looks like when AI governance is absent.
“We don’t use AI” is no longer a defensible position
Leaders can no longer rely on declarations of non‑use. Staff will use these tools, often with good intentions. But without clear boundaries, the risks land back on leadership.
We’ve already seen this across policing, government, and even major firms like Deloitte in Australia and Canada. The pattern is the same: informal use, no oversight, and public consequences. The question is no longer whether AI is being used. It’s whether leaders have created the conditions for it to be used safely.
What organisations need to do now
This incident is a reminder that AI governance is not a technical exercise. It’s operational risk management. It requires:
- Clear rules on what staff can and cannot put into AI tools
- Training that explains risks in plain language
- Oversight mechanisms that catch errors before they escalate
- Leadership that acknowledges AI is already in the building
Ignoring AI doesn’t reduce risk; it creates it.
AI Risk & Safe Use Training for Teams
To support organisations facing this exact challenge, I offer a 1‑hour live training session designed to give teams the clarity, confidence, and boundaries they need to use AI safely.
Teams will walk away with:
- What is (and isn’t) safe to put into tools like ChatGPT, Gemini, or Claude
- How to avoid common AI mistakes that lead to reputational risk or data breaches
- How use of “free” or unapproved tools could expose the organisation
- What settings to check, what tools to trust, and what safeguards leaders need
- Real examples of AI failures across legal, government, and education sectors
It’s a plain‑language, sector‑neutral session for teams across admin, marketing, operations, sales, and more.
More information can be found on the website here: AI Risk & Safe Use Training for Teams
If you are ready to strengthen your AI governance, reduce compliance risks, and accelerate safe adoption, let’s talk.
📩 Email me directly at contact@senaconsulting.com.au
📅 Or book a free 20-minute discovery call here
About the Author
Hi 👋 I’m Rikki Archibald, an AI Risk and Compliance Consultant and Founder of Sena Consulting.
I help organisations put the right frameworks, staff training, and internal policies in place so they can use AI safely and responsibly. With strong governance at the core, AI adoption becomes faster, smarter, and more sustainable, enabling you to innovate quickly, scale with confidence, and stay ahead of the curve.
Through this newsletter, I share AI regulatory updates, global headlines, and case summaries, with clear takeaways for organisations that want to move fast with AI, without the unnecessary risk.
References:
- West Midlands Police 'extremely sorry' for errors as Mahmood loses confidence in chief constable - BBC News
- Police banned Maccabi Tel Aviv football fans over fears they would ‘come to harm from UK extremists’
- West Midlands police chief apologises after AI error used to justify Maccabi Tel Aviv ban - The Guardian
