Artificial intelligence is already being used in ways that touch the public, often without oversight, clear policies, or even management’s knowledge. The risks are not just about bias in decision-making. They also include reputational damage when AI is used in correspondence or public-facing communication, producing responses that are irrelevant or inaccurate, or that carry tell-tale signs of AI use.
A recent UK Tribunal decision illustrates just how serious these risks can become when transparency is lacking. The case, Thomas Elsbury v The Information Commissioner, centred on whether His Majesty’s Revenue and Customs (HMRC), the UK's tax agency, had been using large language models to generate letters and decisions about Research and Development (R&D) tax relief. The outcome sets an important precedent for AI transparency in government and carries big lessons for any organisation deploying AI without strong guardrails.
Thomas Elsbury v The Information Commissioner
Background
The UK First-tier Tribunal has ordered HMRC to disclose whether artificial intelligence (AI) was being used by its R&D Tax Credits Compliance Team. The case began when Thomas Elsbury, a tax expert specialising in R&D claims, submitted a Freedom of Information (FOI) request to HMRC in December 2023, asking whether AI, specifically large language models (LLMs), was being used to assess or reject R&D tax credit claims. The request also sought further details about how any such tools were being used.
Elsbury and his colleagues suspected AI involvement after reviewing HMRC correspondence that showed tell-tale signs such as American spellings, the use of the “em dash” punctuation mark, and content that did not align with the specific facts of the case.
Concerns went beyond formatting. Some letters appeared nonsensical or overly generic, leading to speculation that generative AI could be producing responses without proper authorisation or officer training. Elsbury also raised security concerns, particularly if public LLMs such as ChatGPT had been used, as this could expose sensitive commercial information, including intellectual property in sectors linked to national defence.
HMRC delayed its FOI response and eventually issued a “neither confirm nor deny” (NCND) reply, arguing that disclosure could compromise assessment processes. Elsbury appealed to the Information Commissioner, who upheld HMRC’s NCND stance. He then brought the case before the First-tier Tribunal (General Regulatory Chamber).
The Appellant’s Arguments
The Appellant argued that HMRC and the Information Commissioner had failed to demonstrate any real and substantive prejudice to tax collection that would outweigh the benefits of disclosure, relying instead on speculative and unsubstantiated risks.
The Appellant also emphasised that the way the R&D tax relief scheme is currently administered is discouraging genuine applicants and likely reducing R&D investment. Small companies are increasingly reluctant to apply, with some opting to repay relief claims worth £5,000–£10,000 with interest rather than endure the time, stress, and legal costs of responding to HMRC’s correspondence.
Disclosure, the Appellant argued, would either reassure the industry or trigger necessary public debate about secure and accountable AI use in tax administration.
Tribunal’s Reasoning and Decision
The Tribunal acknowledged the strong public interest in protecting the public purse and preventing fraud. However, it found that the Appellant’s arguments for transparency, accountability, and public confidence carried considerable weight.
The Tribunal concluded that the NCND stance undermined trust in HMRC’s administration of R&D tax relief, potentially discouraging legitimate claimants and frustrating the government’s policy objectives. It ordered HMRC to disclose whether AI was used in processing the claims, with a compliance deadline of 18 September 2025.
This ruling is significant for two reasons. First, it sets a precedent for transparency in government use of AI, particularly in decisions with financial and economic impact. Second, it aligns with emerging best practice under the EU AI Act, which emphasises explainability, reduced bias, and clear documentation of AI involvement in decision-making.
For organisations, this is a warning: if AI is influencing significant outcomes, you must be able to explain how it works, document the process, and demonstrate that decisions are fair, accurate, and accountable. These same principles will increasingly be expected not only in the public sector but in regulated private industries as well.
This ruling shows how quickly trust can be damaged when AI is used without proper oversight. For government departments in particular, the risks are real:
Staff may already be using generative AI to draft responses to the public without authorisation.
In the worst case, AI could be influencing actual decisions without transparency or safeguards.
Even at the “light” end, generic or irrelevant responses that carry tell-tale signs of AI undermine credibility and fuel public distrust.
The Elsbury case highlights exactly how this plays out. Suspicion alone was enough to escalate the matter to a Tribunal, and the outcome has now set a precedent for disclosure.
For leaders, the lesson is clear. If AI is in play, even in simple tasks like correspondence, you need guardrails. That means:
Documenting when and how AI can be used
Training staff on appropriate use
Ensuring outputs are accurate, relevant, and secure
Putting governance in place before regulators or the public force your hand
If you suspect AI is already creeping into your workflows, or want to make sure it does not become a reputational risk, get in touch to discuss how to put the right guardrails in place.
📩 Email me directly at contact@senaconsulting.com.au
📅 Or book a free 20-minute discovery call here
The AI Risk & Readiness Self-Assessment is live! It is a quick, practical way to see where you stand and identify the steps needed to scale AI confidently.
You can be one of the first to access the AI Risk & Readiness Self-Assessment HERE