AI Briefing: Move Fast, Scale Safely – August 2025
Sena Consulting helps organisations build strong AI frameworks so they can innovate quickly, scale safely, and stay ahead of change. Through this newsletter we deliver AI regulatory updates, the top global AI stories, brief AI-related case summaries, and a breakdown of what these developments mean for organisations aiming to move fast without unnecessary risk.
This week has brought big developments in AI, from the launch of GPT-5 to new moves under the EU AI Act, and the pace of change is only accelerating.
Top 5 AI Headlines This Week
- ChatGPT users experience collective grief over loss of GPT-4.
The launch of GPT-5 has sparked an unexpected social reaction, with many TikTok users mourning the loss of GPT-4’s “personality” and the connection they had built with it. This marks the first time society is witnessing emotional attachment to an AI at scale, and it prompted OpenAI CEO Sam Altman to restore a version of GPT-4 alongside GPT-5 - Mass grief over loss of ChatGPT4 entities, and other stories
- Google Cloud releases 601 real-world generative AI use cases.
This collection from the world’s leading organisations showcases how generative AI is already being used in real work environments across multiple industries. Examples include AI-powered assistants in vehicles from Ford and Mercedes-Benz, and tools at the Mayo Clinic that use Vertex AI Search to access vast collections of clinical data. These use cases illustrate how generative AI is transforming both customer experience and research workflows. Check them all out here - 601 real-world gen AI use cases from the world's leading organizations
- Australia considers AI copyright exemptions.
The Productivity Commission is exploring a proposal to allow AI systems to use copyrighted content without payment for training and development under “fair use”. Supporters argue it could accelerate AI innovation, but Australian authors and publishers warn of serious implications for intellectual property rights, creative industries, and the livelihoods of content creators. The proposal has sparked a growing debate over how to balance technological progress with fair compensation for original work - Australian authors challenge Productivity Commission's proposed copyright law exemption for AI
- OpenAI launches GPT-5.
OpenAI has released GPT-5, promoting it as a major step forward in reasoning ability with new sector-specific capabilities. Many users are not convinced, however: some say the model feels less creative and more cautious in its responses, and long-time users report it is less intuitive than GPT-4. The changes have sparked an online backlash, with some describing the new version as a “corporate beige zombie” - Sam Altman admits OpenAI ‘totally screwed up’ its GPT-5 launch
- EU introduces AI Act provider guidelines.
The European Commission has released new guidance on the obligations of providers of general-purpose AI models, including generative AI, under the EU AI Act. The document sets out expectations for transparency, safety, and accountability, and introduces a voluntary Code of Practice ahead of the Act’s formal enforcement. The guidelines give providers an early opportunity to align with regulatory requirements, reduce compliance risks, and demonstrate responsible AI practices before the rules become mandatory - Guidelines on the scope of obligations for providers of general-purpose AI models under the AI Act
Case Summary: UK Tax Transparency Ruling
Thomas Elsbury v The Information Commissioner
Background
The UK First-tier Tribunal has ordered His Majesty’s Revenue and Customs (HMRC) to disclose whether artificial intelligence (AI) was being used by its Research and Development (R&D) Tax Credits Compliance Team.
This case began when Thomas Elsbury, a tax expert specialising in R&D claims, submitted a Freedom of Information (FOI) request to HMRC in December 2023, asking whether AI, specifically large language models (LLMs), was being used to assess or reject R&D tax credit claims. The request also sought details on:
- the purposes for which LLMs were employed (e.g. data analysis, decision-making support, enquiry responses, penalty justifications)
- the criteria used for selecting these models
- any custom development or training undertaken by staff
- the providers of the models
- measures in place to ensure the privacy and security of taxpayer data, including any data protection impact assessments.
Elsbury and his colleagues suspected AI involvement after reviewing HMRC correspondence that showed tell-tale signs such as American spellings, the use of the “em dash” punctuation mark, and content that did not align with the specific facts of the case.
Concerns went beyond formatting. Some letters appeared nonsensical or overly generic, leading to speculation that generative AI could be producing responses without proper authorisation or officer training. Elsbury also raised security concerns, particularly if public LLMs such as ChatGPT had been used, as this could expose sensitive commercial information, including intellectual property in sectors linked to national defence.
HMRC delayed its FOI response and eventually issued a “neither confirm nor deny” (NCND) reply, arguing that disclosure could compromise assessment processes. Elsbury appealed to the Information Commissioner, who upheld HMRC’s NCND stance. He then brought the case before the First-tier Tribunal (General Regulatory Chamber).
The Appellant’s Arguments
The Appellant argued that HMRC and the Information Commissioner failed to demonstrate any real and substantive prejudice to tax collection that would outweigh the benefits of disclosure, relying instead on speculative and unsubstantiated risks. Key points included:
- Public oversight: Refusal to disclose undermines the ability to identify flaws or abuses in AI use, potentially increasing fraud rather than preventing it.
- Transparency and trust: Disclosure is essential to maintaining public confidence, especially when dealing with sensitive R&D intellectual property and the use of AI in government decision-making.
- Compliance benefits: Openness would enhance compliance by fostering trust and clarity regarding HMRC’s AI strategies, deterring non-compliance.
- Countering misinformation: Clear disclosure would reduce speculation about unfairness, privacy violations, and inaccuracies associated with AI tools.
- Alignment with regulator guidance: The Information Commissioner’s own guidance stresses accountability and transparency in AI deployment, which is inconsistent with HMRC’s NCND stance.
- Procedural fairness: HMRC’s handling of the FOI request involved excessive delays and an inconsistent shift from acknowledging information to invoking NCND, undermining FOIA principles.
- Public interest balance: Transparency supports the government’s policy of incentivising R&D and avoids the chilling effect on legitimate claimants caused by distrust and confusing AI-generated correspondence.
The Appellant also emphasised that the way the R&D tax relief scheme is currently administered is discouraging genuine applicants and likely reducing R&D investment. Small companies are increasingly reluctant to apply, with some opting to repay relief claims worth £5,000–£10,000 with interest rather than endure the time, stress, and legal costs of responding to HMRC’s correspondence.
Disclosure, the Appellant argued, would either reassure the industry or trigger necessary public debate about secure and accountable AI use in tax administration.
Tribunal’s Reasoning and Decision
The Tribunal acknowledged the strong public interest in protecting the public purse and preventing fraud. However, it found that the Appellant’s arguments for transparency, accountability, and public confidence carried considerable weight.
The Tribunal concluded that the NCND stance undermined trust in HMRC’s administration of R&D tax relief, potentially discouraging legitimate claimants and frustrating the government’s policy objectives. It ordered HMRC to disclose whether AI was used in processing the claims, with a compliance deadline of 18 September 2025.
Why this Matters
This ruling is significant for two reasons. First, it sets a precedent for transparency in government use of AI, particularly in decisions with financial and economic impact. Second, it aligns with emerging best practice under the European AI Act, which emphasises explainability, reduced bias, and clear documentation of AI involvement in decision-making.
For organisations, this is a warning: if AI is influencing significant outcomes, you must be able to explain how it works, document the process, and demonstrate that decisions are fair, accurate, and accountable. These same principles will increasingly be expected not only in the public sector but in regulated private industries as well.
References:
- Thomas Elsbury v The Information Commissioner [2025] UKFTT 915 (GRC)
- Court tells HMRC to disclose AI use in R&D claims - www.rossmartin.co.uk
- UK tribunal orders HMRC to reveal AI role in R&D tax credit rejections
Why It Matters and How Sena Can Help
The organisations that will win with AI are those that can move fast while keeping decision-making safe, fair, and well governed. That means:
- Having a clear, documented framework for AI use
- Reducing bias and improving decision quality without slowing innovation
- Staying agile as technology and regulations evolve
Sena Consulting works with organisations to put these frameworks in place so AI adoption is not just fast but sustainable. It is about creating the right conditions to accelerate AI adoption without hidden risks or costly delays.
If you are ready to strengthen your AI governance, reduce compliance risks, and accelerate safe adoption, let’s talk.
📩 Email me directly at contact@senaconsulting.com.au
📅 Or book a free 20-minute discovery call here
Take the Self-Assessment
The AI Risk & Readiness Self-Assessment launched this week. It is a quick, practical way to see where you stand and identify the steps needed to scale AI confidently.
You can be one of the first to access the AI Risk & Readiness Self-Assessment HERE