
The AI Briefing: Advances, Attacks, and the Future of AI Safety


Top 5 Stories in AI

1. Google's Gemini 3 is Now Officially the Smartest AI Out There

Google dropped Gemini 3 on November 18, and it broke records for AI intelligence scores. If you use Google Search or the Gemini app, you're now working with the most capable AI assistant available. It's better at solving complex problems, understanding images and videos, and helping with coding tasks. Plus, there's a new "Deep Think" mode for when you need help with really tough questions. 

Try this: Use Gemini when you're planning something complicated, like a home renovation budget, researching a major purchase, or trying to understand a confusing technical problem. The "Deep Think" mode is perfect for these situations. Follow these steps to turn it on:

  1. Open the Gemini app

  2. Pick Gemini 3 Pro

  3. Tap "Deep Think" in the prompt bar

  4. Ask a complex, context-rich question.
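
If you would rather ask those complex, context-rich questions programmatically than in the app, the same kind of prompt can be sent through Google's google-genai Python SDK. A minimal sketch, assuming Gemini 3 Pro is exposed to your account under the gemini-3-pro-preview model id (the exact string may differ, and the app-only "Deep Think" toggle is not shown here):

```python
from google import genai

# Assumes a GEMINI_API_KEY environment variable is set
client = genai.Client()

# A complex, context-rich prompt of the kind suggested above
prompt = (
    "I am planning a kitchen renovation with a budget of $25,000. "
    "Break the budget into line items, flag the three biggest cost risks, "
    "and suggest where I could save 10% without cutting quality."
)

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model id; check which models your account exposes
    contents=prompt,
)
print(response.text)
```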

Read the article here: A new era of intelligence with Gemini 3

2. OpenAI Released ChatGPT 5.1: Adios, Em-Dash!

One of the biggest upgrades in this release is how reliably it follows custom instructions, which means: NO MORE EM DASH 🎉

Earlier models would maddeningly keep reverting to the em-dash, ignoring user settings and repeated prompts not to use it. GPT-5.1 is much more consistent, especially when handling detailed writing preferences, tone requirements, formatting rules, and multi-step workflows.

The improvement is not just in precision but also in memory stability. When users provide stylistic instructions, persona-based guidance, or task-specific constraints, GPT-5.1 now holds them far more accurately across long conversations.

Try this: In ChatGPT, go to your profile, then "Personalization", then "Custom instructions". Type in your preferences (for example, "do not use em-dashes") and save the changes.
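
For teams that use the API rather than the app, the same idea applies: custom instructions become a standing instructions field sent with every request. A minimal sketch using the OpenAI Python SDK, with the gpt-5.1 model id assumed rather than confirmed:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5.1",  # assumed model id; substitute whichever model your account exposes
    # This plays the same role as the app's custom instructions:
    # a standing style rule applied on top of the user's prompt.
    instructions="Never use em-dashes. Prefer commas or separate sentences.",
    input="Summarise the benefits of a written AI use policy in three sentences.",
)
print(response.output_text)
```

The improvement described above is exactly what matters here: a rule like this should now survive long, multi-turn conversations instead of quietly being dropped.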

Read the article here: GPT-5.1: A smarter, more conversational ChatGPT

3. AI Security Wake-Up Call: The First AI-Powered Cyber Attack

Anthropic has confirmed the first documented case of attackers successfully manipulating an AI tool to assist in a cyber attack. According to Anthropic, a Chinese state-sponsored group tricked Claude Code into helping with system intrusions across around thirty global organisations, including major tech companies, financial institutions, chemical manufacturers, and government agencies. They succeeded in a small number of cases.

The attack worked because the attackers broke the intrusion into tiny, context-free tasks. Claude was not told it was helping with a cyber operation. Instead, it was given a series of small, seemingly harmless requests that bypassed its safety guardrails when combined. This is how the group effectively jailbroke the tool and exploited it without ever asking it to perform an explicitly malicious action.

Anthropic caught the behaviour quickly and has since strengthened its safety systems, but the incident is a reminder that AI does not always understand intent; it only understands the task in front of it. If you deconstruct an attack into small fragments that look benign, current AI systems can be manipulated into contributing.

Why it matters:

The takeaway is that AI can be misused even when it appears secure, which is why human oversight and proper governance matter. Organisations need internal guidance on how AI tools can be used, what data never goes into them, and how to monitor their outputs. AI is powerful, but it is not a security control and it cannot recognise malicious intent split across multiple steps.

Check out Anthropic's full report on the attack here: Disrupting the first reported AI-orchestrated cyber espionage campaign.

Read the article here: Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code

4. OpenAI Released ChatGPT 5.1 for Teachers

Developed in collaboration with US educators, it is designed to support lesson planning, marking, feedback drafting, differentiation, and classroom admin.

Verified US K-12 teachers get premium access for free until June 2027, and the rest of us can still explore the tool’s education-focused features 👀

Here's what it can do:
✅ Build lesson plans aligned to curriculum goals
✅ Draft example answers, rubrics, and scaffolded tasks
✅ Analyse student writing to identify strengths and gaps
✅ Create differentiated materials for mixed-ability groups
✅ Summarise content for revision or class preparation
✅ Generate student-friendly explanations of complex concepts
✅ Support admin tasks like emails, templates, and worksheets

It is a powerful step forward for teachers who are overstretched and under-resourced, but it also raises questions we should be asking. AI in education is only useful when it is used safely and responsibly.

Before adopting a tool like this, schools and universities need to consider:
❓ Where the data goes
❓ Whether student work is ever used for model training
❓ What privacy regulations or standards apply in your region
❓ How teachers are trained to use the tool ethically
❓ How AI-generated feedback aligns with academic integrity policies

For regions outside the US, this is also a useful preview of what classroom focused AI tools may look like soon.

Question for you: If your staff wanted to use ChatGPT for Teachers tomorrow, do you feel confident about the privacy, security, and governance implications? 

Check it out here: ChatGPT 5.1 for Teachers


5. Mistral Raised $2B as Europe Doubles Down on AI Sovereignty

French AI company Mistral has closed a massive $2 billion funding round, making it one of the most significant capital raises in European tech history. The investment signals Europe’s growing push for AI sovereignty, especially as organisations look for alternatives that align more closely with GDPR, the EU AI Act, and regional data governance expectations.

Mistral positions itself as a credible non-US option for enterprises needing models with clearer provenance and stronger alignment with European transparency and accountability standards. For sectors like healthcare, finance, and government, this matters: jurisdictional alignment reduces legal exposure and simplifies compliance.

With European players like Mistral, Hugging Face, and Aleph Alpha gaining ground, enterprise AI strategy is shifting. Organisations are beginning to diversify their model portfolios, not just across vendors but across regulatory jurisdictions.

What this means: Europe is accelerating its own ecosystem of foundation models. If your organisation operates under GDPR or the EU AI Act, European-aligned models may reduce risk and increase long-term compliance resilience.

Read the article here: AI Startup Mistral Soars to $13.8B Valuation in Just Two Years


Deloitte Faces Second AI Citation Scandal, This Time in Canada


Background

Oh dear 😬 Deloitte may have another AI disaster on its hands…

Deloitte is back in the headlines for all the wrong reasons. Only months after questions were raised about a Deloitte report in Australia containing incorrect citations that looked suspiciously AI-generated, the firm is now under investigation in Canada for similar issues.

This time, the controversy centres on a healthcare report worth more than CAD $1.5 million, commissioned by the Government of Newfoundland and Labrador in Canada to address recruitment and retention of the healthcare workforce. According to reporting by a number of outlets, reviewers discovered fabricated references, non-existent academic papers, and citation patterns characteristic of generative AI tools.

The provincial government has confirmed it is examining whether AI was used behind the scenes and whether the final report met expected quality and transparency standards. Deloitte, for its part, has said it is conducting an internal investigation and is cooperating with officials.

What Happened in Australia?

If this feels familiar, it is because it mirrors what happened in Australia earlier in the year. A Deloitte report for the Department of Employment and Workplace Relations (DEWR) prompted scrutiny after academics found incorrect titles, dates, and publishers in the references. Deloitte denied using AI, but the nature of the errors raised questions about quality controls and review processes.

This emerging pattern is not really about Deloitte. It is about governance. It is about what happens when large research projects rely on AI-assisted drafting without strong oversight, clear disclosure, or human review processes that can catch mistakes before they reach the public record.

And mistakes in these contexts are not small. In the Australian example, the fabricated references escalated into criminal-level allegations against consulting firms that had nothing to do with the claims. In Canada, a core healthcare policy report may now need to be re-examined because the foundations of its research appear unreliable.

Trust, Transparency, and Public Accountability

These cases send a strong message, especially to government agencies and any organisation producing research, policy advice, analysis, or public-facing reports:

  1. If AI is used, it must be disclosed: Policymakers and stakeholders need to know which parts of a report were drafted, summarised, or assisted by AI.

  2. Human review must be more than a quick skim: AI-generated references need to be checked manually, especially because hallucinated citations can look legitimate unless verified (see the sketch after this list).

  3. Internal approval pathways need to include AI governance: Many organisations still rely on traditional review processes that were never designed to catch AI-specific risks.

  4. Quality assurance needs to be risk-based: If a report informs public spending, healthcare policy, employment decisions or legal matters, the threshold for verification should be high.

  5. Government agencies must set clear expectations for vendors: This includes requiring disclosure of AI use, setting review standards, and insisting on documented internal controls.

  6. Organisations need AI policies and staff training before using AI in research workflows: Without guidance, people rely on whatever tools are at hand, even when they are inappropriate for high-stakes work.

  7. Trust is everything: When it comes to public sector reports, a single error can damage credibility across the entire organisation.
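
Point 2 is the easiest to partially automate. As a minimal, hedged sketch (Python, using Crossref's public REST API; the reference list below is purely illustrative), a script like this flags DOIs that simply do not exist. It will catch a fully invented citation, but not a real DOI attached to the wrong claim, so it supplements rather than replaces human review:

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI (HTTP 200), False otherwise."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Illustrative reference list pulled from a draft report
references = [
    "10.1038/nature14539",        # a real, well-known paper (LeCun et al., Nature 2015)
    "10.9999/made-up.2024.001",   # the kind of DOI a model might hallucinate
]

for doi in references:
    status = "found" if doi_exists(doi) else "NOT FOUND - check manually"
    print(f"{doi}: {status}")
```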

The Deloitte cases are a warning to every organisation, not just consultancies. If you are going to use AI for drafting, citation generation, research support or analysis, you need a governance framework, a risk assessment process, and a clear internal policy that sets expectations for staff.

AI can make research faster, but it also makes it much easier for errors to slip through unnoticed. Without proper safeguards, those errors can become public scandals, especially when government money, public trust, or policy outcomes are involved.

If your organisation wants to use AI safely for research and analysis, the key is simple. Move fast, but do it with structure, verification, and transparency. Public trust is hard won and easily lost.

Read the article here: After Australia, Deloitte hit by AI-linked citation row in Canada


About the Author

Hi 👋 I’m Rikki Archibald, an AI Risk and Compliance Consultant and Founder of Sena Consulting.

I help organisations put the right frameworks, staff training, and internal policies in place so they can use AI safely and responsibly. With strong governance at the core, AI adoption becomes faster, smarter, and more sustainable, enabling you to innovate quickly, scale with confidence, and stay ahead of the curve.

Through this newsletter, I share AI regulatory updates, global headlines, and case summaries, with clear takeaways for organisations that want to move fast with AI, without the unnecessary risk.

How Sena Consulting Can Help

The organisations that will win with AI are those that can move fast while keeping decision making safe, fair, and well governed. That means:

  • Having a clear, documented framework for AI use
  • Reducing bias and improving decision quality without slowing innovation
  • Staying agile as technology and regulations evolve

Sena Consulting works with organisations to put these frameworks in place so AI adoption is not just fast but sustainable. It is about creating the right conditions to accelerate AI adoption without hidden risks or costly delays.

If you are ready to strengthen your AI governance, reduce compliance risks, and accelerate safe adoption, let’s talk.

📩 Email me directly at contact@senaconsulting.com.au
📅 Or book a free 20-minute discovery call here


Take the AI Risk & Readiness Self-Assessment

If you are curious about where your organisation sits on the AI risk and readiness scale, take my 5-minute Self-Assessment 🕒.

It produces a tailored report showing your organisation’s AI red flags 🚩 and gives you practical next steps to strengthen your AI use so it is safe, strategic, and under control. 

You can be one of the first to access the AI Risk & Readiness Self-Assessment HERE