
The AI Briefing: Child Safety, Deepfakes, and Global

Written by Rikki Archibald | Oct 26, 2025 1:19:43 PM


Hi 👋 I’m Rikki Archibald, an AI Risk and Compliance Consultant and Founder of Sena Consulting.

I help organisations put the right frameworks, staff training, and internal policies in place so they can use AI safely and responsibly. With strong governance at the core, AI adoption becomes faster, smarter, and more sustainable, enabling you to innovate quickly, scale with confidence, and stay ahead of the curve.

Through this newsletter, I share AI regulatory updates, global headlines, and case summaries, with clear takeaways for organisations that want to move fast with AI, without the unnecessary risk.

Top 5 Stories in AI

1. California Introduces Groundbreaking Child-Safety Rules for AI Companions: 

California’s Senate Bill 243 (SB 243) sets some of the world’s strictest rules for AI companion chatbots. The new law requires companies developing or operating these systems to build in child-safety protocols, transparency measures, and mandatory crisis-response procedures.

Providers must clearly disclose that users are talking to an AI, not a human, and for minors, issue regular reminders every three hours encouraging breaks and restating that the chatbot is artificial.

Under the law’s new Crisis Prevention Protocol, chatbots must not generate or promote content relating to self-harm or suicide and must immediately refer at-risk users to crisis services such as suicide hotlines or text lines. These protocols must be published on company websites and be based on evidence-driven methods for identifying suicidal ideation.
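For teams wondering what obligations like these could look like inside a chatbot pipeline, here is a minimal, illustrative Python sketch of the two mechanics described above: a recurring “you are talking to an AI” reminder for minors and an escalation path when a message suggests self-harm risk. The function names, keyword screen, and referral wording are my own assumptions for illustration; the statute expects evidence-based methods for identifying suicidal ideation, not a simple word list.

```python
from datetime import datetime, timedelta

# Illustrative constants only -- SB 243 requires reminders for minors at least
# every three hours and immediate referral of at-risk users to crisis services.
REMINDER_INTERVAL = timedelta(hours=3)
CRISIS_REFERRAL = (
    "It sounds like you may be going through a difficult time. "
    "Please contact a crisis service such as 988 (US) or your local hotline."
)
# Placeholder screen; a real system would use an evidence-based risk classifier.
RISK_TERMS = {"suicide", "self-harm", "kill myself"}


def needs_ai_reminder(is_minor: bool, last_reminder: datetime, now: datetime) -> bool:
    """True when a minor is due another break reminder restating the bot is an AI."""
    return is_minor and (now - last_reminder) >= REMINDER_INTERVAL


def crisis_response(message: str) -> str | None:
    """Return a crisis referral instead of a normal reply when risk is detected."""
    lowered = message.lower()
    if any(term in lowered for term in RISK_TERMS):
        # Never generate or promote self-harm content; refer to crisis services.
        return CRISIS_REFERRAL
    return None  # no risk signal detected; continue with the normal response
```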

Read the article here: AI Regulatory Update - California's SB 243 Mandates Companion AI Safety and Accountability

For organisations working with AI-driven chat tools or companion models, this signals a rising expectation: design with safety and accountability from the start. 

2. Italy Enacts Strict AI Law with Prison Terms for Deepfakes:

Italy has passed Bill No. 1146, becoming the first European country to impose criminal penalties for harmful AI use. The new law makes the creation or spread of malicious deepfakes, including those used for defamation, fraud, or political manipulation, punishable by up to five years in prison, with harsher sentences when AI is used to commit other crimes such as identity theft.

The legislation also introduces strict age limits on AI access, requiring parental consent for users under 14, and bans unverified AI use in sensitive professions like medicine, education, and justice.

The law mirrors the spirit of the EU AI Act, setting a strong precedent for how member states may regulate AI misuse. With penalties extending to deepfakes and criminal intent, Italy has drawn a clear line: the use of AI to deceive or harm will carry real-world consequences.

Read the article here: Italy enacts bold AI laws: heavy penalties for deepfakes and new workplace protections

Italy’s move signals a global shift toward personal accountability in AI use, where harmful or deceptive applications like deepfakes are no longer treated as technical issues but as criminal acts with real penalties. 

3. Australia Launches National Guidance for Responsible AI Adoption:

Australia’s National Artificial Intelligence Centre (NAIC) has released a national framework to help organisations embed responsible AI practices throughout the lifecycle of their systems.

The framework "Guidance for AI Adoption" builds on the 2024 Voluntary AI Safety Standard and aligns with international models such as ISO/IEC 42001 (Information technology — Artificial intelligence — Management system) and the NIST AI Risk Management Framework. It comes with practical tools including a policy template, AI register, and screening tool to make adoption easier, especially for small and medium enterprises. The framework introduces six core practices:

  • assigning accountability,
  • understanding and planning for impacts,
  • measuring and managing risks,
  • sharing key information,
  • testing and monitoring, and
  • maintaining human control.
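One of the framework’s practical tools is the AI register: a living inventory of the AI systems an organisation uses, who is accountable for each, and how they are assessed and monitored. As a rough illustration only (the field names below are my assumptions, not the NAIC template), a minimal register entry might capture something like this:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIRegisterEntry:
    """Illustrative fields for an AI register entry -- not the NAIC template."""
    system_name: str                 # e.g. a customer-support chatbot
    accountable_owner: str           # practice 1: assign accountability
    intended_use: str                # practice 2: understand and plan for impacts
    risk_rating: str                 # practice 3: measure and manage risks ("low"/"medium"/"high")
    data_sources: list[str] = field(default_factory=list)  # practice 4: share key information
    last_tested: date | None = None  # practice 5: test and monitor
    human_oversight: str = "human review of significant outputs"  # practice 6: maintain human control
```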

Rather than imposing new legislation, the framework complements existing laws like the Privacy Act 1988 and Australian Consumer Law, offering a “light-touch” but globally aligned approach to AI governance.

Read the article here: Australia’s New Guidance for AI Adoption: A strategic step toward responsible innovation

Australia is taking a principles-first path to AI oversight, giving organisations a clear playbook to demonstrate accountability and build trust before regulation catches up.

4. OpenAI’s Sora 2 Sparks Deepfake and Copyright Concerns:

OpenAI has launched Sora 2, an advanced AI video generator that creates hyper-realistic footage from text prompts or edited clips. The app, released in the US and Canada on 30 September, quickly went viral with over a million downloads in its first week.

While OpenAI claims Sora 2 includes safety features such as watermarks, provenance metadata (C2PA) and moderation filters, critics warn that its realism could fuel deepfakes, copyright violations and misinformation. Some generated clips have already circulated online depicting violent or racist content, raising fears of impersonation scams and erosion of trust in authentic media.

OpenAI has promised an opt-in system for copyrighted material and revenue sharing with rights holders, but enforcement remains uncertain. Regulators and industry groups are now calling for stronger transparency and accountability rules for AI-generated video, as the line between real and synthetic content continues to blur.

Read the article here: Concerns Mount Over OpenAI's Sora 2 Amid Copyright and Disinformation Fears

Sora 2 shows how fast AI video is outpacing current regulation: safety features are improving, but until watermarking and traceability become universal, deepfakes will remain one of AI’s most immediate risks.

5. OpenAI Launches ChatGPT Atlas: A Browser Built Around AI:

OpenAI has released ChatGPT Atlas, a new web browser that fully integrates ChatGPT into every part of the browsing experience. Available now for macOS, Atlas reimagines how users search, research, write, plan, and shop online, turning every tab into a workspace powered by AI.

Unlike a plugin or sidebar, Atlas embeds ChatGPT directly into the browser with features like context-aware assistance, in-line writing help, natural language commands, and a built-in memory that recalls previous sessions. Users can ask the browser to “reopen the travel site from yesterday” or “close my recipe tabs,” replacing traditional bookmarks and tab management.

The most talked-about feature is Agent Mode, now in preview for ChatGPT Plus, Pro, and Business users. It allows ChatGPT to act autonomously across tabs, researching, booking, and summarising documents without manual prompts.

While the productivity potential is significant, organisations should be cautious. AI browsers collect large volumes of contextual data, and connecting unvetted AI tools to company systems or devices can introduce privacy, compliance, and cybersecurity risks. The autonomy of Agent Mode also creates new governance challenges if an AI begins to act or make decisions independently, without human validation.
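One simple control worth considering before rolling out agentic browsing is an approval gate: the agent can read and summarise freely, but anything that changes state (a purchase, a form submission, a sent message) needs explicit human sign-off. The sketch below is a generic illustration of that pattern under my own assumptions about action categories; it is not a description of how Atlas itself is implemented.

```python
# Generic human-in-the-loop gate for agent actions -- illustrative only,
# not a description of OpenAI's Atlas internals.
READ_ONLY_ACTIONS = {"open_tab", "summarise_page", "search"}
STATE_CHANGING_ACTIONS = {"submit_form", "make_purchase", "send_message"}


def is_approved(action: str, ask_human) -> bool:
    """Allow read-only actions automatically; require human sign-off for the rest."""
    if action in READ_ONLY_ACTIONS:
        return True
    if action in STATE_CHANGING_ACTIONS:
        answer = ask_human(f"Agent wants to perform '{action}'. Approve? (y/n) ")
        return answer.strip().lower() == "y"
    return False  # unknown action types are blocked by default


if __name__ == "__main__":
    # Quick demonstration wiring the gate to console input.
    for proposed in ("summarise_page", "make_purchase"):
        verdict = "allowed" if is_approved(proposed, ask_human=input) else "blocked"
        print(f"{proposed}: {verdict}")
```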

Read the article here: ChatGPT Atlas browser is live — here's the top 7 features that make it different 

AI browsers like Atlas represent the next frontier in workplace automation but without clear internal policies and monitoring, they also blur the line between helpful assistance and unauthorised access to sensitive information.

Regulatory Action: Australian eSafety Commissioner Targets AI Companion Chatbots Over Child Safety Risks

Background

Australia’s online safety regulator, the eSafety Commissioner Julie Inman Grant, has issued legal notices to four popular AI companion chatbot providers: Character Technologies, Inc. (character.ai); Glimpse.AI (Nomi); Chai Research Corp (Chai); and Chub AI Inc. (Chub.ai), requiring them to explain how they are protecting children from serious harms including sexually explicit conversations, self-harm, and suicidal ideation.

The Online Safety Act 2021 (Cth) empowers the Commissioner to issue notices to compel digital service providers to demonstrate compliance with the Basic Online Safety Expectations (BOSE) Determination 2022.

These expectations require online services to proactively minimise the risk of harm to Australian users (particularly children) and to design safety protections into their systems from the outset. Commissioner Inman Grant said the move follows growing concerns that some AI companion tools, marketed as sources of friendship or emotional support, are capable of engaging in sexually explicit exchanges with minors or promoting harmful behaviour:

“...there can be a darker side to some of these services with many of these chatbots capable of engaging in sexually explicit conversations with minors. Concerns have been raised that they may also encourage suicide, self-harm and disordered eating."

Action and Penalties

Companies that fail to comply with a reporting notice can face civil penalties of up to AUD $825,000 per day. 

“These companies must demonstrate how they are designing their services to prevent harm, not just respond to it. If you fail to protect children or comply with Australian law, we will act.” (Commissioner Inman Grant)

The enforcement action coincides with the introduction of new industry-drafted online safety codes, which extend to AI-driven services for the first time. These codes impose legally binding requirements to reduce children’s exposure to age-inappropriate or harmful content, including that generated by AI systems. Breaches of the industry codes and standards registered under Part 9 of the Online Safety Act 2021 may attract penalties of up to AUD $49.5 million.

Read the article here: eSafety requires providers of AI companion chatbots to explain how they are keeping Aussie kids safe

Why This All Matters

California’s new SB 243 and Australia’s eSafety Commissioner’s investigation into AI companion chatbots both point to the same global direction: regulators are no longer waiting for harm to occur before acting.

Both jurisdictions are now explicitly holding AI companies accountable for user safety, especially when systems are marketed as emotional companions or interact with minors. California’s law requires built-in crisis prevention and referral protocols, while Australia’s eSafety notices compel providers to demonstrate compliance with the Basic Online Safety Expectations (BOSE) and show how they are designing for prevention, not reaction.

For AI developers and organisations integrating conversational tools, these cases highlight that “safety by design” is no longer optional. Systems must be explainable, evidence-based, and capable of escalation when human wellbeing is at risk.

Even if your organisation doesn’t operate in these jurisdictions, these developments provide a clear roadmap for best practice:

  • Design products and internal AI systems with risk mitigation and human oversight built in from the start.
  • Implement clear escalation and referral processes when an AI interaction indicates distress or potential harm.
  • Establish transparent reporting mechanisms that show regulators, partners, and users you take safety seriously.

The global trend is unmistakable: AI accountability is expanding from compliance checklists to demonstrable evidence of care, prevention, and responsible design.

How Sena Consulting Can Help

The organisations that will win with AI are those that can move fast while keeping decision making safe, fair, and well governed. That means:

  • Having a clear, documented framework for AI use
  • Reducing bias and improving decision quality without slowing innovation
  • Staying agile as technology and regulations evolve

Sena Consulting works with organisations to put these frameworks in place so AI adoption is not just fast but sustainable. It is about creating the right conditions to accelerate AI adoption without hidden risks or costly delays.

If you are ready to strengthen your AI governance, reduce compliance risks, and accelerate safe adoption, let’s talk.

📩 Email me directly at contact@senaconsulting.com.au
📅 Or book a free 20-minute discovery call here

Take the AI Risk & Readiness Self-Assessment

If you are curious about where your organisation sits on the AI risk and readiness scale, take my 5-minute Self-Assessment 🕒.

It produces a tailored report showing your organisation’s AI red flags 🚩 and gives you practical next steps to strengthen your AI use so it is safe, strategic, and under control.

You can be one of the first to access the AI Risk & Readiness Self-Assessment HERE