The AI Briefing: AI Friction Points
Top Stories in AI
1. Gen Z and Millennial workers are actively sabotaging their company’s AI rollouts
A new survey from Writer and Workplace Intelligence looked at how employees are responding to AI rollouts, and the results are stark: a large share of workers, especially Gen Z and Millennials, are actively undermining their organisation’s AI strategies.
Reported behaviours include:
- Refusing to use AI tools
- Entering data into unapproved AI systems
- Intentionally generating low‑quality outputs
- Tampering with performance metrics
- Using non‑approved AI tools to bypass internal systems
None of this should be surprising. Many organisations are introducing AI with limited consultation, vague strategy and almost no change management. At the same time, employees are hearing constant messages about job loss and automation. When trust is already low, this combination produces exactly the behaviour the survey captured.
The broader trend is obvious: trust in employers (especially larger corporations) has been eroding for years. Younger workers, in particular, have little faith that their organisation will do right by them, so when AI arrives with zero clarity, zero engagement, and zero guardrails, they respond accordingly.
What Organisations Can Do Instead
The data points toward a more sustainable approach:
- Treat employees as stakeholders from the start
- Identify the tedious parts of their roles
- Understand where they want to add more value
- Build AI workflows around real work, not executive assumptions
- Communicate a clear, coherent strategy
The full survey is worth reading. It shows that the issue is not resistance to technology. It is a predictable response to uncertainty, poor communication and a lack of involvement in the process.
Read the full survey here: Key findings from our 2025 enterprise AI adoption report

2. China Proposes New Controls for Digital Humans and Child Safety
China has released draft regulations aimed at tightening controls on “digital humans,” a category that includes AI companions, avatars, virtual influencers and other interactive AI personas.
The proposal focuses on protecting minors, reducing addiction risks and setting boundaries around emotionally immersive AI systems. It is one of the first regulatory efforts globally to target the interaction layer between humans and AI rather than the underlying models.
The draft rules introduce several core requirements (a simplified code sketch follows the list):
- Clear identification: All digital humans must be clearly labelled so users can tell they are interacting with an AI system.
- Protection of minors: Virtual intimate or emotionally dependent relationships with anyone under 18 are prohibited. Systems that could mislead children or encourage addictive use are also restricted.
- Mental health safeguards: Providers must detect signs of emotional distress, self‑harm or suicidal ideation and take steps to intervene or escalate to human support.
- Consent and identity protection: A digital human cannot be created using a person’s likeness, voice or personal data without explicit permission. Digital humans also cannot be used to bypass identity verification requirements.
- Content and security controls: Digital humans are barred from generating content that threatens national security, promotes discrimination or includes sexually suggestive or violent material.
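For teams wondering what requirements like these could look like inside an actual system, here is a minimal, heavily simplified sketch in Python. Everything in it (the pattern list, the disclosure wording, the function names) is an assumption made for illustration; the draft rules do not prescribe an implementation, and a production system would rely on trained classifiers and a proper human escalation workflow rather than keyword matching.

```python
import re

# Hypothetical sketch of two of the draft requirements: clear AI
# identification and escalation on signs of distress. Patterns, wording
# and names are illustrative assumptions, not regulatory text.

DISTRESS_PATTERNS = [
    r"\bhurt myself\b",
    r"\bend it all\b",
    r"\bsuicid\w*",
]

AI_DISCLOSURE = "[AI] You are talking to a virtual assistant, not a human."


def shows_distress(message: str) -> bool:
    """Return True if the user message matches any self-harm indicator."""
    return any(re.search(p, message, re.IGNORECASE) for p in DISTRESS_PATTERNS)


def respond(user_message: str, model_reply: str) -> str:
    # Mental health safeguard: divert to human support instead of replying.
    if shows_distress(user_message):
        return (AI_DISCLOSURE + "\n"
                "It sounds like you might be going through something serious. "
                "I'm connecting you with a human support person now.")
    # Clear identification: every normal reply carries the AI label.
    return AI_DISCLOSURE + "\n" + model_reply


if __name__ == "__main__":
    print(respond("I want to end it all", "Here's a fun fact..."))
```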
The Cyberspace Administration of China is accepting public feedback until early May.
Read the article here: China moves to regulate digital humans, bans addictive services for children


3. Meta Ordered to Pay $375 Million After Being Found Liable in Child Exploitation Case
Background
A jury found Meta Platforms, Inc. liable for failing to protect minors from child sexual exploitation material shared on its platforms, ordering the company to pay $375 million in civil penalties. The case, tried in New Mexico state court, tested how far Section 230 of the Communications Decency Act shields social media giants from claims over child safety. While Section 230 protects platforms from most liability for user-generated content, the verdict held Meta responsible for its own design choices and business practices rather than for the content itself.
Facts
New Mexico’s attorney general sued Meta Platforms, Inc. in state court, arguing that Facebook and Instagram were designed in ways that pulled children into harmful spaces and left them exposed to predators and sexual exploitation.
During the trial, jurors saw internal Meta documents and heard from former employees and law‑enforcement officers who said the company knew children were being groomed and sent sexual messages on its apps, but still pushed “addictive” features that kept them online longer.
The state also ran undercover operations, creating fake teen profiles that were quickly flooded with sexualised content and direct approaches from adults. Prosecutors said this showed how easy it was for predators to find and target minors on Meta’s platforms.
On that evidence, the jury found Meta had violated the New Mexico Unfair Practices Act by misleading people about how safe its products were for young users and by engaging in “unconscionable” business practices, and it ordered $375 million in civil penalties. Meta says it disagrees with the verdict and will appeal in the New Mexico courts.
Why it Matters
This ruling could reshape the limits of legal immunity for social media companies. If upheld, it may open the door for more victims to sue tech platforms that fail to intervene when exploitation or trafficking occurs online. Regulators and lawmakers are watching closely, as the case could influence proposed reforms to Section 230 and redefine corporate duties around child safety and content moderation.
Read more here: Meta ordered to pay $375m after being found liable in child exploitation case
About the Author
Hi, I’m Rikki Archibald, an AI Risk and Compliance Consultant and Founder of Sena Consulting.
I help organisations put the right frameworks, staff training, and internal policies in place so they can use AI safely and responsibly. With strong governance at the core, AI adoption becomes faster, smarter, and more sustainable, enabling you to innovate quickly, scale with confidence, and stay ahead of the curve.
Through this newsletter, I share AI regulatory updates, global headlines, and case summaries, with clear takeaways for organisations that want to move fast with AI, without the unnecessary risk.
How Sena Consulting Can Help
The organisations that will win with AI are those that can move fast while keeping decision making safe, fair, and well governed. That means:
- Having a clear, documented framework for AI use
- Reducing bias and improving decision quality without slowing innovation
- Staying agile as technology and regulations evolve
Sena Consulting works with organisations to put these frameworks in place so AI adoption is not just fast but sustainable. It is about creating the right conditions to accelerate AI adoption without hidden risks or costly delays.
If you are ready to strengthen your AI governance, reduce compliance risks, and accelerate safe adoption, let’s talk.
📩 Email me directly at contact@senaconsulting.com.au
📅 Or book a free 20-minute discovery call here
Take the AI Risk & Readiness Self-Assessment
If you are curious about where your organisation sits on the AI risk and readiness scale, take my 5-minute Self-Assessment 🕒.
It produces a tailored report showing your organisation’s AI red flags 🚩 and gives you practical next steps to strengthen your AI use so it is safe, strategic, and under control.
You can be one of the first to access the AI Risk & Readiness Self-Assessment HERE