AI Risk & Safe Use Training for Teams
A one-hour, all-team training designed to raise AI risk awareness, set clear boundaries for safe use, and help organisations avoid common AI mistakes.
Who this session is for
This session is designed for organisations that are already using AI, experimenting with AI tools, or aware that staff are using AI informally.
Delivered in plain, sector-neutral language, the session is suitable for:
- Teams across admin, operations, marketing, sales, support, and education
- Managers and senior leaders responsible for risk, compliance, or governance
- Organisations without a clear AI policy or approval process yet
- Organisations operating in regulated or privacy-sensitive environments
What the session covers

- What AI is and where it already appears in day-to-day work, including embedded AI
- The difference between generative AI, other AI systems, and AI agents
- What is and is not safe to put into AI tools like ChatGPT, Gemini, or Claude
- Common AI risks, including hallucinations, unreliable outputs, and data exposure
- How unapproved or free AI tools can create risk for organisations
- Practical settings and habits that make AI use safer
- What teams and leaders should have in place to reduce risk as AI use scales
- Real-world examples of AI failures from the legal, government, and education sectors
What participants will walk away with
- A shared baseline of understanding about safe AI use
- Clear boundaries around what should and should not be shared with AI tools
- Practical guidance that applies immediately to daily work
- Greater awareness of organisational risk and personal responsibility
- Confidence to ask the right questions before using new AI tools

What’s included
- One-hour live online training session, delivered to whole teams
- Pre-session intake form to lightly tailor content to your organisation, tools, and jurisdiction
- Industry-neutral language with sector-relevant examples
- Interactive polls and Q&A during the session
- Post-session PDF tip sheet summarising key points
Who delivers the training
This session is delivered by Rikki Archibald.
- Australian lawyer by background with experience managing courts and tribunals, overseeing large-scale technology rollouts in government, and reporting on regulatory compliance across public sector agencies
- Holds an MBA in Leadership and Strategy and a Postgraduate Certificate in AI Ethics, Governance, and Strategy, with a strong focus on EU and UK AI regulation
- Over a decade of experience in high-privacy sectors including higher education, industrial relations, mental health, and mining
- Consultant to global education providers across the UK, Australia, France, and Italy, supporting safe technology adoption, automation, and policy alignment
- Specialises in AI risk, compliance, and ethical implementation, helping teams use AI safely, strategically, and with confidence
This session is not legal advice. It is designed to raise awareness, reduce risk, and support responsible AI use across teams.

Interested in this session?
If you’d like to see whether this training would be a good fit for your organisation, I offer a short, no-obligation call to talk through your context, audience, and any existing AI policies or tools.