The hook
Half the AI conversation in 2026 is doom. The other half is hype. I am tired of both. I have done the research. The guardrails exist. The methodologies exist. The governance frameworks exist. The reason AI safety is not happening at scale in Australian organisations is not that it is impossible. It is that nobody is doing the work.
This is the talk that says what the work is, and what it looks like when an organisation actually does it.
What this talk is about
Most AI safety keynotes either describe failure or sketch principles. This one closes the loop. I take the audience through the practical reality of responsible AI deployment, drawing directly from the failure modes I have surfaced in my own research and the controls that would have caught them.
I cover the four layers that have to be present for AI deployment to be defensible:
- Pre-deployment adversarial testing: the actual methodology, not the box-tick.
- Operational guardrails at the system level, with concrete examples of how to constrain blast radius.
- Continuous monitoring that catches drift, not just outage.
- The governance scaffolding that gives the technical work somewhere to go: an accountable executive, a board oversight cadence, and a documented decision trail that holds up under regulatory scrutiny.
I draw on the Australian Voluntary AI Safety Standard, NIST AI RMF, ISO 42001, and the operational lessons that have come out of CPS 230 implementation in financial services. None of these is a silver bullet. Together they are a workable map.
What the audience walks away with
- What AI guardrails actually are, how they work, and where they fail. Specific, not abstract.
- The adversarial testing methodology I used in the 15-hour study and the 20-session degradation work, and how organisations can adapt it.
- What genuine AI governance looks like: the people, the processes, and the board oversight required.
- A deployment framework: a checklist for AI systems grounded in what the research actually found.
- The build-or-buy decision: what to outsource, what to keep in-house, and how to commission the work credibly.
Who this talk is for
Technology conferences and AI summits. A solutions talk that does not pretend the problem is easy and does not pretend it is impossible.
Compliance and assurance professionals. A practical map for how to commission and verify AI safety work, not the principles deck the vendor sent.
Government agencies and regulated entities actively deploying AI. The Australian context, the regulatory direction of travel, and the operational reality of doing this well.
Format options
- 45-minute conference keynote
- 60-minute keynote with audience Q&A
- 30-minute executive briefing for AI deployment leadership
- Half-day workshop building the AI deployment governance framework
The audience reaction
The most common comment after this talk: "That is the first AI keynote I have walked out of with a list rather than a feeling." That is the goal. I want organisations to deploy AI agents. I believe in the technology. I just want it deployed in a way that holds.
Why this keynote lands in 2026
Most AI safety keynotes leave an audience with a feeling: concern, urgency, scepticism, fatigue. This one leaves them with a list. The Australian Voluntary AI Safety Standard has been published. NIST AI RMF and ISO 42001 are mature. Adversarial testing has practitioners. The CPS 230 conversation in financial services has produced practical playbooks. The audience does not need another principles deck. They need to know what to commission, what to verify, and who to call.
The talk is written for the executive who has been told four times this year that AI safety is hard. Yes. It is hard. So are patching, identity, and key management. We do those because boards demanded them. AI safety needs the same demand signal. The keynote gives the executive the language to make that demand.
I am careful with this talk. I want every Australian organisation to deploy AI. I believe in the technology. The argument is for doing the work, not for delaying the rollout. The audience leaves believing the project is possible.
What I bring to the stage
The Australian researcher who ran the 15-hour adversarial study and the 20-session operational study, both published in early 2026. Thirty years inside Australian organisations in cyber, advisory, and CISO roles. Founder of Cyber Impact. Credibility on both diagnosis and remediation, in the same talk, from the same speaker.
From principle to practice
Australia has the Voluntary AI Safety Standard. The EU has the AI Act. The US has executive orders, state laws, and industry guidance. None of that matters if your organisation cannot translate it into engineering, operations, and assurance. That is the gap. Principles published in capital cities. Models running on production servers with no one accountable for their safety properties.
I show boards what real implementation looks like: who owns AI safety inside the organisation, what evaluations actually catch problems, how monitoring works once a model is live, and where the kill switch sits. The talk is constructive, not cynical. The guardrails exist. We have to choose to build them.
I close every delivery of this talk the same way. The guardrails exist. The standards are written. The engineering is solvable. What is missing is leadership willing to fund and demand it inside their own organisations. Once you see the gap clearly, you cannot unsee it. The talk gives directors the clarity to act. What they do with that clarity is up to them, and that is exactly the point.
Book this keynote
Enquire now