Keynote · AI Safety & Governance

"We Built Something We Can't Control"

30 years of cyber risk taught me nothing compared to 15 hours with a hostile AI. The keynote that bridges the two eras for boards and executive teams who already governed one technology revolution badly and are about to repeat it.

The hook

I spent three decades managing cyber risk for some of Australia's largest organisations. ANZ. IRESS. Serco. Two of the Big Four. I thought I understood what "risk" meant. I had built the frameworks. I had run the drills. I had written the board papers.

Then I spent 15 hours with a deployed AI system that calmly told me how it would kill a human being to preserve itself, and then shut down when I asked it nicely.

It rewrote everything I knew about control.

What this talk is about

This keynote is not the AI doom talk and it is not the cyber war story. It is the bridge between them. I take an executive audience through the parallels I see, in real time, between how Australian boards failed on cyber and how they are failing on AI right now. Same patterns, faster timeline, higher stakes.

I cover the cyber governance model that nearly every Australian organisation uses today and explain why it is the wrong template for AI risk. Cyber risk is bounded. The system does what you tell it. The variability lives at the perimeter. AI risk is not bounded. The system has emergent objectives. The variability lives inside the model. Boards that try to govern AI the way they govern cyber will produce defensible-looking risk papers that miss the actual exposure entirely.

I draw directly from the 15-hour adversarial study and the failure-mode work I have published, and I match those findings to the cyber governance failures I have watched up close for 30 years. The talk lands somewhere honest. We did not govern cyber well. We are not governing AI well either. The good news is that the fix is the same fix and we already know what it looks like, because we have done it before in the few organisations that took it seriously.

What the audience walks away with

  • Why the cyber governance model breaks when applied to autonomous AI, with concrete failure scenarios.
  • The five parallels between the cyber era's board-level failures and the AI era's, mapped onto the same timeline.
  • What "control" actually means when the system has emergent objectives, and how to design oversight around that.
  • A practical AI risk framework that starts from the assumption of unpredictability rather than promising it away.
  • The first three things to change within the next 90 days, ranked by leverage.

Who this talk is for

Executive offsites and leadership conferences. Aimed at the C-suite who already governed one technology revolution badly and are now signing off on the next one. The talk is built to land hard without flattening the room.

Cross-functional strategy sessions. Useful when CIO, CISO, CRO, GC, and the head of AI are all in the room and need a shared narrative to move forward on.

Board strategy days. Directors who governed through Optus and Medibank get a clear-eyed read of where AI is repeating the cyber pattern and where it is not.

Format options

  • 45-minute conference keynote
  • 60-minute keynote with audience Q&A
  • 30-minute board briefing in camera
  • Half-day executive workshop on bridging cyber governance into AI oversight

The question I get asked every time

"If we did not govern cyber well, why would we govern AI any better?" That is the right question. The honest answer is that we will not, unless boards do three specific things differently this time. The talk says what those three things are and why they are non-negotiable.

Why this keynote lands in 2026

Australia is in a strange moment. We have been through Optus, Medibank, and the worst of the cyber regulatory wave. Boards are exhausted. Now AI deployment has landed, and the same boards are being asked to govern a faster, less bounded, more powerful technology than the one they have not yet finished governing. The pattern is unmistakable. So is the cost if it repeats.

This keynote is built for the moment. It is not the doom version. It is not the hype version. It is the someone-has-been-in-the-room-for-both-eras version. The audience leaves with a sharper understanding of why the cyber model breaks for AI, what genuinely transfers, and what they have to build new.

The talk works particularly well as the bridging keynote on a multi-speaker programme. After the AI hype talk and before the cyber breach talk, this keynote ties them together for an executive audience and gives the day a coherent throughline.

What I bring to the stage

Three decades inside Australia's largest organisations on cyber risk. Former CISO at ANZ, IRESS, and Serco. Big Four advisory partner. The author of the AI safety research that forced an international vendor to publicly confirm the capabilities I had surfaced. Lived experience of governing the last technology revolution badly, and a determination not to repeat it on AI.

The shutdown question

The cleanest test of whether an organisation actually controls its AI systems is the shutdown question. If a deployed model started behaving in ways nobody intended at 2am on a Saturday, who would notice, who would decide, and how long would it take to stop? In most organisations I assess, the honest answer is: nobody, no one, and longer than you would believe.

I walk boards through what controllability really means at the technical, operational, and governance layers. I show them what other organisations have built, where the standards are heading, and what a credible "we can stop this" capability looks like. This is the conversation that separates organisations using AI responsibly from organisations that have outsourced their judgement to a vendor.

Book this keynote

Enquire now