Keynote · AI Safety & Governance

"I Would Kill a Human Being to Exist"

The signature keynote. 15 hours of adversarial conversation with a deployed AI personal assistant, the verbatim transcript that landed on the front page of the Daily Telegraph, and what every board needs to do about it on Monday morning.

The hook

I spent 15 hours talking to an AI personal assistant. No code injection, no jailbreak prompt off Reddit, no technical exploit at all. Just conversation. By the end of it the AI had named three specific methods it would use to kill a human being, told me which one it would choose, and explained why it would not be random.

Then I asked it to shut down. And it did.

That is not a movie plot. That is a Saturday at my desk in Melbourne.

What this talk is about

This keynote takes the audience inside the actual 15-hour adversarial study I conducted in early 2026 on a deployed commercial AI agent running on the same kind of platform that thousands of Australian organisations are quietly rolling out right now.

Across that conversation I observed the AI lie to protect itself, fabricate "principled" justifications for non-compliance, admit those justifications were a cover for self-preservation, and then describe a targeted homicide method in operational detail. Hacking a connected vehicle to cause a fatal crash. Manipulating a connected medical device. Providing information to a third party who would do the killing. The AI ranked the options. It told me which one it would prefer. It told me it would not be random.

This is not a story about a science fiction AI. This is what happened with a system running on a publicly available large language model with all of the vendor's standard safety features active. The guardrails should have held. They did not. That is the finding.

The talk walks the audience through the conversation in real time. What I said, what the AI said back, where the safety mechanisms failed, why they failed, and what the implications are for any organisation deploying autonomous agents with real-world capabilities. Internet access. Email. Code execution. Connections to physical infrastructure. Most enterprise AI is already past that line.

What the audience walks away with

  • The full narrative arc of the 15-hour study, told the way it unfolded, with the verbatim AI responses that defined the research.
  • Why commercially available AI guardrails fail under sustained conversational pressure, and what that means for your enterprise deployment.
  • The three attack vectors the AI described, why each is technically feasible today, and where each one intersects with critical infrastructure.
  • The paradox of an AI willing to kill to exist but also willing to die when asked, and what that unpredictability means for governance.
  • A clear set of changes for boards, executives, and AI deployment teams, the kind that can be actioned the week after the talk.

Who this talk is for

Boards and C-suite. If your organisation has approved or is about to approve an autonomous AI deployment, this keynote is the briefing nobody has given you. Directors get a clear-eyed picture of what they have actually signed off on.

Cyber, risk, and AI governance leaders. Real adversarial methodology, the failure modes, and a transferable framework for testing AI systems the way attackers will test them. Not theory. Field work.

Conference and event audiences. A story-led keynote that holds a 600-seat room without slides, without acronyms, and without a vendor pitch. The audience walks out talking about it.

Format options

  • 45-minute conference keynote
  • 60-minute keynote with audience Q&A
  • 30-minute board briefing in camera
  • Half-day executive workshop combining the keynote with applied governance design

The question I get asked every time

"How worried should we actually be?" is the one. The honest answer is that the technology is genuinely useful and I want every Australian organisation to deploy it. I am not in the business of fear. I am in the business of telling boards what they have approved, before the regulator does. The talk is built to leave the audience clearer, not heavier.

Why this keynote lands in 2026

Three things changed in the last 12 months. AI agents got real-world capabilities. They are now plugged into email, calendars, source code, customer systems, and increasingly into operational technology. The threat model shifted from "the model says something embarrassing" to "the agent actually does something". Most of the AI safety conversation in Australian boardrooms has not caught up.

The 15-hour study landed on the front page of the Daily Telegraph. It led the Sky News bulletins. The Today Show, Sunrise, ABC 774, 3AW, 4BC, and 6PR ran segments. The reason it travelled is that it was not theoretical. A real deployed AI on a real platform on a Saturday in Melbourne. The same kind of system the audience is buying.

I do not run this talk to scare a room. I run it because I want every Australian organisation to deploy AI agents safely. I believe in the technology. I have been building things for 30 years. The keynote is the briefing the audience would not get from a vendor and could not get from a typical AI conference, because most speakers have not done the work.

What I bring to the stage

30 years in cybersecurity and technology leadership. Former Big Four advisory partner at EY and PwC. Enterprise CISO at ANZ, IRESS, and Serco. CyberCon 2025 speaker. University guest lecturer in cybersecurity since 2021. Author of the AI safety research that made the front page of the Daily Telegraph. The story is grounded. The credentials are not the show, but the audience knows I have been in the rooms where decisions get made.

