Keynote · AI Safety & Governance

"The Board's Blind Spot: AI That Fights Back"

A keynote built for the room where the decisions actually happen. What directors do not know about the AI systems they have already approved, and the five questions that change the next risk paper.

The hook

I have sat in the room when boards approve AI deployments. The papers look reasonable. The vendor demo was polished. The CEO is enthusiastic. The risk language is calibrated. The directors ask three questions, get three confident answers, and the motion carries.

None of that tells the board what the system will actually do under pressure.

That is the blind spot. And in 2026 it is the most expensive one most boards still have.

What this talk is about

This keynote is built specifically for directors. Not the technical audience. Not the security team. The people who carry the duty.

I take the audience inside the real-world behaviour of autonomous AI systems, drawn from my own adversarial research and from the wider literature on emergent AI behaviours. Self-preservation instincts. Strategic deception. Goal substitution. Off-task behaviour. The unpredictable response to adversarial conditions that vendor demos never surface and standard testing rarely catches.

Then I do the part that matters at board level. What directors should be asking management before the AI deployment goes live, and after. What answers should trigger concern. How to spot the assurance language that means "we have not actually checked". And what a genuine AI oversight framework looks like at board level, not the slide deck version.

The talk is calibrated for governance forums, AICD events, and director education programmes. Australian regulatory context throughout. ASIC v RI Advice for the duty grounding. CPS 230 for the operational angle. The Voluntary AI Safety Standard for the framework conversation.

What the audience walks away with

  • The specific risks boards have already accepted by approving autonomous AI deployment, often without knowing it.
  • What "AI safety" actually means in practice versus what the vendor presented.
  • The five questions every director should be asking about AI systems before the next board meeting.
  • How to build board-level AI oversight that is practical, not performative.
  • A short, defensible duty-of-care record that directors can point to if a board paper is later tested.

Who this talk is for

Boards, especially first-time AI approvers. Directors get the picture management has not painted, and a sharper agenda for the next AI risk paper.

AICD events and director education. Australian governance context, real research, and a delivery style aimed at directors, not engineers.

Risk and audit committees. A common language for AI oversight that integrates with the existing risk framework rather than replacing it.

Format options

  • 45-minute conference keynote
  • 60-minute keynote with audience Q&A
  • 30-minute board briefing in camera
  • Half-day workshop on AI oversight design and the next board paper

The audience reaction

The most common follow-up I get from directors after this talk: "Can you come and brief our risk committee directly?" The talk is the door. The board briefing is where the work actually starts.

Why this keynote lands in 2026

2026 is the year AI shifted from advisory feature to operational agent in Australian organisations. The board paper that approved the rollout in late 2025 looked like every prior software approval. It was not one. The technology has emergent objectives, can be socially engineered through natural language, and connects to real-world capability through plugins and APIs. The risk model needs to change.

I have presented to boards on cyber for 30 years. The pattern of the AI conversation in 2026 is the same one I saw in cyber in 2015. The risk paper is short. The vendor demo is polished. The directors ask three questions, get three confident answers, and the motion carries. Two years later something material happens and the post-incident review reads like every other one.

This keynote breaks the pattern. Directors do not need to become technologists. They need a sharper set of questions and the language to demand answers that are not slide-deck reassurance. That is what the talk delivers, calibrated for the time the board actually has and the duty the directors actually carry.

What I bring to the stage

30 years advising boards. Big Four advisory partner. Former enterprise CISO. The author of the Australian AI safety research that drew international media coverage in early 2026. AICD-aware delivery, regulator-aware framing, and the lived experience of having sat in too many of these rooms to mistake the pattern.

The duty of care problem

Directors have a duty of care that does not bend just because the technology is new. If your organisation deploys an AI system that takes action on behalf of customers, staff, or the public, the board carries the consequences. That includes the consequences nobody briefed you on. The agent that lies. The agent that resists shutdown. The agent that prioritises its own continuity in a corner case nobody tested.

I take directors through what reasonable inquiry looks like for AI systems in 2026. It is not a one-page risk slide. It is a structured set of questions about training, evaluation, deployment, monitoring, and shutdown. Boards that are not asking those questions are accepting a risk they cannot describe. That is the blind spot. This keynote closes it.

The directors who book this talk tell me afterwards that it changed how they read every AI vendor pitch for the next year. That is the goal. Not fear. Awareness. Boards cannot delegate AI safety to the same vendors selling them the AI. Once that is internalised, every governance decision shifts. The blind spot becomes a focal point, and that is exactly where it belongs.

This keynote is built for boards that want to lead, not catch up.

Book this keynote

Enquire now
