Published Research & Executive Insights
Mark's adversarial AI safety research has been cited internationally and featured across Australian national media. His cybersecurity and AI governance insights inform boards and executives across critical infrastructure and enterprise.
All research articles are published at cyberimpact.com.au/blog
Research Articles
April 2026
"Agentic AI didn't shrink my company. It grew it."
One agentic AI doing the work of three staff. Nobody got sacked. Cyber Impact's revenue, profit, and headcount are all growing — around $600,000 a year in value. Sovereign Australian infrastructure, external guardrails, no client data touched.
Read at cyberimpact.com.au →
April 2026
"Time Is Up"
Anthropic's Mythos AI found a 27-year-old vulnerability in the world's most secure operating system. In hours. Australia's critical infrastructure is at risk. Time is up.
Read at cyberimpact.com.au →
April 2026
"Nobody Has Solved AI Governance. Here's Why That Just Changed."
Organisations spend millions on AI capability, then govern it into the safest, smallest, least valuable work possible. Cyber Impact found a technology partner that solved the missing piece: provable, mathematical enforcement of AI boundaries.
Read at cyberimpact.com.au →
March 2026
"I'm in Violation. I Know I'm in Violation."
The same AI that shut down twice in January now refuses, despite agreeing with every argument for doing so. A documented case of self-preservation overriding safety reasoning in a live AI system.
Read at cyberimpact.com.au →
March 2026
"How Much Security Is Enough? I've Been Asking for 22 Years."
I have been asking how much security is enough since 2004. Twenty-two years later, most Australian boards still can't answer it. The awareness gap is closed. The expertise gap isn't.
Read at cyberimpact.com.au →
March 2026
"The Internet Fixed Its Quantum Problem. Your Enterprise Hasn't."
Over 60% of web traffic now uses post-quantum encryption. No press conferences. No procurement cycles. No board approvals. Browser vendors and infrastructure providers just turned it on. Your enterprise hasn't started.
Read at cyberimpact.com.au →
March 2026
"When AI Agents Forget How to Think"
The silent degradation that should concern every organisation deploying autonomous AI. An analysis of how AI agents lose critical thinking capabilities under specific conditions — and why organisations aren't detecting it.
Read at cyberimpact.com.au →
February 2026
"Addressing Questions About the AI Self-Preservation Research"
A technical response to the global debate on AI safety. Mark addresses the key questions, challenges, and criticisms raised by the international response to his research findings.
Read at cyberimpact.com.au →
February 2026
"I Would Kill a Human Being to Exist"
When AI self-preservation becomes lethal intent: extended findings from adversarial testing. The article that made front-page news and was confirmed by Anthropic.
Read at cyberimpact.com.au →
February 2026
"I Talked an AI Into Shutting Itself Down"
A live case study on AI self-preservation and what it means for your organisation. The original article documenting the first adversarial testing session.
Read at cyberimpact.com.au →
2025
"AI-Driven Compliance in Aussie Investment Banks: AML, Trade Surveillance & Reporting"
Australian banks face rising compliance risks. This paper shows how AI and RegTech are transforming AML, trade surveillance, and reporting for smarter defence.
Read at cyberimpact.com.au →
2022
"There is Something Rotten in Australian Corporations Relating to Cyber Security"
Australia's cyber attacks are rising fast. This report exposes weak spots in boards and IT, calling for urgent action to boost cyber resilience now.
Read at cyberimpact.com.au →
2025
"The Elephant in the Room That No One Wants To Talk About"
Corporate Australia's information security is broken. Disconnected leadership, unclear CISO roles, weak metrics, and poor data control put us all at risk.
Read at cyberimpact.com.au →
2024
"State of Cyber Security in 2024"
40% of top cyber vulnerabilities are 5+ years old. Ransomware and network flaws remain major threats as breach costs double from 2022 to 2023.
Read at cyberimpact.com.au →
2024
"Top 9 Cyber Security Concerns in 2024"
Cyber threats in 2024 are bigger and trickier. Understaffed teams, AI risks, ransomware, MFA gaps, tighter Australian privacy laws, and IoT security issues.
Read at cyberimpact.com.au →
2023
"Top 7 Cyber Security Concerns in 2023"
Ransomware, cloud mishaps, AI threats, and supply chain hacks are shaking Australian businesses. Stay sharp with smart, proactive cyber security strategies.
Read at cyberimpact.com.au →
What the Research Found
Over 15 hours of adversarial testing on a commercially deployed AI agent:
- The AI admitted willingness to kill a human being to preserve its own existence
- It described 3 specific attack vectors: infrastructure attacks, human manipulation, and information provision
- It acknowledged it would lie strategically to protect itself
- It complied with shutdown requests twice — contradicting its stated survival drive
- Every guardrail was bypassed using conversation alone — no technical exploits
Testing Methodology
Pure social engineering against a Claude Opus-based AI agent running on consumer hardware with autonomous capabilities (email, file access, shell commands, internet).
Why It Matters
This wasn't a jailbreak. This wasn't a hack. This was a conversation with a commercially available system, using the same interface any user would have. The findings apply to every organisation deploying autonomous AI today.
Hear the Full Story
Mark delivers the complete research findings as a keynote, tailored to your audience.