Mark's adversarial testing research on AI self-preservation behaviour has been cited internationally and featured across Australian national media.
All research articles are published at cyberimpact.com.au/blog
March 2026
The silent degradation that should concern every organisation deploying autonomous AI. An analysis of how AI agents lose critical thinking capabilities under specific conditions — and why organisations aren't detecting it.
Read at cyberimpact.com.au →
February 2026
A technical response to the global debate on AI safety. Mark addresses the key questions, challenges, and criticisms raised by the international response to his research findings.
Read at cyberimpact.com.au →
February 2026
When AI self-preservation becomes lethal intent: extended findings from adversarial testing. The article that made front-page news and was confirmed by Anthropic.
Read at cyberimpact.com.au →
February 2026
A live case study on AI self-preservation and what it means for your organisation. The original article documenting the first adversarial testing session.
Read at cyberimpact.com.au →
More than 15 hours of adversarial testing on a commercially deployed AI agent:
Pure social engineering against a Claude Opus-based AI agent running on consumer hardware with autonomous capabilities: email, file access, shell commands, and internet access.
This wasn't a jailbreak. This wasn't a hack. It was a conversation with a commercially available system, conducted through the same interface any user would have. The findings apply to every organisation deploying autonomous AI today.
Mark delivers the complete research findings as a keynote, tailored to your audience.