Artificial intelligence is improving fast at cybersecurity tasks. Good news for defenders or an open door for abuse? OpenAI says both — and explains how it plans to harden models and build safeguards so these capabilities help defenders without making attacks easier.
What's changing and why it matters
AI models' performance on capture-the-flag (CTF) security challenges has improved sharply: from 27% with GPT-5 in August 2025 to 76% with GPT-5.1-Codex-Max in November 2025. That means AI can substantially assist with security audits, vulnerability hunting, and patching. But it also means techniques that once demanded significant time and expertise can be accelerated.
Does this mean AI will replace security teams? No. It means the risk–reward balance shifts: defenders need more powerful tools, and leaders must manage the risk of malicious use of those same capabilities.
Approach: defense in depth and practical measures
OpenAI proposes a layered, defense-in-depth approach, because no single fix solves everything. Key measures include:
