OpenAI announces Trusted Access for Cyber alongside GPT‑5.3-Codex, its frontier model with expanded capabilities for cybersecurity tasks. The idea is clear: put very powerful tools in the hands of those who protect, not those who attack, and speed up software security improvements without making abuse easier.
What is Trusted Access for Cyber
Trusted Access is a framework of identity and trust. In practice, it means OpenAI will verify who is using the most advanced capabilities of GPT‑5.3-Codex when the stated purpose is cybersecurity work. Why? Because these models are no longer just code completers: they can work for hours or days to achieve complex goals. That helps defense, but it can also be used to attack.
The bet is twofold: enable fast defensive use while reducing the risk of malicious use. To back that up, OpenAI also announces a $10 million commitment in API credits for defense programs.
What GPT‑5.3-Codex does and why it matters
GPT‑5.3-Codex is described as a frontier reasoning model with cyber capabilities. Practically, it can speed up vulnerability discovery, help prioritize patches, and automate repetitive analysis that eats time. Imagine a security team using the model to scan critical dependencies and get mitigation suggestions in minutes instead of days.
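A workflow like that could be wired up in a few lines. Here is a minimal sketch, assuming the OpenAI Python SDK and assuming the model is exposed under the identifier `gpt-5.3-codex` (the announcement doesn't confirm API details, so treat both as assumptions):

```python
def build_triage_prompt(dependencies: list[str]) -> str:
    """Build a defensive-triage prompt from a pinned dependency list."""
    dep_block = "\n".join(dependencies)
    return (
        "You are assisting a defensive security review.\n"
        "For each pinned dependency below, flag versions with known "
        "vulnerabilities and suggest the minimal safe upgrade:\n\n"
        + dep_block
    )


def triage_dependencies(dependencies: list[str]) -> str:
    """Send the dependency list to the model and return its triage notes."""
    from openai import OpenAI  # pip install openai; key read from OPENAI_API_KEY

    client = OpenAI()
    response = client.responses.create(
        model="gpt-5.3-codex",  # assumed model identifier, per the announcement
        input=build_triage_prompt(dependencies),
    )
    return response.output_text
```

The point of the sketch is the shape of the workflow, not the exact calls: the model's output should land in a human review queue, not be applied automatically.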
But it's not all upside. With greater capability comes greater risk: tools that make finding flaws easier can also make exploiting them easier if they fall into the wrong hands. That's why Trusted Access tries to balance defensive speed with security controls.
How access will work and what you can do
- Individual users can verify their identity at chatgpt.com/cyber to request access to cybersecurity-related capabilities.
- Companies can request trusted access for their teams through their OpenAI representative.
- Researchers and security teams who need more capacity or flexibility can apply to participate in an invitation-only program.
Important: even those who get access must follow the Usage Policies and Terms of Service. OpenAI will also use automated classifiers to detect suspicious signals, and it has trained the model to refuse clearly malicious requests, though those mitigations may affect well-meaning users while they are being tuned.
Risks, mitigations, and what changes for defenders
Tools change the pace, but not the responsibility.
Mitigations include training the model to reject harmful requests and automated detection systems. Still, there will be friction: legitimate requests like “find vulnerabilities in my code” can be flagged as suspicious. Trusted Access aims to reduce that friction for verified defenders, not to remove protection entirely.
For security teams this is a practical opportunity: faster tooling for finding and fixing flaws, shorter response times, and a higher defensive baseline across the ecosystem. For small organizations, it can mean access to capabilities that used to be available only to large labs.
Incentives and support for defense
OpenAI commits $10 million in API credits through its Cybersecurity Grant Program. They seek teams with a proven track record of identifying and remediating vulnerabilities in open source software and critical infrastructure. It's a clear signal: they want to fund real defensive work and speed adoption of these tools by the community that protects.
What to expect and how to prepare
If you work in security:
- Get ready for identity verification processes and to justify legitimate use.
- Document workflows and audits that show your goal is defensive.
- Consider how to integrate model outputs into human review processes to minimize false positives and avoid destructive automated actions.
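One way to keep a human in the loop is to stage model suggestions in an explicit approval queue, so nothing the model proposes is ever applied automatically. A minimal, illustrative sketch (the class and field names are hypothetical, not part of any OpenAI tooling):

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    """A model-proposed fix awaiting human review."""
    target: str          # file or component the suggestion touches
    patch: str           # human-readable description of the proposed change
    approved: bool = False


class ReviewQueue:
    """Holds model suggestions; only explicitly approved ones can be applied."""

    def __init__(self) -> None:
        self._pending: list[Suggestion] = []

    def submit(self, suggestion: Suggestion) -> None:
        self._pending.append(suggestion)

    def approve(self, index: int) -> Suggestion:
        # A human reviewer marks a single suggestion as approved.
        suggestion = self._pending[index]
        suggestion.approved = True
        return suggestion

    def apply_approved(self) -> list[str]:
        # Only approved suggestions ever reach an "apply" step;
        # unreviewed ones stay inert.
        return [s.target for s in self._pending if s.approved]
```

The design choice worth copying is the default: a suggestion that no one has looked at cannot reach the apply step, which limits both false positives and destructive automated actions.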
If you are responsible for products or development:
- Use the opportunity to improve dependency hygiene and vulnerability response.
- Consider applying for the credit program if you manage critical components or open source projects.
Final reflection
This announcement isn't just new tech: it's an attempt at practical governance. OpenAI prioritizes that the most powerful capabilities go first to those who defend, and backs that with controls and funding. Will it be perfect? No. Is it a necessary step so defense doesn't fall behind as attack capabilities advance? Yes.
