ChatGPT launches Lockdown Mode and Elevated Risk labels | Keryc
OpenAI announces Lockdown Mode and the standardization of 'Elevated Risk' labels in ChatGPT and related products. Why does it matter? Because as AIs connect more to the web and your apps, new risks appear — the most serious today is prompt injection — and these tools aim to give you more control and visibility.
What is Lockdown Mode
Lockdown Mode is an optional, advanced setting designed for a small group of high-risk users: executives, security teams, or organizations with very sensitive data. Do you need it? Probably not if you're a typical user, but it gives stronger protections for those who do.
In practice, Lockdown Mode deterministically limits how ChatGPT interacts with external systems. That means some tools and capabilities are turned off when they could allow an attacker, using techniques like prompt injection, to exfiltrate sensitive data outside the safe environment.
For example, web browsing under uses only cached content. No live network requests are made outside OpenAI's controlled network. That restriction reduces the chance that confidential information gets sent to an attacker via browsing.
Lockdown Mode
Controls for companies and admins
Lockdown Mode sits on top of existing enterprise protections and is available in plans like ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare and ChatGPT for Teachers. Admins can enable it from Workspace Settings by creating a new role.
Admins also keep granular control over apps: they can decide which applications and which actions within those apps remain allowed even in Lockdown Mode. For auditing and oversight, the Compliance API Logs Platform provides detailed visibility into app usage, data shared and connected sources.
OpenAI says it plans to bring Lockdown Mode to consumers in the coming months, but for now it's aimed at environments with strict security requirements.
What are 'Elevated Risk' labels
To clarify when a capability introduces extra risk, OpenAI standardizes a label called 'Elevated Risk' in ChatGPT, ChatGPT Atlas and Codex. The idea is simple: when you see that label, the feature may increase the attack surface and deserves a conscious evaluation.
A concrete example: in Codex you can grant network access so the assistant can look up documentation on the web. That settings screen will now show the 'Elevated Risk' label along with an explanation of what changes, which risks are introduced and when it makes sense to enable it.
OpenAI also notes it will remove the label once security improvements make the feature safe for general use.
Why now and what is prompt injection?
Prompt injection happens when a third party manipulates the instructions the AI receives to make it reveal information or perform unwanted actions. Imagine someone inserting malicious code inside a document the AI processes and tricking it into sharing secrets. Sounds like sci‑fi, right? But it's already happening in real contexts.
As integrations to the web and apps increase, useful capabilities also open new attack vectors. That's why OpenAI combines several defenses: sandboxing, protections against URL-based exfiltration, monitoring and enterprise controls like roles and logs. Lockdown Mode and the labels are an extra layer on top of that work.
Practical recommendations
If you're a standard user: you probably don't need Lockdown Mode. Stick to good habits: don't share sensitive secrets in conversations and review app permissions.
If you're an admin of a workspace with sensitive data: consider evaluating Lockdown Mode, create specific roles and use the Compliance API Logs Platform for auditing.
For developers using Codex or other integrations: check what actions you ask the AI to perform on the network. If you see 'Elevated Risk', decide whether the benefit outweighs the risk.
Brief reflection
This isn't just new tech: it's a practical response to real threats. The good news is you now have clearer options to decide how much risk you accept and how you manage it. How connected do you want your assistant to be to the outside world? That's both a technical and a trust decision, and now it's easier to make it informed.