Every week, 800 million people use ChatGPT to think, learn, create, and work through deeply personal parts of their lives. Can you imagine someone demanding mass access to those private conversations? That's exactly what's happening now: The New York Times is demanding that OpenAI hand over 20 million users' private ChatGPT conversations.
What The New York Times is asking for and why it matters
- The request is for OpenAI to hand over 20 million randomly selected conversations between December 2022 and November 2024.
- The Times' justification: to look for examples of people trying to evade its paywall. Does it make sense to ask for millions of chats to find that? OpenAI says no.
- This isn't the first such demand: the Times originally asked for 1.4 billion conversations, along with other measures such as preventing users from deleting their chats. OpenAI fought those demands and partially prevailed.
What OpenAI is doing to protect you
OpenAI has taken several actions and is litigating to limit the scope of the request.
- They offered lower-risk, privacy-protective alternatives, such as targeted searches within the sample or aggregated reports on how ChatGPT was used. Those proposals were rejected.
- They're de-identifying the data: personally identifiable information, passwords, and other sensitive details are removed before any legally required access (a minimal sketch of the idea follows this list).
- Data covered by the order is kept in a separate, secured system under legal hold. Only a small, audited legal and security team can access it, and only when necessary.
- OpenAI says it will continue exploring all legal options to protect user data.
- Looking ahead, they're accelerating security features such as client-side encryption, which would make messages unreadable even to OpenAI itself (also sketched below), and automated systems to detect serious abuse.
These measures were described by Dane Stuckey, OpenAI's Chief Information Security Officer.
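To make the de-identification step concrete, here is a minimal sketch of pattern-based redaction in Python. This is not OpenAI's actual pipeline, which hasn't been published; the patterns, placeholder format, and `de_identify` function are illustrative assumptions, and production de-identification involves far more than regular expressions.

```python
import re

# Illustrative patterns only; a real pipeline would combine many techniques
# (named-entity recognition, context-aware rules, human review, audits).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def de_identify(text: str) -> str:
    """Replace each match of a sensitive pattern with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label.upper()}]", text)
    return text

print(de_identify("Reach me at jane.doe@example.com or +1 (555) 012-3456."))
# -> Reach me at [REDACTED:EMAIL] or [REDACTED:PHONE].
```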
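The client-side encryption mentioned above is worth illustrating too. The core idea is that messages are encrypted on the user's device with a key the server never sees, so the operator stores only ciphertext. OpenAI hasn't published a design, so this is a generic sketch using the Python `cryptography` package; the key handling is the essential part, and a real system would also have to solve key storage, rotation, and recovery.

```python
from cryptography.fernet import Fernet

# The key is generated and kept on the user's device; it never leaves it.
client_key = Fernet.generate_key()
cipher = Fernet(client_key)

message = "a private conversation"
ciphertext = cipher.encrypt(message.encode())  # what the server would store

# Without client_key, the stored ciphertext is opaque to the operator.
assert cipher.decrypt(ciphertext).decode() == message
```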
Who does this actually affect?
- Most likely only consumer ChatGPT users with conversations in the sampled window (December 2022 to November 2024); conversations outside that range aren't included.
- It does not affect ChatGPT Enterprise, ChatGPT Edu, ChatGPT Business (formerly Team), or API users.
- Still, any court-ordered access carries risk: the Times' lawyers and hired technical consultants could see the data, even though they're legally obligated to keep it private.
Risks and limits of the protections
- De-identification reduces risk but doesn't eliminate it: redaction can miss things, and supposedly anonymized text can sometimes be re-identified from context.
- Legal obligations can force compliance, even if OpenAI keeps appealing. In other words, relying solely on legal processes may not be enough for people handling highly sensitive information.
- OpenAI says it will maintain extra measures and limit exposure, but final control may depend on the court's decision.
What you can do now
- Review and delete sensitive conversations in your account if you don't need them. OpenAI has defended users' right to delete chats in the past.
- Avoid sharing passwords, bank details, or extremely personal information with online assistants (a simple pre-send check is sketched after this list).
- If you handle critical information, consider enterprise plans or tools with encryption and stricter controls.
- Watch for OpenAI's updates on new protections and for the court's decisions.
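For the "don't share secrets" advice, a lightweight habit is to screen a draft before sending it to any assistant. Below is a small, hypothetical Python check; the patterns and the `warn_before_sending` helper are illustrative assumptions, not an exhaustive safeguard.

```python
import re

# Illustrative patterns; extend them for the data you actually handle.
SENSITIVE = {
    "credit card":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password-like": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    "IBAN":          re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def warn_before_sending(draft: str) -> list[str]:
    """Return the names of any sensitive patterns found in the draft."""
    return [name for name, pat in SENSITIVE.items() if pat.search(draft)]

hits = warn_before_sending("my password: hunter2, card 4111 1111 1111 1111")
if hits:
    print("Consider removing before sending:", ", ".join(hits))
# -> Consider removing before sending: credit card, password-like
```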
The dispute highlights a bigger question: how do we balance journalistic investigation and the right to privacy in the age of AI? It's not just a legal issue but a practical concern for millions who use digital assistants every day. Technology promises a lot of utility, but its real value depends on whether we can trust that our private conversations stay private.
