OpenAI drives the next phase of enterprise AI | Keryc
I just read OpenAI's first 90 days leadership report and something feels different: companies aren't experimenting with AI anymore, they're putting it to work. Have you ever seen a technology go from curiosity to necessity in a matter of months? That's exactly the adoption OpenAI is seeing across sectors.
What OpenAI says and why it matters
OpenAI sums up this new moment with numbers and concrete examples: the enterprise unit already accounts for over 40% of its revenue and is on track to match the consumer business by the end of 2026. Tools like Codex reached 3 million weekly users, their APIs process more than 15 billion tokens per minute, and GPT-5.4 powers agentic workflows with record levels of interaction.
Those aren't just pretty figures; they signal that AI is moving from one-off assistants to intelligence embedded in processes. What questions are companies asking now? Basically two:
How do you deploy the most capable AI across the whole company, not just in isolated copilots?
How do you make AI part of everyday work so it multiplies human productivity?
Those questions will decide who competes and who falls behind.
OpenAI's proposal: Frontier and the AI superapp
OpenAI outlines two central pieces in its enterprise strategy:
Frontier as the intelligence layer that governs agents across the company, connecting them to systems and data.
An AI superapp, a unified experience where employees interact with agents to complete tasks using tools they already know.
Why does it matter? Because many companies are tired of point solutions that don't talk to each other and create chaos. Frontier aims for agents to move across systems with context, memory, and the right permissions.
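To make the "right permissions" idea concrete, here is a minimal sketch of a policy layer that gates what each agent may do before it touches an enterprise system. Everything here (agent IDs, permission strings, the `call_tool` helper) is invented for illustration; it is not OpenAI's or Frontier's actual API.

```python
# Illustrative only: a tiny permission check in front of agent tool calls.
# Each agent carries a set of scoped permissions; the gate rejects any
# action outside that set before the tool is ever invoked.

PERMISSIONS = {
    "sales-agent": {"crm:read", "crm:write"},
    "support-agent": {"crm:read", "tickets:write"},
}

def call_tool(agent_id: str, action: str) -> str:
    """Run an action only if the agent's policy allows it."""
    allowed = PERMISSIONS.get(agent_id, set())
    if action not in allowed:
        raise PermissionError(f"{agent_id} may not perform {action}")
    return f"{agent_id} performed {action}"

print(call_tool("support-agent", "crm:read"))  # allowed by policy
# call_tool("support-agent", "crm:write")      # would raise PermissionError
```

The point of the sketch is the shape, not the code: governance lives in one layer that every agent passes through, instead of being re-implemented per tool.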
From research to deployment: partnerships and operating environment
OpenAI presents itself not just as a model creator, but as a deployment company. It has turned learnings from integrating agents with large firms into a scalable foundation. To do that it works with consultancies and infrastructure providers: McKinsey, BCG, Accenture, Capgemini, AWS, Databricks, and Snowflake.
A practical example is the Stateful Runtime Environment they build with AWS. That layer lets agents remember prior tasks and retain context when operating with enterprise tools and data—essential for complex cases that go beyond single answers.
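As a rough mental model of what a stateful runtime does, here is a minimal sketch in which the runtime persists each agent's memory between tasks, so a later task can build on an earlier one. All names (`StatefulRuntime`, `run_task`, the JSON file store) are hypothetical stand-ins, not the actual OpenAI/AWS implementation.

```python
# Illustrative sketch: a runtime that saves per-agent context to disk so
# memory survives between tasks. A real system would pass this context
# into the model call; here we just count prior tasks.

import json
from pathlib import Path

class StatefulRuntime:
    """Persists per-agent context so tasks can build on prior work."""

    def __init__(self, store_dir: str = "agent_state"):
        self.store = Path(store_dir)
        self.store.mkdir(exist_ok=True)

    def _path(self, agent_id: str) -> Path:
        return self.store / f"{agent_id}.json"

    def load_context(self, agent_id: str) -> list:
        path = self._path(agent_id)
        return json.loads(path.read_text()) if path.exists() else []

    def run_task(self, agent_id: str, task: str) -> str:
        context = self.load_context(agent_id)
        # Stand-in for a model call that would receive `context` as memory.
        result = f"completed '{task}' with {len(context)} prior tasks in memory"
        context.append({"task": task, "result": result})
        self._path(agent_id).write_text(json.dumps(context))
        return result

runtime = StatefulRuntime()
runtime.run_task("sales-agent", "research account")
print(runtime.run_task("sales-agent", "draft outreach email"))
# The second task sees the first one in its memory.
```

That persistence is what separates a one-shot assistant from an agent that can carry a multi-step case across days and systems.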
Concrete cases and team adoption
This isn't just theory: OpenAI names new clients like Goldman Sachs, Philips, and State Farm, as well as expanded deployments with companies like Cursor, DoorDash, Thermo Fisher, and LY Corporation. DoorDash, for example, extended ChatGPT Enterprise to brands like Deliveroo and Wolt and uses Codex to improve code reviews and developer efficiency.
There's also a cultural shift: teams are moving from using AI for task support to managing teams of agents that run complete processes. Sales, engineering, and operations are already designing flows where an agent researches, qualifies, contacts, and updates the CRM without manual intervention at each step.
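The flow described above can be sketched as a simple pipeline in which a coordinator chains agents end to end. Each "agent" here is a plain function standing in for a model-backed worker, and every name and rule (the 50-employee threshold, the in-memory CRM) is an assumption made up for illustration.

```python
# Illustrative sketch of an agent pipeline: research -> qualify -> contact
# -> update CRM, with no human driving each individual step.

def research(lead: dict) -> dict:
    lead["notes"] = f"found public info on {lead['company']}"
    return lead

def qualify(lead: dict) -> dict:
    # Hypothetical rule: only companies with 50+ employees qualify.
    lead["qualified"] = lead.get("employees", 0) >= 50
    return lead

def contact(lead: dict) -> dict:
    lead["status"] = "outreach sent" if lead["qualified"] else "skipped"
    return lead

def update_crm(lead: dict, crm: list) -> dict:
    crm.append(lead)  # a real integration would call the CRM's API here
    return lead

def run_pipeline(lead: dict, crm: list) -> dict:
    for step in (research, qualify, contact):
        lead = step(lead)
    return update_crm(lead, crm)

crm: list = []
result = run_pipeline({"company": "Acme", "employees": 120}, crm)
print(result["status"])  # → outreach sent
```

The human role shifts from executing each step to designing the pipeline, setting the qualification rules, and reviewing the records the agents write back.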
Practical advantage: bridging personal and professional use
One lever OpenAI highlights is ChatGPT's huge user base (900 million weekly users). That reduces friction: employees who already know the interface make enterprise adoption easier. It's not just about investing in tech: it's about how that tech becomes part of the daily routine.
"Companies want a single operational layer for their business: agents with context, connected to systems and governed by the right controls."
That sentence sums up the claim: fewer isolated solutions, more integrated and governed intelligence.
What does this mean for your company or team?
If you're in leadership: you need a clear vision to move AI from pilots to scaled operations. Testing isn't enough; integration is.
If you're a manager or practitioner: think about which repetitive tasks agents could take on today, and what data and permissions those agents would need.
If you work in product or infrastructure: interoperability and contextual memory will be critical differentiators.
This isn't magic or a distant future. It's reorganizing processes, governance, and culture so AI stops being an isolated tool and becomes the infrastructure of work.
Looking ahead
OpenAI positions itself as a player that wants to cover the full stack: models, infrastructure, and user experiences. If adoption keeps this pace, we'll see fewer experiments, more real impact on productivity, and new ways of working. The key for companies will be to move decisively, prioritize integration, and build trust around data and controls.
The final question for you: are you ready for AI not just to help you, but to work inside your daily processes? If the answer is no, it's worth starting to map what you can delegate today.