On September 23, 2025, OpenAI announced the selection of five new Stargate sites in the United States, an expansion that accelerates its AI infrastructure plan and puts it ahead of its original schedule. (openai.com)
What was announced
OpenAI, Oracle, and SoftBank announced five new data center sites under the Stargate umbrella. Combined with the flagship campus in Abilene, Texas, and projects with CoreWeave, Stargate reaches nearly 7 gigawatts of planned capacity and more than $400 billion in projected investment over three years. That puts the initiative on track to meet its full $500 billion, 10-gigawatt commitment by the end of 2025. (openai.com)
Sounds like a lot of money and power? Yes. Why does it matter to you? Because the most powerful AI depends on physical scale: more racks, more energy, more large-scale training runs.
Where the new sites will be
- Shackelford County, Texas (developed with Oracle).
- Doña Ana County, New Mexico (developed with Oracle).
- A site in the Midwest, also developed with Oracle, to be announced soon.
- Lordstown, Ohio (SoftBank development, construction already underway).
- Milam County, Texas (in partnership with SoftBank’s SB Energy).
The three sites being developed with Oracle could add more than 5.5 gigawatts, and the two SoftBank sites can scale up to an additional 1.5 gigawatts. There is also a potential 600-megawatt expansion near the Abilene campus. (openai.com)
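The capacity figures above can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, using only the round numbers quoted in the announcement (planned maximums, not current ratings):

```python
# Planned-capacity figures as quoted in the announcement, in gigawatts.
# These are round, best-case numbers, not exact per-site ratings.
oracle_sites_gw = 5.5       # three new Oracle-developed sites: "more than 5.5 GW"
softbank_sites_gw = 1.5     # two SoftBank sites: "up to an additional 1.5 GW"
abilene_expansion_gw = 0.6  # potential 600 MW expansion near the Abilene campus

new_sites_total = oracle_sites_gw + softbank_sites_gw
print(f"New sites alone: ~{new_sites_total} GW of planned capacity")
print(f"With the potential Abilene expansion: up to {new_sites_total + abilene_expansion_gw} GW")
```

The new-site figures alone sum to about 7 GW, which is consistent with the "nearly 7 gigawatts" total quoted above for the program as a whole.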
Impact on jobs, energy and the supply chain
OpenAI estimates these projects will create more than 25,000 on-site jobs and tens of thousands of indirect jobs across the U.S. (openai.com)
Oracle is already delivering the first NVIDIA GB200 racks to the Abilene campus, and OpenAI has started running initial training and inference workloads on that capacity. SB Energy will supply ready-to-energize power infrastructure at some locations, speeding up construction. Those details explain why deployment can move so quickly.
Think of it like building a highway for AI: you need the pavement, the power lines, and the crews — fast — before traffic can really pick up.
What it means for developers, companies and communities
For developers and startups, it can mean access to more compute and, potentially, more competitive pricing as scale lowers costs. For large companies, it means fewer bottlenecks when training massive models. For local communities, there are construction contracts and jobs, but also questions about energy use and land planning.
Is it all good news? Not necessarily. More physical capacity tends to concentrate resources among the biggest players, and the energy footprint becomes a central issue among governments, companies, and residents.
Final reflection
The message is clear: OpenAI, Oracle and SoftBank are betting that the future of AI is built with massive, fast infrastructure. That speeds up technical possibilities and raises public questions about jobs, energy and who benefits.
Will AI become more accessible because of this scale, or will it concentrate even more technological power? That’s the conversation that moves out of rack rooms and into the streets and town halls where these centers are built.