Anthropic expands compute with Google and Broadcom
Anthropic announced a new partnership with Google and Broadcom to deploy multiple gigawatts of next-generation TPU capacity that will start coming online from 2027. This move significantly expands the infrastructure that powers Claude and responds to demand that has surged in recent months.
What Anthropic announced
The company signed a deal to secure TPU capacity measured in gigawatts, mostly located in the United States. According to Anthropic, this is its largest compute commitment to date and will support the explosive growth of customers using Claude.
“We are making our most significant compute commitment to date to keep pace with our unprecedented growth,” said Krishna Rao, Anthropic’s chief financial officer.
The new resources will become available starting in 2027 and complement earlier commitments, including the $50 billion investment in U.S. compute infrastructure announced in November 2025.
Why this matters now
Demand for Claude accelerated in 2026: the company reports a revenue run-rate exceeding $30 billion, up from about $9 billion at the end of 2025. In addition, the number of enterprise customers spending more than $1 million per year grew from 500 to over 1,000 in under two months.
In practical terms, more compute capacity means handling larger workloads and more simultaneous customers without degrading the service. Can you imagine a customer-facing AI that suddenly can’t answer because it’s saturated? This investment aims to prevent exactly that.
What changes for customers and the market
Anthropic emphasizes that it trains and runs Claude on a variety of hardware: AWS Trainium, Google TPUs and NVIDIA GPUs. That diversity lets the company assign each kind of job to the chip that performs best, with clear benefits:
Better performance for specific tasks.
Greater operational resilience against outages or bottlenecks at a single provider.
The ability to optimize costs depending on the type of inference or training.
Claude remains the only frontier model available across the three major clouds: Amazon Web Services (Bedrock), Google Cloud (Vertex AI) and Microsoft Azure (Foundry). Amazon continues as Anthropic's primary cloud provider and training partner, and the two companies are also collaborating on Project Rainier.
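That cross-cloud availability is something customers can build on directly. As a minimal sketch, assuming the official anthropic Python SDK (with its Bedrock and Vertex extras installed), the snippet below falls back from one cloud to another when a request fails. The model IDs, region names, and project ID are illustrative placeholders, not a current catalog:

```python
# Minimal multi-cloud fallback sketch using the anthropic Python SDK.
# Model IDs, regions, and the project ID below are illustrative placeholders.
from anthropic import AnthropicBedrock, AnthropicVertex

def ask_claude(prompt: str) -> str:
    # Ordered (client, platform-specific model ID) pairs to try in turn.
    providers = [
        (AnthropicBedrock(aws_region="us-east-1"),
         "bedrock-claude-model-id"),                  # placeholder Bedrock ID
        (AnthropicVertex(project_id="my-gcp-project", region="us-east5"),
         "vertex-claude-model-id"),                   # placeholder Vertex ID
    ]
    last_error = None
    for client, model in providers:
        try:
            response = client.messages.create(
                model=model,
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.content[0].text
        except Exception as exc:  # in production, catch provider-specific errors
            last_error = exc      # fall through to the next cloud
    raise RuntimeError("All configured providers failed") from last_error
```

The point is the pattern, not these specific clients: keep the provider choice behind one function so an outage at a single cloud degrades into a retry instead of an incident.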
For the market, this move reinforces competition among big players: more accessible compute capacity can translate into greater supply, improved latency, and potentially pressure on prices and commercial terms.
Risks and open questions
There are questions worth watching: how will gigawatt-scale deployments affect energy consumption? What guarantees will exist around chip supply and supply-chain bottlenecks? And since most of the capacity will sit in the U.S., there are regulatory and data-sovereignty implications that some foreign companies will need to assess.
The key date is 2027, which leaves time for competitive shifts, regulatory changes, or supply problems to alter the landscape.
What you can do if you use AI in your company
If you lead technology or product decisions, here are concrete actions you can take:
Review multi-cloud strategies to avoid depending on a single provider.
Evaluate workloads with tests on different hardware types to see real costs and latencies (see the benchmarking sketch after this list).
Negotiate capacity and continuity clauses in contracts with AI providers.
Include projections for energy consumption and regulatory compliance in your planning.
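For the benchmarking point, a small measurement harness is often enough to start. This is a hedged sketch: benchmark, the call_model wrappers, and the per-call prices are all hypothetical stand-ins you would replace with your real clients and negotiated rates:

```python
# Hypothetical harness for comparing the same workload across back ends.
# Callables and per-call prices are stand-ins, not real clients or rates.
import statistics
import time
from typing import Callable

def benchmark(name: str, call_model: Callable[[str], str],
              prompts: list[str], usd_per_call: float) -> None:
    latencies = []
    for prompt in prompts:
        start = time.perf_counter()
        call_model(prompt)  # the request under test
        latencies.append(time.perf_counter() - start)
    print(f"{name}: p50={statistics.median(latencies):.2f}s "
          f"max={max(latencies):.2f}s "
          f"est. cost=${usd_per_call * len(prompts):.4f}")

# Usage (with your own client wrappers in place of these names):
# benchmark("bedrock", bedrock_call, prompts, usd_per_call=0.003)
# benchmark("vertex", vertex_call, prompts, usd_per_call=0.003)
```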
If you’re a developer or product manager, this is also a signal: infrastructure will keep changing, and it’s worth designing applications to take advantage of the flexibility of different chips.
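One concrete way to build in that flexibility is to keep the routing decision in data rather than code. A hypothetical sketch, with purely illustrative back-end and model names:

```python
# Hypothetical task-to-back-end routing table; names are illustrative only.
# Swapping hardware or providers then means editing data, not application code.
ROUTES = {
    "classify":  {"backend": "bedrock", "model": "small-fast-model"},
    "summarize": {"backend": "vertex",  "model": "mid-size-model"},
    "generate":  {"backend": "bedrock", "model": "frontier-model"},
}

def route(task_type: str) -> dict:
    # Unknown task types fall back to the most capable option.
    return ROUTES.get(task_type, ROUTES["generate"])
```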
Anthropic is betting demand won’t just hold steady but will keep growing. For users and companies, the expansion offers more capacity and options, but it also requires more attention to architecture, costs, and compliance. Is your AI strategy ready to take advantage of that capacity when it arrives in 2027?