2028: Two scenarios for global AI leadership | Keryc
Anthropic published a technical essay that asks a worrying question: who will be in charge of artificial intelligence by 2028, democracies or authoritarian regimes? Here I explain, without unnecessary jargon, what's at stake, why compute is the key lever, and how two very different futures depend on policy and technical choices we can make today.
Why compute is the decisive variable
Did you know the most important ingredient for frontier AI isn’t just talent or data, but the chips used to train models? Compute means both the power to train new models and the capacity to serve them in production (inference). Without enough compute, brilliant algorithms and talented teams don’t get very far.
Today that power mostly comes from companies and supply chains in democratic countries: NVIDIA, TSMC, ASML, Micron, Samsung and the like. Technologies such as EUV lithography, high-bandwidth memory (HBM) manufacturing and semiconductor manufacturing equipment (SME) are hard to replicate quickly. Plus, gains follow economies of scale: more compute predictably improves performance and lets AI accelerate its own R&D.
How China has been closing the gap (and why it’s not just talent)
China has top-tier talent, enormous data, and cheap energy. So why doesn’t it already dominate? Because it lacks cutting-edge compute. Still, it’s been narrowing the distance through two main routes: evasive access to chips and distillation attacks.
Evasive access: illegally imported chips, advanced-chip servers diverted to China, and remote use of data centers in third countries to train models with hardware legally exported to those sites.
Distillation attacks: creating thousands of fake accounts to query frontier models, collect their outputs, and train replicas. It’s a way to appropriate capabilities without buying the chips or doing the original R&D.
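To make the distillation idea concrete, here is a toy sketch of my own (an illustration, not anything from the essay): a "teacher" function stands in for a frontier model's API, an automated client harvests thousands of input/output pairs, and a cheap "student" fitted on those pairs ends up imitating the teacher without ever seeing its internals.

```python
import random

def teacher(x):
    # Hypothetical stand-in for a frontier model's API: the attacker
    # sees only inputs and outputs, never the weights or the R&D.
    return 3.0 * x + 1.0

random.seed(0)

# Step 1: mass querying — thousands of automated requests.
pairs = [(x := random.uniform(-5, 5), teacher(x)) for _ in range(5000)]

# Step 2: fit a cheap "student" on the harvested pairs
# (here, one-variable least squares plays the role of the replica model).
n = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs)
sxy = sum(x * y for x, y in pairs)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Step 3: the student now mimics the teacher's behavior for free.
print(f"student: y = {slope:.3f}x + {intercept:.3f}")  # recovers ≈ 3x + 1
```

Real distillation targets a neural model rather than a line, but the economics are the same: the cost of querying is tiny compared with the cost of the original training run, which is why rate limits and account vetting matter.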
The essay names cases and actors: from criminal charges over server diversions to reports of major companies training on chips hosted outside mainland China. And the arrival of models like Mythos Preview showed how quickly those capabilities can be applied, for example finding security bugs much faster than before.
Two scenarios for 2028
Scenario 1: Democracies with a wide lead (12–24 months)
If allies act to close holes in export controls and curb distillation, democracies can maintain, and possibly expand, a 12-to-24-month lead at the frontier. That margin matters: it lets norms, standards and safety practices be shaped by societies that protect rights and freedoms.
In this world, companies in democratic countries dominate the global AI supply, collaborate on safety standards, and responsible AI adoption drives breakthroughs in health, science and cybersecurity. Technical leadership also strengthens diplomatic influence to negotiate global rules.
Scenario 2: Close competition and authoritarian risk
If nothing is done, the mechanisms allowing China to catch up—chip smuggling, offshore data center access, and distillation—could leave PRC firms only months from parity in model intelligence. Even if their chips aren’t equivalent, mass adoption and state integration (AI+) can yield strategic advantages.
In that scenario, near-frontier models get used for automated surveillance, cyberoffense and social control, and global norms end up shaped by authoritarian states. The risk isn’t only technological: it’s geopolitical and ethical.
What Anthropic recommends, and why it works
Anthropic proposes three complementary fronts:
Close export gaps: regulate not only chip sales but also remote access to data centers in third countries and the whole logistics chain (semiconductor manufacturing equipment, maintenance and servicing). Technically, this means hardware traceability controls and international cooperation on inspections and enforcement.
Defend innovations: make distillation harder through legal action, improve detection and intelligence sharing between labs and the state, and deploy technical countermeasures like rate limits, mass-scraping detection and API traceability. This helps prevent capabilities from being transferred for free to actors that don’t follow the rules.
Promote democratic adoption of AI: export trustworthy infrastructure and models with safety and privacy standards, so global markets choose solutions aligned with civil rights rather than repression.
These measures work together: cutting off illicit access to compute caps how fast rivals can accelerate; stopping distillation protects IP and competitive advantage; and exporting trust steers global adoption toward democracies.
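To give a flavor of what one of those technical countermeasures looks like in code, here is a minimal sketch of a per-account sliding-window rate limit, the simplest brake on mass querying. The thresholds, account name, and structure are illustrative assumptions on my part, not any lab's actual policy.

```python
from collections import defaultdict, deque

# Illustrative thresholds — real services tune these per customer tier.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

# account_id -> timestamps of that account's recent requests
request_log = defaultdict(deque)

def allow_request(account_id, now):
    """Return True if the request fits within the sliding-window limit."""
    log = request_log[account_id]
    # Drop timestamps that have fallen outside the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) >= MAX_REQUESTS_PER_WINDOW:
        # Throttled: sustained bursts like this are one coarse signal
        # of automated output harvesting (distillation).
        return False
    log.append(now)
    return True

# Simulated burst: an automated client fires 150 requests in 1.5 seconds.
allowed = sum(allow_request("acct-42", now=t * 0.01) for t in range(150))
print(allowed)  # → 100: only the first 100 get through
```

A real deployment would layer this with account vetting, cross-account correlation to catch the "thousands of fake accounts" pattern, and shared intelligence between labs, but the sliding window is the building block the essay's "rate limits" refer to.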
Technical and governance risks you can’t ignore
Extreme dual use: frontier models can accelerate R&D in semiconductors, biotech and cyberweapons. One AI advance speeds up the whole tech stack.
Open weights and insufficient evaluation: publishing open weights without robust evaluations (including CBRN risks) makes malicious use easier. The paper notes that few labs in China publish safety evaluations comparable to leading democracies’ practices.
Neck-and-neck race: when both sides are close, pressure to ship reduces incentives for pre-deployment safety. That’s the worst mix for responsible governance.
What the technical community and you as a professional or entrepreneur can do
If you work at an AI company: prioritize abuse detection (monitoring for anomalous use), harden APIs and collaborate on open safety standards with peers.
If you’re a regulator or policymaker: focus on closing evasion vectors (logistics, remote access), fund enforcement, and create legal frameworks to deter distillation.
If you’re an investor or entrepreneur: bet on reliable infra and on models that include risk assessments across the whole value chain.
Time is short: we’re in a window where policy and technical decisions can compress or widen the democratic advantage.
The remaining question is simple but urgent: will we protect the technological lead built by years of investment and collaboration, or let it erode through regulatory gaps and illicit practices? Acting now isn’t just geopolitics; it’s a choice about how AI will shape our lives.