A year ago came what many started calling the "DeepSeek moment": a turning point that mobilized not just models but an entire chain of engineering, infrastructure, and open collaboration. What has happened since, and why does it matter if you work in research, product, or tech policy?
A map of actors and strategies
China didn’t just produce more models. It changed the way people work. Big companies, startups and labs moved from thinking about isolated models to designing reusable pieces inside engineering systems. The result? An ecosystem where open source is the default choice for design and deployment.
DeepSeek consolidated its position as the most-watched actor in open communities. Its R1 release was the catalyst that let other organizations deploy and validate at scale.
Alibaba pushed Qwen as a family of models: multiple sizes, modalities and constant updates. By mid-2025 Qwen led in derivatives: about 113k models using Qwen as a base and 200k repositories tagging Qwen, far above Llama and DeepSeek.
Tencent evolved from integrating mature models to releasing its own capabilities under the Tencent Hunyuan (Tencent HY) brand, focusing on vision, video and 3D.
ByteDance chose a selective approach: opening high-value components while keeping product-level advantages in-house. Its app ecosystem reached massive scale: Doubao, for example, passed 100 million daily active users (DAU) by the end of 2025.
Baidu, after prioritizing closed models, re-entered the open arena with releases like Ernie 4.5, and revitalized PaddlePaddle alongside its Kunlunxin chip unit, which announced its IPO on January 1, 2026.
Startups such as Moonshot, Z.ai and MiniMax accelerated quickly: Kimi K2, GLM-4.5 and MiniMax M2 entered open model rankings and some announced IPOs during 2025.
Research and community tooling
Groups like BAAI and Shanghai AI Lab redirected efforts toward toolchains, evaluation and deployment: projects such as FlagOpen, OpenDataLab and OpenCompass strengthen the foundation, not just chase benchmark records. It’s a bet on resilience and long-term operability.
Infrastructure: compute, energy, and deployment
The change isn’t only software. The national "East Data, West Compute" strategy, with its 8 compute hubs and 10 data center clusters, laid the physical groundwork. Public estimates put total capacity at around 1,590 EFLOPS in 2025, with intelligent compute growing about 43 percent annually.
Efficiency improvements matter too: average power usage effectiveness (PUE) fell to roughly 1.46, meaning more efficient facilities and a lower cost per large-scale training run. Those numbers explain why deploying models at scale stopped being an option only for giants with unlimited resources.
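To make the PUE figure concrete, here is a minimal sketch of what a drop in PUE means in energy terms. The formula (total facility energy divided by IT equipment energy) is standard; the training-run energy figure and the 1.60 baseline are hypothetical numbers chosen for illustration.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy.

    1.0 would mean every kWh goes to compute; anything above is
    overhead (cooling, power conversion, lighting).
    """
    return total_facility_kwh / it_equipment_kwh

# Hypothetical training run drawing 1,000 MWh of pure IT energy.
it_energy = 1_000_000  # kWh

old_total = it_energy * 1.60  # facility total at an assumed PUE of 1.60
new_total = it_energy * 1.46  # facility total at the reported PUE of 1.46
saved = old_total - new_total

print(f"Overhead energy saved per run: {saved:,.0f} kWh")
```

At this (assumed) scale, shaving PUE from 1.60 to 1.46 removes 140 MWh of overhead per run, which is why fleet-wide efficiency shows up directly in cost per training run.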
What changed in technical terms
The novelty isn’t just more parameters or a new record. It’s the shift toward composite systems:
Models as reusable components that are assembled into pipelines and agents.
Coordination between models, chips, and frameworks: a concrete example is pairing models with Kunlunxin chips through PaddlePaddle.
More flexible training paths and localized deployment, designed for control, latency and compliance.
This means system design today assumes reusability, composability and auditability from the start. For engineers it changes priorities: less tuning for records, more integration and deployment testing.
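The "models as reusable components" idea can be sketched in a few lines. This is an illustrative pattern, not any specific framework: each stage is a named, auditable callable, and a pipeline is just an ordered composition that records which component produced what.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Stage:
    """One reusable component: a name (for auditing) plus a callable."""
    name: str
    fn: Callable[[str], str]

class Pipeline:
    def __init__(self, stages: List[Stage]):
        self.stages = stages
        self.last_trace: List[Tuple[str, str]] = []

    def run(self, text: str) -> str:
        trace = []  # audit trail: (component name, intermediate output)
        for stage in self.stages:
            text = stage.fn(text)
            trace.append((stage.name, text))
        self.last_trace = trace
        return text

# Stand-ins for real model components (e.g. a retriever, a generator).
pipe = Pipeline([
    Stage("normalize", str.lower),
    Stage("summarize", lambda t: t[:20]),  # placeholder for a model call
])
print(pipe.run("COMPOSITE SYSTEMS BEAT MONOLITHS"))
```

The point is the shape, not the stages: because each component is named and its intermediate output is retained, the system is composable (swap a stage) and auditable (inspect `last_trace`) by construction.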
Implications for developers and regulators
For you as a developer: opening artifacts means access to pieces you can integrate, audit and improve. The relevant metrics are no longer just perplexity or SOTA on a benchmark, but inference latency, cost per token, ease of composition and reproducibility.
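Those operational metrics are cheap to instrument. A minimal sketch, where `fake_model` is a placeholder for a real model call and the per-token price is a hypothetical figure, not any provider's rate:

```python
import time

PRICE_PER_1K_TOKENS = 0.002  # hypothetical USD rate, for illustration only

def fake_model(prompt: str) -> list:
    """Placeholder for an inference call; returns 'tokens'."""
    return prompt.split()

def measure(prompt: str):
    """Wrap a model call and report latency, token count, and cost."""
    start = time.perf_counter()
    tokens = fake_model(prompt)
    latency = time.perf_counter() - start
    cost = len(tokens) / 1000 * PRICE_PER_1K_TOKENS
    return latency, len(tokens), cost

latency, n, cost = measure("open models as reusable engineering components")
print(f"{n} tokens, {latency * 1000:.3f} ms, ${cost:.6f}")
```

In production you would wrap the real inference endpoint the same way and track these numbers per deployment, which is exactly the shift from benchmark scores to operational metrics described above.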
For policymakers: a national compute-and-energy infrastructure combined with an open ecosystem raises new questions about interoperability, export controls, and shared governance. The Chinese model shows that opening up is not the same as giving up control; it’s about reconfiguring where and how control is exercised.
Where is this going?
In 2025 China stopped chasing performance peaks and started building something that works at scale: training pipelines, distributed deployment, full stacks combining model, hardware, and platform. The near future seems oriented toward:
Greater integration of AI into industrial processes and autonomous agents.
More localized and controllable deployments for companies and governments.
Ecosystems where value comes from orchestration, not just the central model.
Open questions that remain critical: how will the global community collaborate with an increasingly self-sufficient Chinese ecosystem? What governance and auditing standards will we need to interoperate without friction?
In the end, the important thing is that open source stopped being a marginal option and became the default way to design AI systems in China. Is it magic? No — it’s engineering, infrastructure and community agreements that are already changing how artificial intelligence is built and delivered worldwide.