Microsoft introduces MOSAIC, a proposal to break what it calls the "networking wall" that limits the efficiency of today's AI clusters. The problem? Connections between GPUs consume a lot of power, have limited reach, or fail often, and that reduces the real utilization of the hardware.
What MOSAIC proposes and why it matters
MOSAIC is an optical interconnect architecture that inverts the usual recipe: instead of a few very fast lanes, it uses hundreds of parallel low-speed channels driven by microLEDs over imaging fibers. The idea sounds simple, doesn't it? But it resolves a real trade-off between reach, power, and reliability that today forces painful choices in data centers. (microsoft.com)
In practical terms, MOSAIC reaches distances comparable to current fibers (up to 50 meters), consumes up to 68% less power per cable, and promises reliability up to 100x better than conventional optical links. And it isn't just theory: the authors demonstrate a prototype with 100 channels at 2 Gbps each and explain how to scale to 800 Gbps and beyond.
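The scaling argument comes down to simple multiplication: aggregate bandwidth is just channel count times per-channel rate. A quick sketch with the figures quoted above (the 800 Gbps channel count is an illustrative assumption on my part, not a configuration from the paper):

```python
# Back-of-the-envelope math for MOSAIC's many-slow-channels design,
# using the numbers quoted in the article.

def aggregate_gbps(channels: int, per_channel_gbps: float) -> float:
    """Total link bandwidth from many parallel low-speed channels."""
    return channels * per_channel_gbps

# Prototype described in the article: 100 channels at 2 Gbps each.
prototype = aggregate_gbps(100, 2)
print(f"Prototype: {prototype:.0f} Gbps")  # 200 Gbps

# One hypothetical way to reach 800 Gbps: more channels at the same
# modest per-channel speed, rather than pushing each lane faster.
channels_for_target = 800 / 2
print(f"Channels for 800 Gbps at 2 Gbps each: {channels_for_target:.0f}")
```

The point of the design is visible in the second calculation: capacity grows by adding cheap, slow, reliable channels instead of straining each lane's speed, which is where power and reliability problems come from.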
