By replacing traditional electronic packet processing with direct, physical paths of light, optical circuit switching (OCS) fundamentally changes how data moves across the network. This shift to an all-optical core has already proven its value at hyperscale, with deployments delivering 40% power savings and one-fiftieth the downtime of traditional electronic switching. Yet while OCS removes the electrical switching bottleneck, it concentrates new complexities directly into the physical layer.
The AI scaling curve has quickly outpaced the first generation of OCS products. As training clusters expand to support pods with tens of thousands of GPUs, the physical layer must handle unprecedented demands on optical routing and density. Operators are publicly requesting switch matrices that far exceed the 300-port ceiling of most current vendor capabilities.
The question for data center architects is a practical one: can OCS platforms scale their port counts and manufacturing maturity fast enough to match the pace of AI infrastructure buildouts? Solving this requires treating the switch and the surrounding physical layer as a single, holistic platform.
The Four Attributes That Define Hyperscale Optical Circuit Switching Performance
As AI training clusters expand toward hundreds of thousands of accelerators, the optical circuit switches at the core of the network require a proportionally larger switching matrix. Port count dictates the network architecture, but four attributes determine whether a high-radix switch can actually perform at hyperscale.
The Radix Problem
The latest AI superpods require nearly 14,000 optical ports each, pushing demand for individual switches with more than 300 ports. To minimize network layers across pods spanning many tens of thousands of GPUs, hyperscale data center operators are actively requesting switch matrices with thousands of ports. Scaling to this level introduces a secondary physical challenge: balancing ultra-dense optical cabling against practical field serviceability and mean time to repair (MTTR).
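The relationship between switch radix and fabric size can be sketched with a toy two-tier folded-Clos model. The topology, the even split between downlinks and uplinks, and the port counts below are illustrative assumptions for the sake of arithmetic, not Molex or operator figures:

```python
import math

def fabric_switches(endpoints, radix):
    """Toy two-tier folded-Clos sizing: each leaf switch splits its
    radix evenly between downlinks (to endpoints) and uplinks (to
    spines), and spines use their full radix for leaf-facing ports."""
    down = radix // 2
    leaves = math.ceil(endpoints / down)
    uplinks = leaves * down
    spines = math.ceil(uplinks / radix)
    return leaves, spines, leaves + spines

# Hypothetical superpod needing ~14,000 optical ports
for radix in (300, 544):
    leaves, spines, total = fabric_switches(14_000, radix)
    print(f"radix {radix}: {leaves} leaves + {spines} spines = {total} switches")
```

Under these toy assumptions, moving from a 300-port radix to a 544-port radix cuts the switch count by roughly 45%, which is the practical payoff of "flatter" architectures with fewer boxes and fewer hops.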
Insertion Loss
Every decibel of loss matters when optical spans double in an OCS architecture. Standard far-reach (FR) optics offer limited link budgets of roughly 4 dB to 6 dB, depending on the module. Any internal loss from the switch eats directly into those tight margins, reducing both reach and the margin left for the optics. Maintaining these strict budgets at scale relies on embedded test and telemetry to continuously monitor performance and validate optical paths.
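The budget arithmetic is simple but unforgiving. The sketch below assumes illustrative values (a 6 dB FR-class budget, a 3 dB switch, 0.3 dB per connector interface, 0.35 dB/km single-mode fiber); actual figures vary by module and plant:

```python
def link_margin_db(budget_db, switch_loss_db, connector_losses_db,
                   fiber_km, fiber_db_per_km=0.35):
    """Remaining optical margin after subtracting every loss element
    in the path from the transceiver's link budget."""
    total_loss = switch_loss_db + sum(connector_losses_db) + fiber_km * fiber_db_per_km
    return budget_db - total_loss

# 6 dB budget, 3 dB switch, four 0.3 dB connectors, 500 m of fiber
margin = link_margin_db(6.0, 3.0, [0.3] * 4, 0.5)
print(f"remaining margin: {margin:.3f} dB")  # ~1.6 dB left
```

With the same assumed losses against a 4 dB budget, the margin goes negative, which is why a single extra decibel inside the switch can make or break a link.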
Reliability at Scale
The system-wide impact, or “blast radius,” of an optical switch failure can halt an entire AI training job. At this scale, even 99.999% component reliability can translate to unacceptable system-level downtime. High component ratings only matter if they translate directly into acceptable system mean time between failures (MTBF) and overall availability. Rapid fault recovery also plays a critical role, as switching latency dictates whether dynamic workloads can continue without disrupting active AI jobs.
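Why five-nines components can still mean unacceptable system downtime follows from series-availability math: if every component must be up for the cluster to be up, the availabilities multiply. The component count below is a hypothetical round number, not a measured figure:

```python
def system_downtime_minutes(component_availability, n_components,
                            minutes_per_year=525_600):
    """Downtime per year for a series system: the system is up only
    when all n components are up, so A_sys = A_component ** n."""
    a_sys = component_availability ** n_components
    return (1.0 - a_sys) * minutes_per_year

# One five-nines component vs. 1,000 of them in series (assumed count)
print(f"{system_downtime_minutes(0.99999, 1):.2f} min/yr for one component")
print(f"{system_downtime_minutes(0.99999, 1000):.0f} min/yr for the system")
```

A single five-nines component is down about five minutes a year; a chain of a thousand of them is down for days, which is why operators reason in system MTBF and blast radius rather than per-part ratings.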
Manufacturing Readiness
A lab prototype and a production-grade platform are two very different things. Hyperscale operators require vendors capable of delivering thousands of units on schedule through proven, high-volume manufacturing processes. Moving from initial designs to mass production hinges on consistent assembly techniques and rigorous testing protocols to maintain yield.
The Molex High-Radix Optical Circuit Switch Platform
Molex developed the High-Radix Optical Circuit Switch (OCS) Platform to address the scaling limits faced by a major global hyperscaler. The design builds upon nearly twenty years of micro-electro-mechanical systems (MEMS) technology deployment and more than two million devices shipped into optical networking applications.
Breaking the Radix Barrier
At 544×544, the Molex solution represents the highest-radix MEMS-based OCS announced to date. The higher radix allows architects to build flatter superpod architectures with fewer switches and fewer hops. Achieving this density relies on a patented optical design that utilizes the full MEMS tilt range. This approach reduces the necessary MEMS deflection angle by 50%, allowing the system to scale massively while using a proven, highly stable structural design.
Optical and Switching Performance
This high-radix switch maintains stable, low insertion loss across all paths, with a typical insertion loss of 3 dB. Keeping loss predictable preserves tight optical link budgets across the network. The 544×544 switch also delivers predictable switching behavior for dynamic reconfiguration and fault recovery, with a maximum switching time under 150 ms and ongoing development targeting 100 ms. Switching this quickly allows operators to rapidly reroute traffic and bypass hardware failures to maintain continuous cluster operation.
Built for Production Deployment
Because MEMS operates in the optical domain with no active media in the light path, the Molex High-Radix OCS Platform remains agnostic to protocol and data rate, supporting 800 Gbps to 1.6 Tbps and beyond without hardware changes. Operating entirely in the photonic domain keeps power consumption remarkably low: the switch draws 245 watts to manage more than 500 ports. Molex supports the 544×544 switch with global optical manufacturing and advanced assembly capabilities. The assembly process pairs mechanical construction with software-driven calibration to establish and align every optical link.
The Complete Optical Ecosystem for AI
A high-radix optical circuit switch solves the core routing challenge, but it represents only one node in a massive physical web. Building an all-optical network at hyperscale requires a complete physical interconnect infrastructure.
Terminating over a thousand fibers on a single switch faceplate, using up to 600 pairs of LC-APC or LC-UPC adapters to maintain low insertion loss, creates immense physical density and routing demands. The ecosystem extends far beyond the chassis itself. Maintaining strict link budgets across the entire data center floor relies on high-performance optical cable assemblies, advanced fiber management and rigorous end-to-end testing.
Building an all-optical data center requires manufacturing depth across both the core switch and the surrounding physical layer infrastructure. By owning the full interconnect path, Molex equips architects with the complete physical ecosystem required to build and scale all-optical data center networks.
Discover how the Molex High-Radix OCS Platform provides the scale, reliability and performance required for next-generation AI data centers.