By Robert Bruce and Peter Hook, Iridian Spectral Technologies
Demand for data center storage and data transfer capacity has increased dramatically over the past decade1, exponentially increasing the load placed on backplane-to-backplane data server interconnects within each center.
Further, demand for data center interconnect bandwidth will continue to grow, driven by several factors, including improved storage technologies (e.g., flash memory and solid-state drives) that enhance the attractiveness of cloud storage, as well as requirements for dynamic allocation of server, storage, and network resources.
Both commercial and consumer users expect continuous data availability, facilitated by distributing virtual computing and storage resources across numerous physical devices, and they expect ever-increasing access speeds. Because a single request can trigger multiple data exchanges between servers in one or more data centers, cloud providers’ warehouse-scale data center servers have had to keep pace, progressing from 10 GbE to 100 GbE network adapters in common use, with 800 GbE (and even 1.6 TbE) forecast to become standard within the next few years.2
In a large data center utilizing a typical Clos topology (Fig. 1), hundreds of thousands of interconnects are required to maintain efficient communication among servers, not only within that facility but also with data centers in other locations. Clos provides a more direct interconnection between top-of-rack (TOR) switches and other servers: in the leaf-spine architecture, every leaf switch connects to every spine switch in the network fabric, and the spine switches maintain the same full mesh of connections to the leaf switches that the leaf switches maintain to the TOR switches.
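To illustrate why the interconnect count grows so quickly, the full-mesh wiring of a three-tier leaf-spine Clos fabric can be sketched with a simple calculation. The function and parameter names below are hypothetical, chosen only for this sketch; real fabrics also vary in oversubscription ratio and link redundancy, which this omits.

```python
def clos_link_count(num_tor: int, num_leaf: int, num_spine: int) -> int:
    """Count inter-switch links in a three-tier leaf-spine Clos fabric,
    assuming every TOR switch connects to every leaf switch and every
    leaf switch connects to every spine switch (a full mesh per tier)."""
    tor_to_leaf = num_tor * num_leaf      # full mesh: TOR tier to leaf tier
    leaf_to_spine = num_leaf * num_spine  # full mesh: leaf tier to spine tier
    return tor_to_leaf + leaf_to_spine

# Illustrative (not from the article): 576 TORs, 36 leaves, 36 spines
print(clos_link_count(576, 36, 36))  # 576*36 + 36*36 = 22032 links
```

Even this modest hypothetical fabric needs tens of thousands of links; scaling the tiers up, and multiplying across many fabrics per facility, is what pushes a large data center into the hundreds of thousands of interconnects.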