Jul 20 2010
Networking

InfiniBand in the Enterprise Data Center

The switching fabric's high-speed connection benefits attract more than just HPC users.

Before investing in a new technology, it pays to understand how it performs under extreme workloads. Few technologies have been proven under heavier loads than InfiniBand, a high-speed interconnect that has become a rising star in many of the world’s high-performance computing facilities.

But what is it about InfiniBand that has data center chiefs, analysts and vendors now banging the IB drum for its adoption in high-throughput transaction processing environments, particularly those using server virtualization, blade and cloud computing technologies?

Two things in particular: reliability and speed. In addition, network device makers are increasingly building InfiniBand products for full-scale deployment in data center infrastructures rather than just HPC clusters. Prime examples include InfiniBand switches from Cisco Systems, HP and Voltaire, along with network adapters from Sun Microsystems and QLogic.

Virtualization Virtuoso

In federal data centers, where agencies have launched major consolidation and virtualization efforts, the use of InfiniBand makes sense. “What’s the reason for virtualization? It brings the needs of high-performance computing to the enterprise,” says Gilad Shainer, senior director of technical marketing for Mellanox Technologies. “In high performance computing, you want every possible cycle of the CPU dedicated to the application. You want the CPU cores to communicate as fast as they can. You don’t want those CPUs to have to wait for data.”

Shainer points out that before virtualization, enterprise data centers didn’t use all available CPU cycles. As the virtual machine-to-server ratio grows, CPU cycle efficiency becomes much more critical.

Ethernet Alternative

How does InfiniBand differ from Ethernet? “What makes it better is higher performance, lower latency, shared I/O and four times the throughput of 10 Gigabit Ethernet,” says Marc Staimer, president of Dragon Slayer Consulting.

A large part of InfiniBand’s “secret sauce” comes from the protocol’s use of remote direct memory access (RDMA) to bypass much of the CPU and operating system overhead typically involved in Ethernet-based TCP/IP communications.
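
To make the zero-copy idea concrete, here is a minimal sketch using the libibverbs API. It is our own illustration, not drawn from any deployment discussed here, and it assumes an RDMA-capable adapter with the libibverbs stack installed. It registers an ordinary buffer with the host channel adapter so the hardware can move data in and out of it directly, without the kernel copying data through the TCP/IP stack. Full connection setup (queue pairs, completion queues) is omitted for brevity.

/* Minimal libibverbs sketch: register a buffer for RDMA access.
 * Assumes an RDMA-capable adapter and the libibverbs library. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devices[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "could not open device or allocate protection domain\n");
        return 1;
    }

    /* Register 4 KB of ordinary memory. The adapter can now move data
     * into and out of this buffer directly, bypassing kernel copies. */
    size_t length = 4096;
    void *buffer = malloc(length);
    struct ibv_mr *mr = ibv_reg_mr(pd, buffer, length,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* The rkey is what a remote peer uses to target this memory
     * in RDMA read/write operations. */
    printf("registered %zu bytes, rkey=0x%x\n", length, mr->rkey);

    ibv_dereg_mr(mr);
    free(buffer);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}

Once a buffer is registered, transfers are posted to the adapter as work requests and completed in hardware, which is where the latency and CPU savings over TCP/IP come from.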

The result is an “IB fabric [that] has grown a lot more intelligent than any other fabric we’ve seen before,” says Jeff Boles, Taneja Group senior analyst and director of validation services. “It tends to understand application flows a lot easier than other fabrics because of these RDMA roots. This means InfiniBand vendors can make their devices more intelligent.”

In Action

A look at IB’s use for HPC at the Oak Ridge National Laboratory offers insight into its scalability and potential for broader use.

At 208 systems, nearly 42 percent of the Top 500 supercomputer sites, InfiniBand is the only standard interconnect technology whose adoption is still growing.

SOURCE: Mellanox Technologies, Top500.org

InfiniBand has become an integral part of the Scalable I/O Network (SION) at the lab’s National Center for Computational Sciences, according to Galen Shipman, technology integration group leader. SION offers what Shipman describes as a highly scalable “backplane of services” that interconnects all major platforms, linking partitions on the organization’s Cray-based Jaguar supercomputer with those used for varied simulation, development and visualization projects.

SION is a double data rate IB network with more than 889 gigabits per second of bandwidth over 3,000 InfiniBand ports. It uses IB switches, routers and channel adapters from Mellanox and Cisco Systems, along with IB-enabled storage controllers.
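
For a rough sense of what “double data rate” means per link: a 4X DDR InfiniBand port signals at 20 Gbps and carries 16 Gbps of payload after 8b/10b encoding. The back-of-the-envelope sketch below is our own illustration, not an ORNL figure; it assumes every port is 4X DDR and simply multiplies the per-port data rate by the port count cited for SION. Delivered fabric bandwidth depends on topology and will be lower than this raw sum.

/* Back-of-the-envelope arithmetic for a 4X DDR InfiniBand fabric.
 * Assumes every port is 4X DDR: 20 Gbps signaling, 16 Gbps of data
 * after 8b/10b encoding. This is the raw sum of port data rates,
 * not the fabric's bisection or delivered bandwidth. */
#include <stdio.h>

int main(void)
{
    const double per_port_gbps = 16.0;  /* 4X DDR payload rate */
    const int ports = 3000;             /* port count cited for SION */

    double total_gbps = per_port_gbps * ports;
    printf("raw sum of port data rates: %.0f Gbps (about %.1f TB/s)\n",
           total_gbps, total_gbps / 8000.0);
    return 0;
}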

Despite its proven track record in HPC deployments, however, IB is not likely to replace Ethernet any time soon in every large data center. Instead, Taneja’s Boles maintains InfiniBand will likely be a welcome localized fabric and valuable adjunct to existing Ethernet fabrics in the enterprise. Mellanox’s Shainer notes there’s still some pro-Ethernet religiosity as well. And despite Shipman’s success with IB, he also notes there’s still “room for improvement” in documentation and management tools for large-scale deployment of IB fabrics.

Yet, the rock-solid capabilities, high performance and low latencies warrant a careful look, Shipman advises. “Depending on your specific requirements, InfiniBand could be considered for system area networks with demanding bandwidth and latency requirements and has proved a viable alternative to Fibre Channel–attached storage in our center.”
