Mellanox Connect-IB™ Single-Port InfiniBand Host Channel Adapter Card
Connect-IB adapter cards provide the highest-performing and most scalable interconnect solution for server and storage systems. High-Performance Computing, Web 2.0, Cloud, Big Data, Financial Services, Virtualized Data Center and Storage applications will achieve significant performance improvements, resulting in reduced completion times and lower cost per operation.
World-Class Performance
Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.
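As a back-of-the-envelope check (not from the original brief), the headline figures follow directly from the published FDR and PCIe line rates:

```latex
% FDR: 14.0625 Gb/s per lane, 64b/66b encoding; PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
\begin{align*}
\text{FDR 4X data rate} &= 4 \times 14.0625 \times \tfrac{64}{66} \approx 54.5\ \text{Gb/s per port}\\
\text{Two FDR ports}    &\approx 109\ \text{Gb/s} \;>\; 100\ \text{Gb/s}\\
\text{PCIe 3.0 x16}     &= 16 \times 8 \times \tfrac{128}{130} \approx 126\ \text{Gb/s per direction}
\end{align*}
```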
Connect-IB offloads protocol processing and data movement from the CPU to the interconnect, maximizing CPU efficiency and accelerating parallel and data-intensive application performance. Connect-IB supports new data operations, including noncontiguous memory transfers, which eliminate unnecessary data copy operations and CPU overhead. Additional application acceleration is achieved with a 4X improvement in message rate compared with previous generations of InfiniBand cards.
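As an illustration, here is a minimal sketch of how an application hands noncontiguous buffers to the HCA through the standard libibverbs scatter/gather list; the function name and parameters are hypothetical, and the queue pair and memory regions are assumed to have been set up earlier:

```c
/* Sketch (hypothetical helper): post one RDMA write that gathers from
 * two noncontiguous local buffers using the standard libibverbs API.
 * qp, mr_a, mr_b, remote_addr, and rkey are assumed to exist already
 * (device open, PD, CQ, and QP transition to RTS done elsewhere). */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

int post_gather_write(struct ibv_qp *qp,
                      struct ibv_mr *mr_a, void *buf_a, size_t len_a,
                      struct ibv_mr *mr_b, void *buf_b, size_t len_b,
                      uint64_t remote_addr, uint32_t rkey)
{
    /* Two scatter/gather entries: the HCA walks the list and streams
     * both regions in one wire-level transfer, so the CPU never packs
     * them into a contiguous staging buffer. */
    struct ibv_sge sge[2] = {
        { .addr = (uint64_t)(uintptr_t)buf_a, .length = (uint32_t)len_a, .lkey = mr_a->lkey },
        { .addr = (uint64_t)(uintptr_t)buf_b, .length = (uint32_t)len_b, .lkey = mr_b->lkey },
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = sge;
    wr.num_sge             = 2;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    return ibv_post_send(qp, &wr, &bad_wr);  /* 0 on success */
}
```

In practice the number of entries per work request is bounded by the device's max_sge capability, queried via ibv_query_device().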
The next level of scalability and performance requires a new generation of data and application acceleration capabilities. Mellanox Messaging (MXM) and Fabric Collective Accelerator (FCA), utilizing CORE-Direct™ technology, accelerate MPI and PGAS communication performance, taking full advantage of Connect-IB's enhanced capabilities. Furthermore, Connect-IB introduces an innovative transport service, Dynamically Connected Transport, to ensure unlimited scalability for clustering of servers and storage systems.
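For context, the sketch below shows the kind of collective that MXM and FCA accelerate. The code itself is standard MPI; whether the reduction tree is actually offloaded to the HCAs via CORE-Direct depends on the MPI library and fabric configuration, not on anything in this program:

```c
/* Minimal MPI collective example; compile with an MPI wrapper (mpicc). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, sum;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI_Allreduce is a classic CORE-Direct target: the reduction can
     * run on the adapters, freeing the CPUs during the collective. */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}
```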
High Performance Storage
Storage nodes will see improved performance with the higher bandwidth FDR delivers, and standard block and file access protocols can leverage InfiniBand RDMA for even better performance. Connect-IB also supports hardware checking of T10 Data Integrity Field / Protection Information (DIF/PI) and other signature types, reducing the CPU overhead and accelerating the data to the application. Signature translation and handover are also done by the adapter, further reducing the load on the CPU. Consolidating compute and storage over FDR InfiniBand with Connect-IB achieves superior performance while reducing data center costs and complexities.
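To make the offload concrete, the sketch below computes the T10 DIF guard in software using the standard CRC-16/T10-DIF polynomial (0x8BB7); this per-block work is what the adapter's signature hardware removes from the host CPU. The struct and helper names are illustrative, not a Mellanox API:

```c
/* Sketch: software version of the T10 DIF guard check that Connect-IB
 * performs in hardware. Each 512-byte block carries an 8-byte DIF
 * tuple: a 16-bit CRC guard, a 16-bit application tag, and a 32-bit
 * reference tag. */
#include <stdint.h>
#include <stddef.h>

/* CRC-16/T10-DIF: poly 0x8BB7, init 0x0000, MSB-first, no final XOR. */
static uint16_t t10dif_crc(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)(data[i] << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

struct t10_dif {          /* 8-byte tuple appended to each block */
    uint16_t guard;       /* CRC over the 512-byte block          */
    uint16_t app_tag;     /* application-defined                  */
    uint32_t ref_tag;     /* typically the low 32 bits of the LBA */
};

/* Returns nonzero if the stored guard matches the block contents. */
int dif_guard_ok(const uint8_t block[512], const struct t10_dif *dif)
{
    return t10dif_crc(block, 512) == dif->guard;
}
```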
All Mellanox adapter cards are supported by Windows and by major Linux distributions. Connect-IB adapters support OpenFabrics-based RDMA protocols and software, and are compatible with configuration and management tools from OEMs and operating system vendors.
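Because the card exposes the standard OpenFabrics verbs interface, generic RDMA code discovers it like any other HCA. A minimal sketch using only core libibverbs calls (link with -libverbs):

```c
/* Enumerate RDMA devices through the standard OpenFabrics verbs API. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list)
        return 1;

    for (int i = 0; i < num; i++)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(list[i]));

    ibv_free_device_list(list);
    return 0;
}
```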
Benefits
- World-class cluster, network, and storage performance
- Guaranteed bandwidth and low-latency services
- I/O consolidation
- Virtualization acceleration
- Power efficient
- Scales to tens-of-thousands of nodes

Key Features
- Greater than 100Gb/s over InfiniBand
- Greater than 130M messages/sec
- 1us MPI ping latency
- PCI Express 3.0 x8
- CPU offload of transport operations
- Application offload
- GPU communication acceleration
- End-to-end internal data protection
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization