Mellanox Connect-IB™ Single-Port FDR 56Gb/s InfiniBand Host Channel Adapter Card - Part ID: MCB191A-FCAT

Connect-IB Host Channel Adapter, single-port QSFP, FDR 56Gb/s, PCIe3.0 x8, tall bracket, RoHS R6


Weight: 20 lbs
Ships within 3-5 days
Availability: In stock

Call us for a special price: 408 895 5000 x1
To buy this product, please contact us: call +(1) 408-943-8000 or use "Enquire Now".


Mellanox Connect-IB™ Single-Port InfiniBand Host Channel Adapter Card

Connect-IB adapter cards provide the highest performing and most scalable interconnect solution for server and storage systems. High-Performance Computing, Web 2.0, Cloud, Big Data, Financial Services, Virtualized Data Centers and Storage applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.

World Class Performance

Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.
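
As a rough, back-of-envelope check on these figures (an illustrative calculation, not Mellanox data), the short C program below compares the usable data rate of an FDR 4X link against the PCIe generations mentioned above, using the standard line-coding overheads (64b/66b for FDR, 128b/130b for PCIe 3.0, 8b/10b for PCIe 2.0):

    /* Back-of-envelope comparison of FDR InfiniBand and PCIe link bandwidths.
     * Illustrative only -- theoretical signaling rates minus line-coding
     * overhead, not measured Connect-IB throughput. */
    #include <stdio.h>

    int main(void)
    {
        /* FDR InfiniBand: 14.0625 Gb/s per lane, 4 lanes, 64b/66b encoding */
        double fdr_4x    = 14.0625 * 4.0 * (64.0 / 66.0);   /* ~54.5 Gb/s */

        /* PCIe 3.0: 8 GT/s per lane, 128b/130b encoding */
        double pcie3_x8  = 8.0 * 8.0  * (128.0 / 130.0);    /* ~63 Gb/s   */
        double pcie3_x16 = 8.0 * 16.0 * (128.0 / 130.0);    /* ~126 Gb/s  */

        /* PCIe 2.0: 5 GT/s per lane, 8b/10b encoding */
        double pcie2_x8  = 5.0 * 8.0  * (8.0 / 10.0);       /* 32 Gb/s    */
        double pcie2_x16 = 5.0 * 16.0 * (8.0 / 10.0);       /* 64 Gb/s    */

        printf("FDR 4X data rate   : %6.1f Gb/s\n", fdr_4x);
        printf("PCIe 3.0 x8 / x16  : %6.1f / %6.1f Gb/s\n", pcie3_x8, pcie3_x16);
        printf("PCIe 2.0 x8 / x16  : %6.1f / %6.1f Gb/s\n", pcie2_x8, pcie2_x16);

        /* A PCIe 3.0 x16 host link (~126 Gb/s) can feed two FDR ports
         * (~109 Gb/s of data), while a PCIe 2.0 x16 link (64 Gb/s) is enough
         * for one full FDR port -- roughly twice what a PCIe 2.0 x8 adapter
         * (32 Gb/s) can sustain. */
        return 0;
    }

The numbers show why a PCIe 3.0 x16 host interface is needed to keep two FDR ports busy, while a PCIe 2.0 x16 slot can still feed one full FDR port, roughly double what a PCIe 2.0 x8 adapter can deliver.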

Connect-IB offloads protocol processing and data movement from the CPU to the interconnect, maximizing CPU efficiency and accelerating parallel and data-intensive application performance. Connect-IB supports new data operations, including non-contiguous memory transfers, which eliminate unnecessary data copy operations and CPU overhead. Additional application acceleration is achieved with a 4X improvement in message rate compared with previous generations of InfiniBand cards.
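
To make the idea of non-contiguous transfers concrete, the sketch below uses the standard libibverbs scatter/gather interface to post a single RDMA write gathered from two separate local buffers. The function name post_gathered_write and its parameters are hypothetical; it assumes a connected queue pair and pre-registered memory regions, and it illustrates the general verbs mechanism rather than any Connect-IB-specific API:

    /* Sketch: one RDMA write whose payload is gathered from two
     * non-contiguous local buffers, via the standard libibverbs API.
     * Assumes the QP is connected and both buffers are registered;
     * connection setup and error handling are omitted for brevity. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int post_gathered_write(struct ibv_qp *qp,
                            void *buf_a, size_t len_a, uint32_t lkey_a,
                            void *buf_b, size_t len_b, uint32_t lkey_b,
                            uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge[2];
        struct ibv_send_wr wr, *bad_wr = NULL;

        /* Two scatter/gather entries: the HCA reads both regions directly,
         * so no copy into a contiguous staging buffer is needed. */
        sge[0].addr   = (uintptr_t)buf_a;
        sge[0].length = (uint32_t)len_a;
        sge[0].lkey   = lkey_a;
        sge[1].addr   = (uintptr_t)buf_b;
        sge[1].length = (uint32_t)len_b;
        sge[1].lkey   = lkey_b;

        memset(&wr, 0, sizeof(wr));
        wr.wr_id               = 1;
        wr.sg_list             = sge;
        wr.num_sge             = 2;
        wr.opcode              = IBV_WR_RDMA_WRITE;
        wr.send_flags          = IBV_SEND_SIGNALED;
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        /* The adapter performs the transfer; the CPU only posts the request. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }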

Unlimited Scalability

The next level of scalability and performance requires a new generation of data and application accelerations. Mellanox Messaging (MXM) and Fabric Collective Accelerator (FCA), utilizing CORE-Direct™ technology, accelerate MPI and PGAS communication performance, taking full advantage of Connect-IB's enhanced capabilities. Furthermore, Connect-IB introduces an innovative transport service, Dynamically Connected Transport, to ensure unlimited scalability for clustering of servers and storage systems.
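
For context, the collectives that FCA and CORE-Direct accelerate are ordinary MPI operations; the application code itself does not change. The minimal, generic MPI example below performs a global sum with MPI_Allreduce, the kind of collective that can be offloaded to the fabric on a Connect-IB cluster:

    /* Minimal MPI example: a global sum via MPI_Allreduce. On an InfiniBand
     * cluster this collective is the kind of operation FCA/CORE-Direct can
     * offload; the application code is unchanged either way. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank = 0, size = 1;
        double local, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = (double)rank;   /* each rank contributes its own value */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %.0f\n", size, global);

        MPI_Finalize();
        return 0;
    }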

High Performance Storage

Storage nodes will see improved performance with the higher bandwidth FDR delivers, and standard block and file access protocols can leverage InfiniBand RDMA for even better performance. Connect-IB also supports hardware checking of T10 Data Integrity Field / Protection Information (DIF/PI) and other signature types, reducing the CPU overhead and accelerating the data to the application. Signature translation and handover are also done by the adapter, further reducing the load on the CPU. Consolidating compute and storage over FDR InfiniBand with Connect-IB achieves superior performance while reducing data center costs and complexities.

Software Support

All Mellanox adapter cards are supported by Windows and major Linux distributions. Connect-IB adapters support OpenFabrics-based RDMA protocols and software, and are compatible with configuration and management tools from OEMs and operating system vendors.
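
As a quick sanity check that the OpenFabrics stack sees an installed adapter, the short, generic libibverbs program below lists the RDMA devices visible to the verbs layer (on Linux, compile with gcc -o list_devices list_devices.c -libverbs):

    /* List the RDMA devices visible to the OpenFabrics verbs layer.
     * A Connect-IB HCA typically shows up as an "mlx5_*" device. */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **list = ibv_get_device_list(&num);

        if (!list) {
            perror("ibv_get_device_list");
            return 1;
        }

        printf("found %d RDMA device(s)\n", num);
        for (int i = 0; i < num; i++)
            printf("  %s\n", ibv_get_device_name(list[i]));

        ibv_free_device_list(list);
        return 0;
    }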


  • World-class cluster, network, and storage performance
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes


  • Greater than 100Gb/s over InfiniBand
  • Greater than 130M messages/sec
  • 1us MPI ping latency
  • PCI Express 3.0 x8
  • CPU offload of transport operations
  • Application offload
  • GPU communication acceleration
  • End-to-end internal data protection
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

Technical Specification

Feature Summary* 

  • IBTA Specification 1.2.1 compliant
  • FDR 56Gb/s InfiniBand
  • Hardware-based congestion control
  • 16 million I/O channels
  • 256 to 4Kbyte MTU, 1Gbyte messages
  • Hardware-based reliable transport
  • Extended Reliable Connected transport
  • Dynamically Connected transport service
  • Signature-protected control objects
  • Collective operations offloads
  • GPU communication acceleration
  • Enhanced Atomic operations
  • T10-compliant DIF/PI support
  • Hardware-based data signature handovers
  • Remote boot over InfiniBand
  • Single Root IOV**
  • Up to 16 physical functions, 256 virtual functions
  • Address translation and protection
  • Dedicated adapter resources
  • Multiple queues per virtual machine
  • Enhanced QoS for vNICs and vHCAs
  • VMware NetQueue support
  • OpenMPI, IBM PE, Intel MPI, OSU MPI (MVAPICH/2), Platform MPI, UPC, Mellanox SHMEM
  • SRP, iSER, NFS RDMA, SMB Direct
  • uDAPL


  • PCI Express 2.0 or 3.0 compliant
  • Auto-negotiates to x8, x4, x2, or x1
  • Support for MSI-X mechanisms
  • Interoperable with InfiniBand switches
  • Passive copper cable with ESD protection
  • Powered connectors for optical and active cable support
  • Novell SLES, Red Hat Enterprise Linux (RHEL), and other Linux distributions
  • Microsoft Windows Server 2008/CCS 2003, HPC Server 2008
  • VMware ESX 5.1
  • OpenFabrics Enterprise Distribution (OFED)
  • OpenFabrics Windows Distribution (WinOF)

Manufacturer Warranty

1 Year

* This brief describes hardware features and capabilities. Please refer to the driver release notes on www.mellanox.com for feature availability.

** Future support

