Supermicro SuperBlade Networking AOC-IBH-XDD

Dual-Port, Low Latency InfiniBand Adapter Cards For SuperBlade
SKU:

AOC-IBH-XDD

Weight:
1 lb
Shipping:
Ships within 3-5 Days
Price:
$515.90 (regular price $619.08, save 17%)

Availability: In stock

FREE SHIPPING AVAILABLE

Overview

Dual-Port, Low Latency InfiniBand Adapter Cards For SuperBlade AOC-IBH-XDD

  • This InfiniBand mezzanine card for the SuperBlade delivers low latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments.
  • Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion times and a lower cost per operation.
  • The AOC-IBH-XDD simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and it provides enhanced performance in virtualized server environments.
  • In addition to this outstanding InfiniBand capability, the AOC-IBH-XDD can alternatively be configured as a 10-Gigabit Ethernet NIC when used with the Supermicro SBM-XEM-002M 10-Gigabit Pass-Through module or the SBM-XEM-X10SM 10Gbps Ethernet switch (see the sketch below).
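
The port personality is normally determined by the SuperBlade module the card connects to, but on a Linux host the mlx4 driver for the ConnectX chip also exposes it through sysfs. The C sketch below is a minimal illustration, not a supported procedure: the PCI address 0000:41:00.0 is a placeholder, and the exact sysfs attribute (mlx4_port1, accepting values such as "ib", "eth", or "auto") depends on the driver version in use.

    #include <stdio.h>

    /* Placeholder PCI address of the ConnectX device; adjust for your system. */
    #define PORT_ATTR "/sys/bus/pci/devices/0000:41:00.0/mlx4_port1"

    int main(void)
    {
        char mode[16] = {0};

        /* Read the current port protocol ("ib" or "eth"). */
        FILE *f = fopen(PORT_ATTR, "r");
        if (!f) { perror("open " PORT_ATTR); return 1; }
        if (fgets(mode, sizeof(mode), f))
            printf("port 1 mode: %s", mode);
        fclose(f);

        /* Switch the port to Ethernet (requires root and driver support). */
        f = fopen(PORT_ATTR, "w");
        if (!f) { perror("reopen for write"); return 1; }
        fputs("eth\n", f);
        fclose(f);
        return 0;
    }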

Technical Specification

Compliance

RoHS

Download

See Appendix A of the SuperBlade Network Modules User's Manual for installation instructions.

Firmware

Highlights

  • Dual 20Gb/s InfiniBand ports or 10Gb/s Ethernet ports
  • CPU offload of transport operations
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • TCP/UDP/IP stateless offload
  • Full support for Intel I/OAT

Specification

InfiniBand:

  • Mellanox ConnectX IB DDR Chip
  • Dual 4X InfiniBand ports
  • 20Gb/s per port
  • RDMA, Send/Receive semantics
  • Hardware-based congestion control
  • Atomic operations
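
On a Linux host with the rdma-core verbs library installed, the link configuration listed above (dual 4X ports at DDR signaling, 20Gb/s per port) can be confirmed with a short libibverbs query. The C sketch below is illustrative only; it assumes the standard verbs encodings (active_width 2 = 4X, active_speed 2 = DDR) and does minimal error handling. Build with: gcc query_ports.c -libverbs

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int n;
        struct ibv_device **devs = ibv_get_device_list(&n);
        if (!devs || n == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        for (int i = 0; i < n; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr dattr;
            if (ibv_query_device(ctx, &dattr) == 0) {
                printf("%s: %d port(s)\n",
                       ibv_get_device_name(devs[i]), dattr.phys_port_cnt);
                for (int p = 1; p <= dattr.phys_port_cnt; p++) {
                    struct ibv_port_attr pattr;
                    if (ibv_query_port(ctx, p, &pattr))
                        continue;
                    /* For this card one would expect width 2 (4X) and
                       speed 2 (DDR, 5Gb/s per lane), i.e. 20Gb/s per port. */
                    printf("  port %d: state=%d width=%d speed=%d\n",
                           p, pattr.state, pattr.active_width,
                           pattr.active_speed);
                }
            }
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devs);
        return 0;
    }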

Interface:

  • SuperBlade Mezzanine Card

Connectivity:

  • Interoperable with InfiniBand switches through SuperBlade InfiniBand Switch (SBM-IBS-001)
  • Interoperable with 10 Gigabit Ethernet switches through SuperBlade 10G Ethernet Pass-Through Module (SBM-XEM-002)

Hardware-based I/O Virtualization:

  • Address translation and protection
  • Multiple queues per virtual machine
  • Native OS performance
  • Complementary to Intel and AMD IOMMU

CPU Offloads:

  • TCP/UDP/IP stateless offload
  • Intelligent interrupt coalescence (see the sketch below)
  • Full support for Intel I/OAT
  • Compliant with Microsoft RSS and NetDMA
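
When the card is operating as a 10-Gigabit Ethernet NIC, its coalescing behavior is exposed through the standard Linux ethtool interface. The C sketch below reads the current interrupt-coalescing parameters via the SIOCETHTOOL ioctl; the interface name eth2 is a placeholder, and the values reported depend on the driver defaults.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        const char *ifname = "eth2";   /* placeholder interface name */
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* ETHTOOL_GCOALESCE fetches the interrupt-coalescing settings. */
        struct ethtool_coalesce ec;
        memset(&ec, 0, sizeof(ec));
        ec.cmd = ETHTOOL_GCOALESCE;

        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (void *)&ec;

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
            perror("SIOCETHTOOL");
            close(fd);
            return 1;
        }
        printf("%s: rx-usecs=%u rx-frames=%u tx-usecs=%u tx-frames=%u\n",
               ifname, ec.rx_coalesce_usecs, ec.rx_max_coalesced_frames,
               ec.tx_coalesce_usecs, ec.tx_max_coalesced_frames);
        close(fd);
        return 0;
    }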

Storage Support:

  • T10-compliant Data Integrity Field (DIF) support
  • Fibre Channel over InfiniBand or Fibre Channel over Ethernet

Operating Systems/Distributions (InfiniBand):

  • Novell, Red Hat, Fedora, and others
  • Microsoft Windows Server

Operating Systems/Distributions (Ethernet):

  • Red Hat Linux

Operating Conditions:

  • Operating temperature: 0 to 55°C

Compatible Servers

  • All Enterprise Blade Servers

