The Liquid Computing LiquidIQ system

Machine type: Distributed-memory multi-processor system
Model: LiquidIQ
Operating system: Linux (RedHat EL4)
Connection structure: Crossbar
Compilers: Fortran 95, ANSI C, C++, Berkeley UPC
Vendor's information web page: www.liquidcomputing.com/product/product_product.php
Year of introduction: 2006

System parameters:

Model: LiquidIQ
Clock cycle: 2.8 GHz
Theor. peak performance:
  Per core (64-bits): 5.6 Gflop/s
  Maximal: 10.7 Tflop/s
Main memory:
  Memory (per chassis): ≤ 1.28 TB
No. of processors: ≤ 960
Communication bandwidth:
  Point-to-point: 16 GB/s
  Aggregate (per chassis): 16 GB/s
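For reference, these figures are mutually consistent if one assumes the customary two floating-point results per clock cycle for an Opteron core and counts the 960 processors as dual-core parts; both assumptions come from the remarks below, not from the table itself:

\[
\begin{aligned}
2.8\ \mathrm{GHz} \times 2\ \mathrm{flop/cycle} &= 5.6\ \mathrm{Gflop/s\ per\ core},\\
960\ \mathrm{processors} \times 2\ \mathrm{cores} \times 5.6\ \mathrm{Gflop/s} &\approx 10.7\ \mathrm{Tflop/s},\\
20\ \mathrm{modules} \times 64\ \mathrm{GB} &= 1.28\ \mathrm{TB\ per\ chassis}.
\end{aligned}
\]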

Remarks:

Liquid Computing announced the LiquidIQ system in 2006. The systems are built very densely, with up to 20 four-processor compute modules in a chassis. A maximum of 12 chassis can be coupled to form a system with very high bandwidth (16 GB/s) and very low communication latency. The basic processor in the compute modules is a dual-core AMD Opteron from the 800 series. The 8 cores within a compute module can access the module's memory of at most 64 GB in SMP mode, so the system supports a hybrid OpenMP/MPI programming style, as sketched below.
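A minimal sketch of that hybrid style, with one MPI process per compute module and OpenMP threads over its 8 cores; the compilation command and the process-per-module mapping are generic assumptions, not LiquidIQ-specific details:

/* Hybrid MPI/OpenMP sketch: one MPI process per compute module, with
 * OpenMP threads spread over the module's 8 cores.  All MPI calls are
 * made outside the parallel region, which is safe even with an
 * MPI-1.2-level library.  Compile (generic assumption):
 *     mpicc -fopenmp hybrid.c -o hybrid                               */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, i;
    double local_sum = 0.0, global_sum = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Shared-memory parallelism over the cores of one compute module. */
    #pragma omp parallel for reduction(+:local_sum)
    for (i = 0; i < 1000000; i++)
        local_sum += 1.0 / (1.0 + i + rank);

    /* Message passing combines the per-module results across the system. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d MPI processes x %d OpenMP threads each: sum = %g\n",
               nprocs, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}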

The chassis that house the modules are cable-less: the modules connect to a backplane that acts as a crossbar between the modules. According to the documentation, the guaranteed throughput of the backplane is 16 GB/s (2 GB/s for each of the 8 communication planes). The inter-chassis bisection bandwidth is oversized by a ratio of 1.1, i.e., 17.6 GB/s, to offset the additional delays between chassis. The latency given in the documentation is low: 2.5 µs, irrespective of the relative positions of the communicating processors. This is brought about by a combination of the chassis backplanes and Liquid's multi-chassis switches. Liquid provides its own MPI implementation, which is as yet at the level of MPICH 1.2. OpenMP is available via the supported compiler set, and Berkeley's UPC is also supported. Latency and bandwidth figures of this kind are typically checked with a simple MPI ping-pong test, as sketched below.
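A minimal ping-pong sketch of such a measurement; the message sizes, repetition count, and rank placement are arbitrary assumptions rather than LiquidIQ benchmark settings:

/* Minimal MPI ping-pong sketch: ranks 0 and 1 exchange messages of
 * increasing size; the one-way time of small messages approximates the
 * latency, the rate of large messages approximates the point-to-point
 * bandwidth.  Generic MPI-1 code, not LiquidIQ-specific.              */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, i, size, reps = 1000;
    double t0, t;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (size = 8; size <= 4 * 1024 * 1024; size *= 16) {
        char *buf = malloc(size);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t = (MPI_Wtime() - t0) / (2.0 * reps);   /* one-way time */

        if (rank == 0)
            printf("%9d bytes: %8.2f us  %8.2f MB/s\n",
                   size, t * 1e6, size / t / 1e6);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}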

Liquid Computing prides itself on the energy efficiency of the systems: about 14 kW per chassis at peak. A maximally configured system (not counting the I/O configuration) therefore has a maximal power consumption of 168 kW.

Measured Performances:
There are as yet no independent performance results available for the system. Liquid Computing itself has published some partial results from the HPCC Benchmark [22] in press releases, but there is no official entry for the system on the HPCC web site yet.