The Hitachi BladeSymphony


Machine type: RISC-based distributed-memory multi-processor
Models: BladeSymphony
Operating system: Linux (RedHat EL4), MS Windows
Connection structure: Fully connected SMP nodes (see remarks)
Compilers: Fortran 77, Fortran 95, Parallel Fortran, C, C++
Vendors information Web page: www.hitachi.co.jp/products/bladesymphony_global/products03.html
Year of introduction: 2005

System parameters:

Model: BladeSymphony
Clock cycle: 1.66 GHz
Theor. peak performance:
  Per core (64-bit): 6.64 Gflop/s
  Per frame of 64 processors: 850 Gflop/s
Memory:
  Memory/frame: ≤ 128 GB
No. of processors: 4–64
Communication bandwidth:
  Point-to-point: ---

Remarks:

The Hitachi BladeSymphony is one of the many Itanium-based parallel servers currently on the market. Still, it differs from most other machines in some respects. First, a BladeSymphony frame can contain up to 4 modules, each holding a maximum of 8 two-processor blades. The 16 processors in a module constitute an SMP node, like the nodes of the IBM eServer p-series. Four such modules are housed in a frame and can communicate via a 4×4 crossbar. Unfortunately, Hitachi does not publish bandwidth figures for the communication between modules, nor for that within a module. Hitachi offers the blades with processors of various speeds. The fastest of these runs at 1.66 GHz, from which it can be derived that the dual-core Montecito processor is used. This makes the Theoretical Peak Performance of a 64-processor frame 850 Gflop/s.
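The peak figures quoted above can be reproduced with a short calculation. This is a sketch, assuming (as is standard for the Itanium 2) 4 floating-point operations per core per clock cycle from two fused multiply-add units; the dual-core Montecito and the 64-processor frame size are taken from the text.

```python
# Theoretical peak of a BladeSymphony frame, derived from the clock speed.
# Assumption: Itanium 2 sustains 4 flops/cycle/core (2 FMA units, 2 flops each).
CLOCK_GHZ = 1.66       # fastest blade variant mentioned in the text
FLOPS_PER_CYCLE = 4    # assumed: 2 fused multiply-adds per cycle
CORES_PER_PROC = 2     # dual-core Montecito
PROCS_PER_FRAME = 64   # maximum processors per frame

peak_per_core = CLOCK_GHZ * FLOPS_PER_CYCLE                       # Gflop/s
peak_per_frame = peak_per_core * CORES_PER_PROC * PROCS_PER_FRAME # Gflop/s

print(f"Per core:  {peak_per_core:.2f} Gflop/s")   # 6.64 Gflop/s
print(f"Per frame: {peak_per_frame:.0f} Gflop/s")  # 850 Gflop/s
```

The per-core value matches the 6.64 Gflop/s in the table, and 64 dual-core processors then yield the quoted 850 Gflop/s frame peak.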

Another distinctive feature of the BladeSymphony is that blades with 2 Intel Xeon processors are also offered. In this case, however, only 6 blades can be housed in a module. In principle, modules with Itanium processors and modules with Xeon processors can be mixed within a system, although in practice this will rarely be done.

Hitachi makes no mention of an interconnect technology for clustering frames into larger systems, but this can obviously be done with third-party networks like Infiniband, Quadrics, etc. All in all, the impression is that Hitachi is hardly interested in marketing the system in the HPC arena, but rather targets the high-end commercial server market.

Like the other Japanese vendors (see, e.g., the Fujitsu/Siemens PRIMEQUEST and the NEC Express5800/1000), Hitachi very much stresses the RAS features of the system. Almost all failing components can be replaced while the system is in operation, which makes it very resilient against system-wide crashes.

Note: Large HPC configurations of the BladeSymphony are not sold in Europe, as they are judged by Hitachi to be of insufficient economic interest.

Measured Performances:
The BladeSymphony was introduced in November 2005 and as yet no independent performance figures are available.