System parameters:
Remarks: There is a multitude of high-end servers in the eServer p-series, but IBM singles out the POWER5+-based p575 model specifically for HPC. The eServer p575 is the successor of the RS/6000 SP and retains much of that system's macro structure: multi-CPU nodes are connected within a frame either by a dedicated switch or by other means, such as switched Ethernet. The structure of the nodes, however, has changed considerably, see POWER5+. Up to 8 Dual Chip Modules (DCMs) are housed in a node, for a total of 8 or 16 cores per node depending on whether the single- or dual-core version of the chip is used. For High-Performance Computing IBM recommends employing the 8-CPU, single-core nodes, because a higher effective bandwidth from the L2 cache can be expected in this case. For less data-intensive work that primarily uses the L1 cache the difference would be small, while there is a large cost advantage in using the 16-CPU nodes. The dual-core chips run at a lower clock frequency because of power-dissipation considerations.

The p575 is accessed through a front-end control workstation that also monitors system failures. Failing nodes can be taken offline and exchanged without interrupting service.

The so-called Federation switch is the fourth generation of the high-performance interconnects made for the p575 series. Like its predecessors, it is an Ω-switch as described in the section on SM-MIMD systems. It has a bi-directional link speed of 2 GB/s and an MPI latency of 5–7 µs. Although we mention only the highest-speed communication option, the high-performance switch, a wide range of alternatives can be chosen instead, e.g., InfiniBand or Gbit Ethernet. Applications can be run using PVM or MPI. IBM used to support High Performance Fortran, both a proprietary version and a compiler from the Portland Group; it is not clear whether this is still the case.
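The destination-tag routing that characterises an Ω-network of this kind can be sketched as follows. This is a minimal textbook illustration, not IBM's actual Federation routing logic; the function names and the simulation itself are our own:

```python
def rotl(x, n):
    """Rotate an n-bit address left by one position: the perfect-shuffle
    wiring that connects successive stages of an Omega network."""
    return ((x << 1) | (x >> (n - 1))) & ((1 << n) - 1)

def omega_route(src, dst, n):
    """Trace a message from src to dst through an Omega network with
    2**n endpoints and n stages of 2x2 switches.

    Destination-tag routing: at stage i, the 2x2 switch forwards the
    message to the output port given by bit i of dst (most significant
    bit first), regardless of the source address.
    Returns the list of positions visited, starting at src."""
    path = [src]
    pos = src
    for i in range(n):
        pos = rotl(pos, n)                  # perfect-shuffle link
        bit = (dst >> (n - 1 - i)) & 1      # destination-tag bit
        pos = (pos & ~1) | bit              # switch output port
        path.append(pos)
    return path
```

After log2(N) stages every message arrives at its destination port, which is why the switch setting can be computed locally from the destination address alone.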
IBM uses its own PVM version from which the XDR data-format converter has been stripped. This results in lower overhead at the cost of generality. The MPI implementation, MPI-F, is likewise optimised for the p575-based systems. As the nodes are in effect shared-memory SMP systems, OpenMP can be employed for shared-memory parallelism within a node, and it can be freely mixed with MPI if needed. In addition to its own AIX OS, IBM also supports some Linux distributions: the professional versions of both Red Hat and SuSE Linux are available for the p575 series. The standard commercial models contain up to 128 nodes; however, on special request systems with up to 512 nodes can be built. This largest configuration is used in the table.
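A simple latency-bandwidth model makes the quoted switch figures concrete. Assuming the lower 5 µs MPI latency and the 2 GB/s per-direction link speed (the constants below are taken from the text; the model itself is the standard first-order approximation, not a measured characterisation of MPI-F), the message size at which half the peak bandwidth is attained is 10 kB:

```python
# Assumed figures from the text: ~5 us MPI latency, 2 GB/s per direction.
LATENCY_S = 5e-6        # startup latency in seconds
BANDWIDTH_BPS = 2e9     # link bandwidth in bytes per second

def transfer_time(nbytes):
    """First-order estimate of the time to send an nbytes message:
    startup latency plus pure transfer time."""
    return LATENCY_S + nbytes / BANDWIDTH_BPS

def half_bandwidth_message_size():
    """Message size n_1/2 at which half the peak bandwidth is reached,
    i.e. where latency equals the pure transfer time."""
    return LATENCY_S * BANDWIDTH_BPS
```

With these numbers, messages well below 10 kB are latency-dominated, which is why short-message performance depends far more on the 5–7 µs latency than on the 2 GB/s link speed.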
Measured Performances: