High Performance Computing

HPC Cluster Software



Running Programs as Jobs

Each unit of work in a cluster is called a job and, at the simplest level, a job encapsulates a single command to run a program. Jobs are scheduled by a piece of software called a batch system (or batch scheduler), which ensures that programs run only on compute nodes that are free, so that they do not conflict with each other by running on the same compute node(s) at the same time. If the number of compute nodes a job needs cannot be provided at a given time, the batch system holds the job in a queue until sufficient resources become available for it to run. There are many different batch systems from different suppliers, but Sun Grid Engine (SGE) is probably the most widely used at Liverpool and, for this reason, it is described in more detail on a separate page. A minimal job script is sketched below.
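By way of illustration, a minimal SGE job script might look like the following sketch. The job name, program name and parallel environment name here are placeholders, and the exact directives accepted vary from cluster to cluster, so the local documentation should always be consulted:

    #!/bin/bash
    # Run the job from the directory it was submitted from.
    #$ -cwd
    # Give the job a name (placeholder).
    #$ -N myjob
    # Request 4 slots in a parallel environment -- the environment
    # name ("smp" here) is site-specific and only an example.
    #$ -pe smp 4

    # The command the job encapsulates (placeholder program name).
    ./myprogram

Such a script would typically be submitted with qsub and its progress monitored with qstat.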

Parallel Processing

The key to speeding up the execution of computationally intensive programs is to split them into smaller parts and to execute these at the same time, i.e. concurrently or "in parallel". Where the different parts are able to communicate with each other via shared areas of memory, programs can run very efficiently on a single compute node. This is an example of shared memory parallelism and is supported by language extensions such as OpenMP (for C/C++ and Fortran). The commercial MATLAB software package can exploit shared memory parallelism in some of its low-level linear algebra functions to reduce run times for certain problems.
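As an illustration, here is a minimal sketch of shared memory parallelism in C using OpenMP. The array summation is purely illustrative; OpenMP-capable compilers usually enable the directives with a flag such as gcc's -fopenmp:

    /* A minimal sketch of shared memory parallelism with OpenMP.
       Compile with OpenMP enabled, e.g.: gcc -fopenmp sum.c */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void)
    {
        static double x[N];
        double sum = 0.0;
        int i;

        for (i = 0; i < N; i++)
            x[i] = 1.0;

        /* Each thread sums a share of the array; the reduction
           clause combines the per-thread partial sums safely. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++)
            sum += x[i];

        printf("sum = %f (up to %d threads available)\n",
               sum, omp_get_max_threads());
        return 0;
    }

Because all of the threads can read the shared array x directly, no explicit communication is needed; the reduction clause is what allows the partial sums to be combined without explicit locking.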

OpenMP works very well on a single node but cannot be used to run programs across multiple nodes, since nodes do not (usually) share any common areas of memory. This limits the number of processing elements (essentially cores) that can be put to work on a problem, and hence the overall speed-up. By contrast, distributed memory parallelism can be used to run programs across multiple nodes without the need for any shared memory areas. To do this, data needs to be exchanged between nodes as and when it is required, and this communication is supported by special programming libraries such as the Message Passing Interface (MPI). Many free-to-use scientific codes make use of MPI, and there are also commercial applications that can exploit distributed memory parallelism to speed things up.
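To illustrate the difference, the sketch below uses MPI in C to combine one value from each process on the lead process. It assumes an MPI installation with the usual wrapper compiler and launcher (e.g. mpicc and mpirun, whose names and options vary between implementations and sites):

    /* A minimal sketch of distributed memory parallelism with MPI.
       Compile with, e.g.: mpicc hello.c
       Run with, e.g.:     mpirun -np 4 ./a.out  (site-dependent) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        double local, total;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

        /* Each process contributes one value; MPI_Reduce exchanges
           the data over the network and sums it on rank 0. */
        local = (double) rank;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %f\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }

Unlike the OpenMP example, each MPI process has its own private memory, so partial values must be communicated explicitly; here MPI_Reduce performs both the communication and the combination, and the processes may be spread across any number of nodes.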

Software Support for Parallel Processing

Clusters provide a suite of software libraries to support shared memory and distributed memory parallelism, usually based on OpenMP and MPI respectively. Compilers and linkers capable of building this support into user-written software are also available. Many third-party applications have OpenMP and MPI support built in and only need run-time access to the libraries (MATLAB and parallel R are examples of this).