High Performance Computing

chadwick Software

Installed Software

The chadwick cluster supports a wide range of application software as well as compilers, libraries and profilers used to build and optimise application software. Applications need to be run on the compute nodes of the cluster by submitting jobs to the Sun Grid Engine (SGE) job scheduler. If you are new to this form of batch processing you can find general information on SGE here as well as detailed information specific to chadwick on this page.
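If you prefer to write your own submission file rather than use the run scripts described below, a minimal SGE job script looks something like the following sketch. The parallel environment name ("mpi"), core count and application command here are illustrative placeholders; consult the chadwick-specific SGE pages for the correct values.

```shell
#!/bin/bash
# Minimal illustrative SGE job script -- the parallel environment name
# ("mpi") and the core count are hypothetical; check the chadwick SGE
# documentation for the settings appropriate to this cluster.
#$ -cwd             # run the job from the current working directory
#$ -N myjob         # job name, as shown by qstat
#$ -pe mpi 16       # request 16 cores (hypothetical PE name)

module load NAMD    # set up the application environment on the compute node
# ... application command line goes here ...
```

A script like this would be submitted with qsub myscript.sh and monitored with qstat.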

To keep the chadwick environment as straightforward as possible, modules are used to set up the search paths and environment variables needed to access applications. More information on how to use modules is given below.

For some applications we have created "run scripts" which carry out most of the steps needed to get applications running on the compute nodes via the SGE scheduler. In most cases all you need to do is specify the required number of cores and any input or output files. More information is available below.

A comprehensive (although possibly not exhaustive) list of installed software is given below:


Bioinformatics Software

application                             version(s)  access via                                    licensed
hmmer2                                  2           modules
tebreak / various bioinformatics tools  --          modules / /opt/software/local/bioinformatics

Chemistry Software

application  version(s)  access via                                        licensed
CASTEP       6.0.1       /opt/software/local/chemistry/CASTEP-6.0.1
CP2K         2.5.1       /opt/software/local/chemistry/cp2k/cp2k-2.5.1
Gaussian     9*, 16      modules
NAMD         2.11        modules
ORCA         3.0.3       run scripts / /opt/software/local/chemistry/orca
Spartan      14          /opt/software/local/chemistry/spartan             X
VASP         5.3, 5.4.1  /opt/software/local/chemistry/VASP

Engineering Software

application        version(s)                       access via                                             licensed
Abaqus             6.12-3, 6.13-3, 6.14-2*, 6.14-5  modules/run scripts                                    X
Ansys ICEM         16.2                             modules
Fluent             14, 14.5, 15.0*, 16, 16.2        modules                                                X
Fluent Commercial  14, 16.2                         modules                                                X
MATLAB             R2016a, R2017a*                  modules/run scripts                                    X
MSC NASTRAN        2013.1                           /opt/software/local/Engineering/msc/MSC_Nastran/20131  X
MSC PATRAN         2014.1                           modules                                                X
OpenFOAM           3.0.1                            modules
tecplot            14.0.2*, 2017                    modules

Scientific Computation Software

application version(s) access via licensed
PETSc 3.5.2 modules

Compilers and Profilers etc

application version(s) access via licensed
GNU gcc compiler 4.8, 4.9*, 5.2 modules
Intel compilers 12.1, 13, 14, 15* modules
PAPI 5.2, 5.4* modules
NAG Fortran compiler 6.0, 6.1* modules
PGI compiler 2013, 2014, 2015* modules
Periscope Tuning Framework (PTF) ?? modules
Python interpreter 2.7*, 3.3 modules
Score-P profiler 2.0 modules
Sun Studio compilers 12.4 modules
Threadspotter profiler 2012.1.1 modules

Libraries

application version(s) available via licensed
Maths Kernel Library 11.3.1 modules
MPICH 3.1 modules
nVidia CUDA 7.5 modules
openMPI 1.4, 1.5.3, 1.6.5* modules

* indicates the default version; click on the links for more information.

Using modules

On chadwick, modules provide a simple way of setting up the environment needed to run applications - this includes the search path and the library load path. You can see which modules are available by using the command module avail, e.g.:

$ module avail

--------------------------------------------- /usr/share/Modules/modulefiles -----------------------------------------
dot         module-git  module-info modules     null        use.own

---------------------------------------------------- /etc/modulefiles ------------------------------------------------
abaqus                                 intel-32bit-mpi/intel-mpi-4.0.3        openmpi-1.5.3-x86_64
compat-openmpi-x86_64                  intel-32bit-mpi/intel-mpi-4.1.1        openmpi-x86_64
Fluent                                 intel-mpi/intel-mpi-2017.2.174         papi/5.2
Fluent14                               intel-mpi/intel-mpi-4.0.3.008(default) papi/5.4
Fluent14.5                             intel-mpi/intel-mpi-4.1.1              patran
Fluent16                               intel-mpi/intel-mpi-4.1.3              petsc/3.5.2
Fluent16.2                             ls-dyna/MPP                            petsc-openmpi/3.5.2
FluentCommercial14                     ls-dyna/SMP(default)                   pgi/2013
FluentCommercial16.2                   MATLAB/R2016a                          pgi/2014
Fluent-standard                        MATLAB/R2017a(default)                 pgi/2015(default)
gaussian                               mkl/mkl-13(default)                    ptf
gaussian16                             mkl/mkl-14                             python/2.7(default)
gcc/4.8                                mpi/openmpi-x86_64                     python/3.3
gcc/4.9(default)                       mpi/openmpi-x86_64_18                  scorep/2.0
gcc/5.2                                mpich-x86_64                           sunstudio
hidden                                 mvapich2-x86_64                        tau
hmmer2                                 nag-fortran/6.0                        tebreak
ICEM-desk162                           nag-fortran/6.1(default)               tecplot
ICEM-HPC162                            NAMD                                   tecplot2017
ICEM-standard                          nvidia                                 telemac/liv-v6p2r1(default)
intel/intel-12.1                       oasys                                  telemac/v7p1r1
intel/intel-13                         open64                                 threadspotter
intel/intel-14                         openfoam/3.0.1
intel/intel-15(default)                openmpi-1.4-x86_64

------------------------------------------------ /opt/modules/modulefiles --------------------------------------------
bullxde/3.1                  bullxmpi/bullxmpi-1.2.8.4    oscar-modules/1.0.3(default)
        

In some cases there are multiple versions of the same software; these are listed in the form application/version. For example, in the case of the gcc compiler there are three versions, namely 4.8, 4.9 and 5.2. To set up the environment for a particular application, use the module load command, e.g.

$ module load NAMD
        
This would set up the environment necessary to access the NAMD chemistry software. To remove these settings, the module unload command can be used e.g.
$ module unload NAMD
        
This would remove the environment variables and search path changes used for NAMD.

Where there are multiple versions of the same software you can specify the desired version number or omit it to get the default version. For example this would set up the environment for gcc version 4.8:

$ module load gcc/4.8
        

while this would load the default version (here 4.9):

 
$ module load gcc
        

To remove all of the environment settings introduced using modules and return to your ordinary login environment, use the module purge command.
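You can also check which modules are currently loaded at any point in a session with the module list subcommand, e.g.:

```shell
$ module load gcc/4.8
$ module list     # lists the modules currently loaded in this session
$ module purge    # removes them all again
```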

You can see the effect of the module command by examining the search path e.g.:

$ echo $PATH
/opt/sge/bin:/opt/sge/bin/lx-amd64:/usr/lib64/ccache:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/lib64/alliance/bin:/home/smithic/bin
$ module load NAMD
$ echo $PATH
/opt/software/local/chemistry/NAMD_2.11_Linux-x86_64-ibverbs:/usr/lib64/openmpi/bin:/opt/sge/bin:/opt/sge/bin/lx-amd64:/usr/lib64/ccache:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/lib64/alliance/bin:/home/smithic/bin
$ module purge
$ echo $PATH
/opt/sge/bin:/opt/sge/bin/lx-amd64:/usr/lib64/ccache:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/lib64/alliance/bin:/home/smithic/bin
        

Using run scripts

A number of "run scripts" have been created which allow users to submit jobs to the SGE scheduler without having to create their own job submission files. A quick command summary of each is given below (just enter the command without any options to see full details):


Abaqus


run_abaqus
run_abaqus613
run_abaqus613_debug
run_abaqus614
run_abaqus_safe
run_abaqus-inplace
run_abaqus_memory
Usage

 run_abaqus... c jobname input_file [other Abaqus job options] 

where  

c             - number of cores per node; c can be
                between 1 and 16 
jobname       - the abaqus jobname - also used by Grid Engine
input_file    - the input file name. The script will prepend
                the current working directory to a file name
                if an absolute path is not given. 
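For example (the job name and input file here are hypothetical), a 16-core Abaqus job might be submitted with:

```shell
# Submit a 16-core Abaqus job; "mybeam" and mybeam.inp are placeholder names
$ run_abaqus 16 mybeam mybeam.inp
```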

ORCA


run_orca
run_orca8G
run_orca_mass
Usage

 run_orca... N input_file [output_file] 

where  
N             - the number of cores required for this run. N must be
                a multiple of 4 for parallel execution; 1 for serial
input_file    - the input file name. The script will prepend
                the current working directory to a file name
                if an absolute path is not given
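For example (input and output file names hypothetical), a parallel and a serial ORCA run might be submitted with:

```shell
# 8 cores (a multiple of 4, as required for parallel runs); placeholder file names
$ run_orca 8 water.inp water.out
# serial run on a single core
$ run_orca 1 water.inp
```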

Gaussian


run_g09
run_g09_memory
Usage

 run_g09... filename [jobname] [test | runtimehours]
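For example (the input file and job name here are hypothetical), following the usage line above:

```shell
# Submit a Gaussian job using the default job name; benzene.com is a placeholder
$ run_g09 benzene.com
# Submit with an explicit job name and a run-time limit in hours
$ run_g09 benzene.com benzjob 24
```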

MATLAB

See separate page on MATLAB applications.