Gromacs is a versatile package for molecular dynamics simulations, which solves the Newtonian equations of motion for systems with hundreds to millions of particles. Gromacs has been primarily designed for biochemical molecules (such as proteins, lipids and nucleic acids) but has also been used for research on non-biological systems, e.g. polymers.
This guide shows how to
- prepare a Gromacs job submission script and
- submit the Gromacs job.
## Gromacs on ARC clusters
Several versions of Gromacs are installed on the ARC clusters, most with single precision support only. Gromacs is managed through the module utility: use `module avail gromacs` to list the available versions, and `module load` to load the one appropriate for your job.
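For example, a typical session to find and load a version looks like the following sketch (the version string is illustrative; use one reported by `module avail` on your cluster):

```shell
# List all Gromacs installations known to the module system
module avail gromacs

# Load a specific version (illustrative name -- pick one from the list above)
module load gromacs/5.1.4

# Check which executable the module has put on the PATH
which gmx_mpi
```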
## Running a Gromacs job
Here is an example of a submission script for a multi-node Gromacs job, in a Torque version for arcus and a Slurm version for arcus-b:
The scripts enable_arcus_mpi.sh and enable_arcus-b_mpi.sh define the environment variable MPI_HOSTS, which is used by mpirun to launch Gromacs on the distributed resources. In this example, Gromacs takes the file example.tpr as its input and runs for at most 0.8 of an hour (48 minutes).
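As a concrete illustration of the setup described above, a Slurm submission script for arcus-b might look like the following sketch (node and core counts, the job name, and the wall time are assumptions to adapt to your job):

```shell
#!/bin/bash
# Sketch of a multi-node Slurm submission script for arcus-b.
# Node/core counts, job name and wall time are illustrative assumptions.

#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --job-name=gromacs-example
#SBATCH --time=01:00:00

module load gromacs

# Defines MPI_HOSTS for mpirun, as described in the text
. enable_arcus-b_mpi.sh

# -maxh 0.8 stops the run cleanly after 0.8 of an hour
mpirun $MPI_HOSTS gmx_mpi mdrun -s example.tpr -maxh 0.8
```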
Note: Gromacs 4.5.x had a large number of independent executables. These were brought together into a single executable in version 5.1.x, which covers all the old functionality and is controlled via extra command-line parameters. For example, the command `mdrun -s example.tpr` in 4.6.x became `gmx mdrun -s example.tpr` in 5.1.x and newer.
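The same pattern applies to the other stand-alone tools: under the 5.x command-line unification, the old executables become subcommands of the single `gmx` driver (mapping shown here is illustrative; run `gmx help` for the full list):

```shell
# Illustrative 4.x -> 5.x command mapping
#   grompp   ->  gmx grompp
#   mdrun    ->  gmx mdrun
#   trjconv  ->  gmx trjconv
#   g_energy ->  gmx energy
```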
Also, here is an example of a Slurm submission script for Gromacs on the GPU partition of arcus-b. This job makes use of the GPU acceleration features in Gromacs.
```shell
module load gromacs

# Launch the MPI build of Gromacs; mdrun picks up the GPUs available on each node
mpirun $MPI_HOSTS gmx_mpi mdrun -s example.tpr \
                                -noconfout -resethway \
                                -nsteps 100000 -testverlet
```
The example above runs the job across 2 nodes, using 2 MPI processes per node. The number of MPI processes per node has to match the number of GPUs available on each node.
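For completeness, the Slurm header that the fragment above would sit under might look like this sketch (the partition and gres names are assumptions; check the cluster documentation for the actual values):

```shell
#!/bin/bash
# Sketch of the Slurm header for the GPU job above; partition and
# gres specifications are assumptions -- verify against the cluster docs.

#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2     # one MPI process per GPU on each node
#SBATCH --gres=gpu:2            # request 2 GPUs per node (assumed name)
#SBATCH --partition=gpu         # assumed GPU partition name
#SBATCH --time=01:00:00
```

The job body then follows as in the fragment above: load the Gromacs module and launch `gmx_mpi mdrun` with mpirun.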