Application Guide - Gromacs

Introduction

Gromacs is a versatile package for molecular dynamics simulations, which solves the Newtonian equations of motion for systems with hundreds to millions of particles.  Gromacs has been primarily designed for biochemical molecules (such as proteins, lipids and nucleic acids) but has also been used for research on non-biological systems, e.g. polymers.

The guide shows how to

  • prepare a Gromacs job submission script and
  • submit the Gromacs job.

Gromacs on ARC clusters

Several versions of Gromacs are installed on the ARC clusters, mostly with single precision support only.  Gromacs is managed through the module utility: use module avail gromacs to list the available modules and module load to load the appropriate version.
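
For example, to see what is installed and load a version (the exact version strings reported by module avail vary between clusters, so check the output before loading; the version below is only an illustration):

module avail gromacs           # list the installed Gromacs modules
module load gromacs            # load the default version
# or load a specific version, e.g. (illustrative version string):
# module load gromacs/5.1.4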


Running a Gromacs job

Here is an example of a Slurm submission script for a multi-node Gromacs job on arcus-b:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=01:00:00
#SBATCH --job-name=testGromacs
module purge
module load gromacs
. enable_arcus-b_mpi.sh
mpirun $MPI_HOSTS gmx_mpi mdrun \
-s example.tpr -maxh 0.80


The scripts enable_arcus_mpi.sh and enable_arcus-b_mpi.sh define the environment variable MPI_HOSTS, which is used by mpirun to launch Gromacs on the distributed resources allocated to the job.  In this example, Gromacs reads the input file example.tpr and runs for at most 0.8 hours (48 minutes), as requested with the -maxh option.
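
The script is submitted to the scheduler with sbatch.  Assuming it is saved as submit_gromacs.sh (the filename is just an example), the job is submitted and monitored as follows:

sbatch submit_gromacs.sh       # submit the job script to Slurm
squeue -u $USER                # check the state of your jobs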

Note: the Gromacs 4.x releases had a large number of independent executables.  These were brought together into a single executable in version 5.1.x, which covers all the old functionality and is controlled via sub-commands and extra command line options.  For example, the command

mdrun_mpi


in 4.6.x became

gmx_mpi mdrun


in 5.1.x and newer.
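
The same renaming applies to the other tools.  As an illustration, preprocessing a run input file with grompp (the input file names below are only examples) looks like this in the two generations:

# 4.x style: each tool is a separate executable
grompp -f md.mdp -c conf.gro -p topol.top -o example.tpr

# 5.x and later: the same tool is a sub-command of gmx
gmx grompp -f md.mdp -c conf.gro -p topol.top -o example.tpr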

Also, here is an example of a Slurm submission script for Gromacs on the GPU partition of arcus-b.  This job makes use of the GPU acceleration features in Gromacs.

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --job-name=gmxGPU
#SBATCH --gres=gpu:2
#SBATCH --time=20:00:00
module purge
module load gromacs
export OMP_NUM_THREADS=8
. enable_arcus_mpi.sh
mpirun $MPI_HOSTS gmx_mpi mdrun -s example.tpr \
       -noconfout -resethway \
       -nsteps 100000 -testverlet


The example above runs a job across 2 nodes, with 2 MPI processes per node and 8 OpenMP threads per process (set via OMP_NUM_THREADS).  The number of MPI processes per node has to match the number of GPUs requested per node, so that each process is assigned one GPU.
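
By default Gromacs maps MPI ranks to GPUs automatically, but the mapping can also be made explicit with the -gpu_id option.  A minimal sketch, assuming 2 GPUs per node with device ids 0 and 1:

# Assign GPU 0 to the first rank and GPU 1 to the second rank on each node
mpirun $MPI_HOSTS gmx_mpi mdrun -s example.tpr -gpu_id 01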

Written by IT Services. Latest revision 28 March 2017