| Name | APPS/CHEM/GROMACS-MPI-3.3.3 |
|---|---|
| Description | Gromacs: fast, free and flexible MD |
| Status | Alpha, interface might change based on user suggestions |
| Last update | 2008-05-06 |
Taken from http://www.gromacs.org: GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.
Currently only this development version is available.
The runtime environment ensures that mdrun_mpi can be executed simply by including the following line in the job script:
mpirun $MPIARGS mdrun_mpi
The MPIARGS environment variable is set by the runtime environment in a site-specific way.
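For example, at the sample PBS/Maui site described further down, MPIARGS is set to "-np <number of requested processes>", so for a 4-process job the line above effectively expands to:
mpirun -np 4 mdrun_mpi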
The job description file gromacs_np4.xrsl:
&
(executable=run_gromacs_mpi.sh)
(jobname=gromacs_np4)
(stdout=std.out)
(stderr=std.err)
(gmlog=gridlog)
(cputime=60)
(memory=1000)
(disk=4)
(&
(runtimeenvironment>=APPS/CHEM/GROMACS-MPI-3.3.3)
)
(inputfiles=
("topol.tpr" "topol.tpr.np4")
)
(count=4)
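A minimal sketch of submitting this job with the classic ARC command-line client (assuming ngsub is available, and that the xRSL file above, the job script shown below and a run input file topol.tpr.np4 prepared for 4 processes are all in the working directory):
ngsub -f gromacs_np4.xrsl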
The job script run_gromacs_mpi.sh:
#!/bin/sh
echo "Hello parallel Gromacs!"
mpirun $MPIARGS mdrun_mpi
exitcode=$?
echo "Bye parallel Gromacs!"
exit $exitcode
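Once submitted, the job can be monitored and its output retrieved with the usual ARC client commands; a sketch, assuming the classic client and the job ID printed by ngsub:
ngstat <jobid>   # check the job status
ngget <jobid>    # retrieve std.out, std.err and the gridlog directory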
See the separate installation guide.
Here is a sample runtime environment script for PBS/Maui. The most complicated part is translating the requested number of processes into the number of nodes and cores per node; a worked example of this calculation follows the script.
#!/bin/bash
#
# Nordugrid ARC runtime environment script for Gromacs parallel runs.
#
# shared directory for application installation
application_base_path="/grid/nordugrid-arc/appl/"
# version
gromacs_version="3.3.3"
case "$1" in
0 )
# Number of cores per node (ppn property)
coresPerNode=8
# Number of processes requested by the user
numRequestedCores=$joboption_count
# check if the count is a multiple of the cores per node
coremod=$((numRequestedCores%coresPerNode))
# calculate the number of needed nodes
if [ $coremod -ne 0 ]; then
numNodes=$(( (numRequestedCores/coresPerNode)+1 ))
else
numNodes=$((numRequestedCores/coresPerNode))
fi
echo "$numRequestedCores cores, $coresPerNode cores per node, $numNodes nodes" 1>&2
# rewrite the count to get the correct amount of nodes
declare joboption_count=$numNodes
# append the number of cores to the joboptions
i=0
# we need an indirect reference here, so this looks a bit nasty
eval jonp=\${joboption_nodeproperty_$i}
while [ -n "$jonp" ] ; do
(( i++ ))
eval jonp=\${joboption_nodeproperty_$i}
done
declare joboption_nodeproperty_$i="ppn=$coresPerNode"
# write the number of cores to a file in the session directory
# to relay the info to the process on the worker node
echo $numRequestedCores > $joboption_directory/.numcores
;;
1 )
# set the openmpi environment
export PATH=/pack/openmpi-1.2.5-gnu-ib/bin/:$PATH
export LD_LIBRARY_PATH=/pack/openmpi-1.2.5-gnu-ib/lib/:$LD_LIBRARY_PATH
# and the gromacs environment
source $application_base_path/gromacs/$gromacs_version/bin/GMXRC
# read back the number of processes written in stage 0;
# exporting NSLOTS itself is optional (handy for debugging)
export NSLOTS=`cat .numcores`
# set the mpirun arguments (mandatory)
export MPIARGS="-np $NSLOTS"
;;
2 )
# no cleanup necessary
;;
* )
# The calling argument is wrong or missing.
# If the call was made from NorduGrid ARC, this is considered
# an error. If this script is also used to initialize the MPI
# environment for local jobs on the cluster, raising an error
# here could be inappropriate.
return 1
;;
esac
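As a worked example of the node calculation in stage 0, a hypothetical request of 12 processes on 8-core nodes rounds up to 2 nodes; the snippet below is just that arithmetic extracted from the script:
coresPerNode=8
numRequestedCores=12                            # e.g. (count=12) in the xRSL
coremod=$((numRequestedCores%coresPerNode))     # 12 % 8 = 4, not an even multiple
if [ $coremod -ne 0 ]; then
  numNodes=$(( (numRequestedCores/coresPerNode)+1 ))   # 12/8 = 1, rounded up to 2
else
  numNodes=$((numRequestedCores/coresPerNode))
fi
echo "$numNodes nodes"                          # prints "2 nodes"
In this case the batch system is asked for two full nodes (16 cores with ppn=8), while mpirun still starts only the 12 requested processes, because .numcores and hence NSLOTS keep the original count.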
Contact olli.tourunen@csc.fi if you have any grid-use-specific questions. Contact your local Gromacs guru for MD-related questions.