Here is an example job script for the UCL clusters that contains a number of errors and issues:
#!/bin/bash
# Batch script to run LAMMPS 29th September 2021 Update 2 GNU basic + FFTW version on Kathleen
# GNU 10.2.0 and OpenMPI module mpi/openmpi/4.0.5/gnu-10.2.0
# 1. Force bash as the executing shell.
#$ -S /bin/bash
# 2. Request twelve hours of wallclock time (format hours:minutes:seconds).
#$ -l h_rt=12:0:0
# 3. Request 1 gigabyte of RAM per process.
#$ -l mem=160G
# 4. Set the name of the job.
#$ -N LAMMPS-FFTW-rhodo80-12
# 5. Select the MPI parallel environment with 80 processes (2 nodes)
#$ -pe mpi 80
# 6. Set the working directory to somewhere in your scratch space. This is
# a necessary step with the upgraded software stack as compute nodes cannot
# write to $HOME.
# Replace "<your_UCL_id>" with your UCL user ID :)
#$ -wd /home/<your_UCL_id>/Scratch/LAMMPS_examples/bench
module load beta-modules
module load gcc-libs/10.2.0
module load compilers/gnu/10.2.0
module load mpi/openmpi/4.0.5/gnu-10.2.0
module load python3/3.9-gnu-10.2.0
module load fftw/3.3.9/gnu-10.2.0
module load lammps/29sep21up2/basic-fftw/gnu-10.2.0
mpirun -np $NSLOTS lmp_mpi -var x 8 -var y 8 -var z 5 -in in.rhodo.long.scaled -log in.rhodo.long.log-$JOB_ID
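One of the issues above is the memory request: comment 3 says "1 gigabyte of RAM per process", but the directive asks for mem=160G, and on the UCL clusters mem is a per-slot (per-process) request, not a per-job one. A quick sanity check of that arithmetic, assuming Kathleen nodes have roughly 40 cores and 192 GB of RAM each (figures worth confirming against the current hardware docs):

```shell
#!/bin/bash
# Sanity-check a per-process mem request against assumed node capacity.
# Assumed figures for a Kathleen node (verify in the UCL docs):
cores_per_node=40
ram_per_node_gb=192
requested_per_process_gb=160   # the mem=160G from the script above

# Largest per-slot request a fully packed node could ever satisfy
# (integer division: 192 / 40 = 4).
max_per_process_gb=$(( ram_per_node_gb / cores_per_node ))
echo "max schedulable per-process request: ${max_per_process_gb}G"

if (( requested_per_process_gb > max_per_process_gb )); then
  echo "mem=${requested_per_process_gb}G per process can never be scheduled"
fi
```

With 80 slots the scheduler would need 80 × 160 GB spread over two nodes, so the job would sit in the queue indefinitely; a value like mem=1G would match both the comment and the node capacity.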