Description
The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling.
Local support is not available. The package is supported by its developers through their documentation site. VASP is licensed software and licenses are issued to individual research groups. Each group must build and maintain its own copy of the code.
Software Category: chem
For detailed information, visit the VASP website.
Available Versions
To find the available versions and learn how to load them, run:
module spider vasp
The output of the command shows the available VASP module versions.
For detailed information about a particular VASP module, including how to load it, run the module spider command with the module's full version label. For example:
module spider vasp/5.4.4
| Module | Version | Module Load Command |
| ------ | -------- | ------------------------- |
| vasp | 5.4.4 | module load vasp/5.4.4 |
| vasp | grads544 | module load vasp/grads544 |
Building VASP
VASP is typically built with the Intel compiler and relies on Intel's Math Kernel Library (MKL). VASP users should read our documentation for this compiler before beginning. VASP 5.4.1 and later provide a sample makefile.include.linux_intel that can be modified for local requirements and for different MPI distributions.
We recommend that users copy makefile.include.linux_intel from the arch subdirectory to makefile.include in the top-level VASP directory, i.e.
cp arch/makefile.include.linux_intel ./makefile.include
This makefile.include is preconfigured to use the Intel compiler, IntelMPI, and the Intel MKL libraries. We recommend a few local modifications:
- VASP is written primarily in Fortran, and on the HPC system the compiler option -heap-arrays should be added to makefile.include. This can be added to the FFLAGS variable, e.g. FFLAGS = -heap-arrays -assume byterecl -w
- It is advisable to change the SCALAPACK library name to -lmkl_scalapack_lp64 (the -l linker flag takes the bare library name, with no .so or .a suffix).
To use OpenMPI, the user must also change the Fortran compiler to FC = mpif90 and the BLACS library to -lmkl_blacs_openmpi_lp64, while leaving SCALAPACK = -lmkl_scalapack_lp64.
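For reference, the relevant portion of a modified makefile.include for an OpenMPI build might look like the following. This is an illustrative excerpt only; the variable names follow the 5.4.x makefile.include.linux_intel template (which appends $(BLACS) to SCALAPACK), and all other template settings are left unchanged.
# Illustrative excerpt of a modified makefile.include for an OpenMPI build
FC = mpif90                        # keep the template's mpiifort for IntelMPI
FFLAGS = -heap-arrays -assume byterecl -w
BLACS = -lmkl_blacs_openmpi_lp64   # -lmkl_blacs_intelmpi_lp64 for IntelMPI
SCALAPACK = -lmkl_scalapack_lp64 $(BLACS)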
Installation details can be found on the VASP wiki: 5.x, 6.x.
Example Slurm script
To run VASP, the user prepares a group of input files with predetermined names. The path to the vasp binary must be provided to the Slurm task launcher srun; in the example below we assume it is in a directory bin under the user's home directory. In this example, all input and potential files must be located in the same directory as the Slurm job script.
#!/bin/bash
#SBATCH --account=my_acct        # replace with your allocation account
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=40     # one MPI task per core
#SBATCH --time=3-00:00:00        # walltime of 3 days
#SBATCH --output=thermo.out
#SBATCH --partition=parallel

module load intel

srun ~/bin/vasp_std
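Assuming the script above is saved as vasp.slurm (a placeholder name), submit it from the directory containing the VASP input files:
sbatch vasp.slurm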
Known issues
Slow CHGCAR file write
We have received a few reports that a VASP job may occasionally appear to hang at the end during the "writing wavefunction" step. The slowness actually occurs while writing CHGCAR rather than WAVECAR (the cause of which is unclear). You can disable the CHGCAR write in INCAR:
LCHARG = .FALSE.
Alternatively, if you set up VASP jobs using the ASE Python package, you can disable CHGCAR writing with:
lcharg = False
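For example, when constructing the ASE Vasp calculator (a minimal sketch; xc='PBE' is just a placeholder for your usual settings):
from ase.calculators.vasp import Vasp

# lcharg=False suppresses the CHGCAR write via the generated INCAR
calc = Vasp(xc='PBE', lcharg=False)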
vasp_gam on AMD nodes
When running vasp_gam on AMD nodes (i.e., all nodes in the parallel partition and the Afton nodes in the standard partition), ScaLAPACK must be disabled or else your job may hang at the first electronic step. In INCAR:
LSCALAPACK = .FALSE.
The ASE Python package disables ScaLAPACK through the line:
lscalapack = False
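As with lcharg above, this is passed as a keyword to the Vasp calculator, and the two workarounds can be combined (again a minimal sketch with placeholder settings):
from ase.calculators.vasp import Vasp

# lscalapack=False disables ScaLAPACK via the generated INCAR
calc = Vasp(xc='PBE', lcharg=False, lscalapack=False)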
Alternatively, if your job fits on 40 cores or fewer, you can leave ScaLAPACK enabled and run the job in the standard partition with the rivanna constraint so that it will not land on an AMD node:
#SBATCH -p standard
#SBATCH -C rivanna
All ASE tags for the INCAR can be found in the GitHub repository for ASE's VASP calculator.