FORCE-BASED OPTIMIZERS

The NEB and min-mode following (dimer/Lanczos) saddle point finding methods use a force projection to direct the optimizers towards minimum energy paths and saddle points. Because of this modification, the energy is no longer consistent with the force being optimized, so only optimizers based solely upon the force (and not the energy) can be used. The quasi-Newton and quick-min optimizers built into VASP (IBRION=1 and 3, respectively) are both force-based, but the conjugate-gradient method (IBRION=2) is not.

Here, we present a set of optimizers that are all force-based, so they can be used with the NEB and min-mode following methods. To use them, set IBRION=3 and POTIM=0 in the INCAR to disable the built-in optimizers. The IOPT parameter then selects one of the following methods.
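
For example, a minimal sketch of these INCAR settings (here selecting LBFGS) might look like:

  IBRION = 3   ! molecular dynamics, used here only as a shell for the optimizer
  POTIM = 0    ! zero time step so that VASP does not move the ions
  IOPT = 1     ! select the LBFGS optimizer described below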

This version of quick-min is essentially the same as the one implemented in VASP. The conjugate-gradient method is different in that it uses a Newton's line optimizer and is entirely force-based. The LBFGS is also different in that the NEB can be optimized globally, instead of image-by-image. The FIRE optimizer is an interesting new optimizer which has similarities to quick-min but tends to be faster. The steepest descent method is provided primarily for testing. We recommend using CG or LBFGS when accurate forces are available; accurate forces are essential for evaluating curvatures. For high forces (far from the minimum) or inaccurate forces (close to the minimum), the quick-min or FIRE methods are recommended. These two methods do not rely on curvatures and tend to be less aggressive and better behaved, but also less efficient, than CG/LBFGS.

A paper discussing the performance of these different optimizers with the NEB: D. Sheppard, R. Terrell, and G. Henkelman, "Optimization methods for finding minimum energy paths", J. Chem. Phys. 128, 134106 (2008).

The min-mode following methods can only be used with one of these optimizers. If IOPT is not set in a dimer run, the job will die with a corresponding error message.
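
For illustration, a dimer run might combine the dimer method with one of these optimizers as follows (assuming ICHAIN=2 selects the dimer method, as in the VTST code):

  ICHAIN = 2   ! dimer method (assumed VTST setting)
  IBRION = 3   ! disable the built-in optimizers
  POTIM = 0    ! zero time step
  IOPT = 2     ! conjugate gradient (any of the optimizers below can be chosen)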


Optimizer input parameters

The following parameters are read from the INCAR file.

(IOPT = 0) Use the VASP optimizer specified by IBRION (default)

(IOPT = 1) LBFGS = Limited-memory Broyden-Fletcher-Goldfarb-Shanno

(IOPT = 2) CG = Conjugate Gradient

(IOPT = 3) QM = Quick-Min

(IOPT = 4) SD = Steepest Descent

(IOPT = 7) FIRE = Fast Inertial Relaxation Engine

(IOPT = 8) ML-PYAMFF = Machine learning (PyAMFF)


Required Parameters

For these parameters, the listed values must be used:

Parameter   Value   Description
IBRION      3       Specify that VASP do molecular dynamics (with a zero time step)
POTIM       0       Zero time step so that VASP does not move the ions

For the following parameters, the listed values are recommended and may be adjusted as desired:

Parameter   Recommended   Description
IOPT        3             Quick-min is the most beginner-friendly (the default is IOPT=0)
NSW         100           Number of ionic relaxation steps (see the VASP documentation on NSW)
EDIFFG      -0.01         Must be negative (see the VASP documentation on EDIFFG)
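
Putting the required and recommended values together, a typical INCAR fragment would be:

  IBRION = 3       ! required: molecular dynamics with a zero time step
  POTIM = 0        ! required: VASP does not move the ions
  IOPT = 3         ! quick-min, the most beginner-friendly optimizer
  NSW = 100        ! number of ionic relaxation steps
  EDIFFG = -0.01   ! converge when forces drop below 0.01 eV/Angstrom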


LBFGS Parameters (IOPT = 1)

Parameter    Default   Description
MAXMOVE      0.2       Maximum allowed step size for translation
ILBFGSMEM    20        Number of steps saved when building the inverse Hessian matrix
LGLOBAL      .TRUE.    Optimize the NEB globally instead of image-by-image
LAUTOSCALE   .TRUE.    Automatically determine INVCURV
INVCURV      0.01      Initial inverse curvature, used to construct the inverse Hessian matrix
LLINEOPT     .FALSE.   Use a force-based line minimizer for translation
FDSTEP       5E-3      Finite difference step size for the line optimizer
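
As a sketch, a global LBFGS optimization of an NEB using the defaults above would be set up as:

  IOPT = 1           ! LBFGS
  LGLOBAL = .TRUE.   ! optimize the band as a whole rather than image-by-image
  ILBFGSMEM = 20     ! number of steps kept for building the inverse Hessian
  MAXMOVE = 0.2      ! cap on the translation step size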


CG Parameters (IOPT = 2)

Parameter   Default   Description
MAXMOVE     0.2       Maximum allowed step size for translation
FDSTEP      5E-3      Finite difference step size used to calculate the curvature
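
For example, CG with its default settings made explicit:

  IOPT = 2        ! conjugate gradient
  MAXMOVE = 0.2   ! cap on the translation step size
  FDSTEP = 5E-3   ! finite difference step used to estimate the curvature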


QM Parameters (IOPT = 3)

Parameter   Default   Description
MAXMOVE     0.2       Maximum allowed step size for translation
TIMESTEP    0.1       Dynamical time step
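
Similarly, quick-min with its defaults spelled out:

  IOPT = 3         ! quick-min
  TIMESTEP = 0.1   ! dynamical time step
  MAXMOVE = 0.2    ! cap on the translation step size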


SD Parameters (IOPT = 4)

Parameter   Default   Description
MAXMOVE     0.2       Maximum allowed step size for translation
SDALPHA     0.01      Ratio between the force and the step size
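
If steepest descent is wanted for testing, a minimal setup (assuming, per the description above, that the step is the force scaled by SDALPHA):

  IOPT = 4         ! steepest descent (primarily for testing)
  SDALPHA = 0.01   ! step size = SDALPHA * force, capped by MAXMOVE
  MAXMOVE = 0.2    ! cap on the translation step size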


FIRE Parameters (IOPT = 7)

Parameter   Default   Description
MAXMOVE     0.2       Maximum allowed step size for translation
TIMESTEP    0.1       Dynamical time step
FTIMEMAX    1.0       Maximum dynamical time step allowed
FTIMEDEC    0.5       Factor to decrease dt
FTIMEINC    1.1       Factor to increase dt
FALPHA      0.1       Parameter that controls velocity damping
FNMIN       5         Minimum number of iterations before adjusting alpha and dt
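
A FIRE setup with the default dynamics parameters made explicit (the comments paraphrase the usual FIRE update rules):

  IOPT = 7         ! FIRE
  TIMESTEP = 0.1   ! initial dynamical time step
  FTIMEMAX = 1.0   ! dt never grows beyond this value
  FTIMEINC = 1.1   ! dt is multiplied by this factor while moving downhill
  FTIMEDEC = 0.5   ! dt is multiplied by this factor when the velocity turns uphill
  FALPHA = 0.1     ! strength of the velocity damping
  FNMIN = 5        ! steps to wait before alpha and dt are adjusted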


ML-PYAMFF Parameters (IOPT = 8, only in vtstcode6.3)

The ML-PyAMFF optimizer trains a Behler-Parrinello neural network (see Phys. Rev. Lett. 98, 146401 (2007) for details) and uses it as a surrogate potential energy surface on which a separate optimization is run to reach a local minimum or a saddle point. Because the model is retrained with an updated training set in each cycle, the number of DFT force calls required for the optimization decreases over time while the error is kept below a pre-set threshold.

Parameter         Default       Description
PYAMFF_MODEL      mlff.pyamff   Input file name for the neural network parameters
PYAMFF_CONV       GRADNORM      Convergence criterion for neural network training (GRADNORM or RMSE)
PYAMFF_ETOL       0.001         Energy RMSE tolerance for neural network training
PYAMFF_FTOL       0.01          Force RMSE tolerance for neural network training
PYAMFF_TOL        0.001         Gradient-norm tolerance for neural network training
PYAMFF_FCOEFF     1.0           Parameter that controls the contribution of the force loss to the training
PYAMFF_MAXEPOCH   2000          Maximum number of epochs for neural network training
PYAMFF_OPT        RPROP         Optimizer used for neural network training (RPROP, ADAM, or LBFGS)
PYAMFF_SWFTOL     0.05          Criterion for switching from the ML optimizer to LBFGS
PYAMFF_MAXMOVE    0.5           Maximum total step size for translation by the ML optimizer
PYAMFF_MAXITER    30            Maximum number of relaxation steps on the ML potential energy surface
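
As an illustration, an INCAR fragment selecting the ML-PyAMFF optimizer with several of its defaults made explicit might read:

  IOPT = 8                     ! ML-PyAMFF optimizer (vtstcode6.3)
  PYAMFF_MODEL = mlff.pyamff   ! neural network parameter file
  PYAMFF_CONV = GRADNORM       ! convergence criterion for the training
  PYAMFF_TOL = 0.001           ! gradient-norm tolerance
  PYAMFF_MAXEPOCH = 2000       ! cap on training epochs
  PYAMFF_MAXITER = 30          ! cap on relaxation steps on the ML surface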

ML-PyAMFF optimizer setup

The latest version (revision 195) requires a formatted input model file (set by PYAMFF_MODEL) to use the ML-PyAMFF optimizer. The formatted file needs the following information:

  1. Fingerprint types (Behler-Parrinello)

  2. Element types

  3. Minimum distance between elements

  4. Behler-Parrinello fingerprint parameters

  5. Neural network parameters

An example of the file can be found in the section below. If you are already a PyAMFF user, you can use your model output file (mlff.pyamff) as it is.

Example

ML-PyAMFF optimizer output

The OUTCAR file contains the details of the calculation. This information is prefixed with the ML-PyAMFF tag. There are two main kinds of information:

  1. Training information: the number of epochs used to optimize the model, together with the final loss, energy RMSE, and force RMSE values

  2. Machine-learned potential: the forces predicted by the machine-learned potential and the number of steps taken on it

NOTE from developers

The ML-PyAMFF optimizer was recently added to vtstcode6.3. It is under active development, so we welcome anyone to try it and report bugs! Please use our discussion forum to ask questions or report problems.