NEB convergence

Vasp transition state theory tools

sinfire

NEB convergence

Post by sinfire »

Hi all,

I'm doing a CI-NEB calculation. Just wanted to check if this procedure makes sense.

At first, I ran a back-to-back FIRE (EDIFFG = -1.0) -> L-BFGS (EDIFFG = -0.15) job using the default TIMESTEP parameter.

The FIRE stage was taking some time, and when I checked the nebef.pl output I noticed that one of the images had a very high force. Looking at the force "history", the force kept increasing with each step, even under FIRE. So I started a new job running FIRE with a smaller TIMESTEP (= 0.02) before handing it off to L-BFGS.

This time the FIRE job converged, and the L-BFGS job is in progress.
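
For reference, the INCAR settings for the two stages look roughly like this (a sketch rather than the full file; IMAGES = 5 is just a placeholder for my actual image count):

======================================
! common CI-NEB tags
IMAGES = 5          ! placeholder; match the number of intermediate images
LCLIMB = .TRUE.
SPRING = -5
IBRION = 3          ! hand the ionic steps to the VTST optimizers
POTIM  = 0

! stage 1: FIRE pre-relaxation
IOPT     = 7        ! FIRE
TIMESTEP = 0.02     ! reduced from the 0.1 default
EDIFFG   = -1.0     ! stop once the max force drops below 1.0 eV/Ang

! stage 2: restart from the stage-1 CONTCARs with L-BFGS
! IOPT   = 1
! EDIFFG = -0.15
======================================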

I have a question here: when I look at the max force of a particular image, some images show a sudden "spike" in the force for a few steps. Is this a common thing to happen? For instance,

======================================
(result from grep "FORCES:" OUTCAR | tail -n 10)
...
FORCES: max atom, RMS 1.455110 0.340077
FORCES: max atom, RMS 1.041318 0.200334
FORCES: max atom, RMS 0.882275 0.147660
FORCES: max atom, RMS 0.785884 0.113878
FORCES: max atom, RMS 0.948421 0.170291
FORCES: max atom, RMS 1.218201 0.242196
FORCES: max atom, RMS 9.328758 1.656945
FORCES: max atom, RMS 1.265518 0.220428
FORCES: max atom, RMS 1.073508 0.191102
FORCES: max atom, RMS 0.609962 0.137045
...
=======================================

You can see the force "spiked" to 9.32 and then dropped back down. Is it OK for this kind of thing to happen?
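
In case it is useful, this is roughly how I pull that history for each image (the 01-05 directory names are placeholders for however many images the run has):

======================================
# print the last few force lines from each image's OUTCAR
for d in 01 02 03 04 05; do
    echo "--- image $d ---"
    grep "FORCES:" "$d/OUTCAR" | tail -n 10
done
======================================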

Thanks in advance.
graeme
Site Admin

Re: NEB convergence

Post by graeme »

No, this is not good. The goal of the initial FIRE (or quickmin) calculation is to systematically reduce any high forces to about 1 eV/Ang. At this point, you can also check to make sure that the path is reasonable. Simply looking at a movie of the path and an energy profile can often reveal problems, or alternatively, show that things make sense. Then you can use any faster optimizer, such as LBFGS, to fully converge the path.
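
For example, the VTST scripts will generate both directly from the run directory (assuming the scripts are in your path; check the script headers if the arguments have changed):

======================================
nebmovie.pl    # writes movie.xyz from the images' POSCAR files (pass 1 to use the CONTCARs)
nebef.pl       # prints the force, energy, and relative energy of each image
======================================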

Forces jumping between 1 and 10 eV/Ang indicate an instability in the optimizer. You may need to run the FIRE calculation longer and/or lower the INVCURV parameter for LBFGS. While it is possible that this calculation still manages to converge, this is certainly not the behavior that you want.
sinfire

Re: NEB convergence

Post by sinfire »

Thank you for the answer, Prof. Henkelman.

Some additional questions:

1. What exactly does "run the FIRE calculation longer" mean? Are you suggesting lowering the time step and/or lowering the EDIFFG of the FIRE job?

2. As I look at the images, the distances between the starting geometries of the images were quite reasonable (< 0.4 Å), but during the run some of the distances have grown to 0.60 or 0.64 Å. Is the number of images sufficient as long as the distances between the starting POSCARs are small enough, or should I consider running with more images if the distances grow during the run? My guess is that the large distances are an artefact of the unstable optimizer, but I just want to be sure.

EDIT: 3. Oh, and does it make sense to tweak the INVCURV keyword while leaving LAUTOSCALE set to .TRUE.? Or should I also set LAUTOSCALE to .FALSE. when I tweak the INVCURV parameter?

Thanks.
graeme
Site Admin

Re: NEB convergence

Post by graeme »

1. By running the FIRE calculation longer, I meant lowering EDIFFG.

2. I do not think that you can use the distance between images to determine the appropriate number of images. The required number of images is really based upon the curvature of the path, and on making sure that the path can be approximated by a set of linear segments. It is certainly true that an unstable optimizer can increase the length of the path, especially if you don't have any frozen atoms or if there are soft modes in the system.

3. I think that LAUTOSCALE overrides the INVCURV keyword.
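
So if you do want to set the curvature by hand, something like the following should work (the 0.005 is only an example; the default INVCURV is 0.01):

======================================
IOPT       = 1         ! L-BFGS
LAUTOSCALE = .FALSE.   ! otherwise the inverse curvature is determined automatically and INVCURV is ignored
INVCURV    = 0.005     ! smaller than the default for more conservative steps
======================================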