I have been using the NEB technique, and on the whole it seems quite useful. However, sometimes the system reaches a minimum energy state of the expected order and then begins to diverge wildly, with the EBF value increasing toward infinity.
Now, this may simply be because the initial and final configurations are insufficient to provide a reasonable interpolation between them. Even so, it would be useful to add an error framework to the code that recognises when the EBF begins to diverge and outputs the value of the global minimum, instead of simply continuing until the pre-set timestep limit.
Has anyone already implemented code capable of this? Otherwise, I am curious to try developing some source code myself, and this seems like a relatively simple (and, for me, useful!) place to begin.
Any advice for a first-time code developer? (I have some limited experience coding in C++.)
I'm sure there are options/checks that could usefully be added
to NEB. My basic advice is: don't add code that changes
things that already work, affects their speed, or
obfuscates the way the code works.
The only change would be to implement a means of checking the minimization to avoid divergence to infinity; i.e., to actually identify the MEP with respect to the EBF by stopping the simulation if the EBF increases by, say, 10% or 20%. If dump files are being created using a universe command, they would then be output for this MEP state rather than for the final state.
This would not affect the NEB technique itself, except to add an additional stopping criterion for the FIRE or QuickMin minimizers.
What are your thoughts about this proposal?
The minimizers are distinct from NEB and can
be used without NEB, e.g. to perform a standard
minimization. So it's not clear to me why
the minimizer needs a new stopping criterion,
if NEB has formulated the minimization problem correctly.