Problems with min_style fire

I have had significant problems getting NEB calculations to converge for
a system I've been studying. I eventually tracked the problem to what I
suspect is a typo in min_fire.cpp. My question: should line 91, which
currently reads,
   double dtmax = TMAX * dt;
instead read
   double dtmax = TMAX * update->dt;
? The problem I'm having stems from the fact that dtmax (the limit on the
time step used for this minimizer iteration) is chosen to be less than
TMAX (currently 10.0) times the /current/ time step. Let's say I specify
   timestep 1.0E-4
in my input file, so in the first block of iterations it might choose dt = 0.001. Then
the next time it chooses dt = 0.01. Then 0.1. By the time the atoms move
enough that the time step might back off again, it's already moving the
atoms with dt = 1 ps, or something similar.

Perhaps this is intended behavior (and I should therefore NOT use the
FIRE algorithm for this type of calculation...), but I suspect not. The
following script reproduces the problem (note that Fe_mm.eam.fs is the
file from the potentials directory):

variable mpirank uloop 1000
units metal
atom_modify map array sort 0 0.0
lattice bcc 2.86
region box block 0 10 0 10 0 10
create_box 1 box
create_atoms 1 box
mass 1 56
pair_style eam/fs
pair_coeff * * Fe_mm.eam.fs Fe
create_atoms 1 single 5.25 5.5 5.0
thermo_modify flush yes
thermo 100
minimize 0 0 10000 10000
reset_timestep 0
timestep 0.001
fix 1 all neb 1.0
dump 3 all atom 1000 dump.breakme.${mpirank}
min_style fire
neb 0.0 1.0E-4 50000 20000 100 final.breakme

I added some debugging lines to show the time step it chose; the first
iteration chooses 0.01, the second block (at step 100) chooses 0.1,
the third block (at step 200) chooses dt = 0.5, and it aborts before
finishing step 300. Using quickmin seems to work, as does the change in
the source code above.


Karl D. Hammond
University of Tennessee, Knoxville

I don’t recall why Fire is setting dt = update->dt in init(), separate
from the iterate() call. Probably that is incorrect. However, I don’t
see what you mean by:

in my input file, so the first iteration it might choose dt = 0.001. Then
the next time it chooses dt = 0.01. Then 0.1. By the time the atoms move
enough that the time step might back off again, it’s already moving the
atoms with dt = 1 ps, or something similar.

This is all happening within NEB, which makes 2 calls to min->run()
which in turn calls MinFire::iterate(), separated by a call to MinFire::init().
So it seems both calls to iterate() should start out with the update->dt.
I.e. the final dt from the first call should not carry over to the 2nd call.
What am I missing?


It calls MinFire::iterate every time the NEB output gets printed, but only
calls MinFire::init once (well, actually twice: once before the "regular"
NEB, once before the climbing-image NEB).

The problem is that dtmax can then go up every time the NEB output is
printed, so if you have frequent NEB output, the time step can get quite
large under the right conditions.


Now I see … this is kind of an odd way for NEB to invoke iterate():
in chunks of steps, simply for output purposes.

It seems like the right thing to do is have Fire (or QMin)
operate as if they were running continuously, without
pausing for output. What if dtmax and alpha were not reset
at all at the beginning of iterate, but were set once in init() ??
Maybe last_negative should also be in that category (for
Fire and Qmin). I think alpha_final is not used, just there
for compatibility with the CG-style minimizers.

Does that solve your problem?


Hi all,

There seems to be a problem with USER-ATC, specifically in
ATC_TransferHardy.cpp. When there is a loop over the neighbours of each
particle, the NEIGHMASK is not applied, while this is the case in, e.g.,
compute_rdf.cpp. As a result, neighbour indices can become negative,
causing a segmentation fault when they are used to index into arrays.

This situation occurs six times in ATC_TransferHardy.cpp. The one in
ATC_TransferHardy::compute_pair_map() can be resolved by adding the line
   lammps_j &= NEIGHMASK;
straight after
   int lammps_j = firstneigh[lammps_i][j];
The segmentation fault then goes away. For the other five places in the
cpp file similar fixes can be applied.


Yes, I think that's exactly what the problem is, and moving those
parameters to init would solve it. I can't speak for whether it makes
sense to change anything else around, of course.



I’ll refer this to Reese. You must be running a model
with bonds, where the exclusion bond info is encoded
in the neigh list? If this is allowed by ATC, then
it does need to apply the neighbor masks to discard
those extra bits.


Hi Bart
The version of ATC that is currently supplied with LAMMPS doesn't support bonded interactions and, I think, predates NEIGHMASK.
We have a development version that does. If you can send me a simple input file I can make sure the new code works for you.


Reese Jones

I think this was a simple fix. I just added the
NEIGHMASK line a few places in ATC_TransferHardy.cpp
and ensured that lmptype.h, which defines the bit mask, is included
in LammpsInterface.h.


Thank you both. I am indeed running a model with bonds. Reese, I will send you the input files, in case you want to test it more.

Kind regards,

I just posted an 8Aug patch for this -
please try it out and see if your convergence
problem goes away …


I think that solved it. The test case I still had from before doesn't
converge, but I think that's my fault for choosing a bad starting path.
It does not, however, segfault anymore!

