Fix qeq/reax CG convergence failed during NVE run using ReaxFF

Hi All,

I'm encountering a fatal error ("Fix qeq/reax CG convergence failed") when I try to perform an NVE simulation using the reax force field on a 45 x 45 x 45 Angstrom box containing 2000 silicon dioxide molecules. I've appended the input file and the "param.qeq" file that I used to this message, as well as the full error message. I should add a few more details for clarity: (1) The force field file that I used came from van Duin and was parameterized for silica. (2) Instead of reading the parameters from the "param.qeq" file, I also tried reading them directly from the force field file using the command "fix 2 all qeq/reax 1 0.0 10.0 1e-6 reax/c", but this resulted in the same error. (3) I have the same problem if I use "pair_style reax". (4) I was able to perform an NVE simulation on the same input structure using a different force field.
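For reference, the two forms of the charge-equilibration fix that I tried were the following (assuming the file-based variant, as in my input, takes the param.qeq filename as the last argument, while the second reads chi, eta, and gamma directly from the ReaxFF force field file):

fix 2 all qeq/reax 1 0.0 10.0 1e-6 param.qeq
fix 2 all qeq/reax 1 0.0 10.0 1e-6 reax/c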

Any ideas on where I should look for the origin of this problem, or are there any obvious issues with my inputs?

Thanks,
Vanessa

input file: (attachment omitted)

does the same setup work for a system of, say, 50 molecules?

axel.

The error is not the convergence failure; that's just a warning.
The actual error is that you are trying to allocate 7 GB of memory
and cannot. That usually indicates something bad about
the geometry of your problem that ReaxFF can't deal
with. Aidan may have further suggestions.

Steve

Pair_style reax/c is telling you that it needs 7 GB to store three-body
interactions. I'm guessing that you are using an incorrect three-body
cutoff (either distance or bond order). You could try modifying thb_cutoff
in the control file:

thb_cutoff: cutoff value for the strength of bonds to be considered in three-body interactions (default value = 0.001)
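For example, a minimal control file might look like the following (a sketch only; thb_cutoff is shown at its documented default, and the other entries are illustrative placeholders, not recommendations):

simulation_name  sio2_test
nbrhood_cutoff   5.0
hbond_cutoff     7.5
thb_cutoff       0.001

You would then point the pair style at it, e.g. with "pair_style reax/c lmp_control" if the file is named lmp_control.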

You could also try using pair_style reax.

However, the big mistake you made was skipping over the *validation* stage
of your project. This means comparing your LAMMPS results with the
published results for the potential you want to use. It is a lot easier to
find a problem in a 64-atom quartz cell than in 6000 atoms of
who-knows-what structure.

Aidan

Hi Aidan,

The three-body cutoff I was using was already set to the default, and I had already tried pair_style reax as well, to no avail. Your point about validation is well taken; the simulation I was initially trying to do was actually an attempt to replicate the simulation of the formation of amorphous glass performed by van Duin et al. (JCP 132, 174704 (2010)), where the use of this particular version of the reax force field parameters was first reported.

In any case, I did follow your advice and performed a simple NVE simulation on a much smaller quartz cell, with results that (to me) seemed curious. A simulation of crystalline quartz (108 atoms) finished without any obvious problems (e.g., no large perturbations in any of the energy terms). However, when I increased the simulation box to 366 atoms, some of the energy terms (particularly e_coul and e_pol) spiked in magnitude and many of the atoms "flew" out of the simulation box. It seems odd that these two simulations should behave so differently when the only difference between them is the system size. Is this behavior symptomatic of any particular problem? Any idea of what to check to track down its origin? I've appended some of the input files used in these simulations below. Thanks, Vanessa
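(For what it's worth, I monitored the individual energy terms with something along these lines; "compute pair" gives access to the per-term ReaxFF energies, of which the polarization energy is one:

compute reax all pair reax/c
thermo_style custom step temp pe c_reax[11] c_reax[12]

where, if I have the ordering of the terms right, c_reax[11] and c_reax[12] are the van der Waals and polarization contributions.)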

in.sio2 (attachment omitted)

this looks a lot like a "polarization catastrophe", i.e. the coulomb
forces increasing through self polarization faster than the
repulsion from the van der waals.

this may be an indication of improper balance between
polarizability and van der waals repulsion. it requires
larger fluctuations to be triggered, so a finite size effect
is quite to be expected.

does this also happen if you reduce the time step?

axel.

It does still happen if I reduce the time step to 0.1 fs (from 0.25 fs), but not with exactly the same pattern. Similarly, it happens if I reduce the time step to the ridiculously small value of 0.025 fs. (I've appended the energies for the runs using these two time step values below.) Is this behavior necessarily indicative of improper force field parameters, or could it be due to the implementation?
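(Concretely, this just meant changing the input line from "timestep 0.25" to "timestep 0.1" and then "timestep 0.025"; I'm running with real units, so the values are in fs.)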

Thanks again for your help! -Vanessa

energies from run using 0.1 fs time step (attachment omitted)

this looks a lot like a "polarization catastrophe", i.e. the coulomb
forces increasing through self polarization faster than the
repulsion from the van der waals.

this may be an indication of improper balance between
polarizability and van der waals repulsion. it requires
larger fluctuations to be triggered, so a finite size effect
is quite to be expected.

does this also happen, if you reduce the time step?

axel.

It does still happen if I reduce the timestep to 0.1 fs (from 0.25 fs), but
not exactly with the same pattern. Similarly, it happens if I reduce the
time step to a ridiculously small value of 0.025 fs. (I've appended
the energies for the runs using these two time step values below.) Is this
behavior necessarily indicative of improper force field parameters, or could
it be due to implementation?

i am not an expert on reaxFF, but the fact
that you see effectively the same behavior
with a very short time step hints at
a problem due to parametrization or implementation.

since the polarization goes out of whack, i would first
check the param.qeq file. since you use reax/c, have
you tried fix qeq/reax with the reax/c option instead
of a param.qeq file? have you seen the comment in
the documentation that param.qeq needs an eta value
twice as large as the one in the reaxff parameter file?
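(if i remember the format correctly, param.qeq has one line per atom type, i.e. something like the following, with placeholder names instead of the actual silica values:

1 chi_Si eta_Si gamma_Si
2 chi_O eta_O gamma_O

where the eta values must be twice the eta values from the reaxff force field file.)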

cheers,
    axel.

Congratulations on successfully narrowing the problem to a very small,
short simulation.

Axel is right. Based on the very large and rapid changes in the electrostatic
and qeq energies on the very first timestep, I am guessing that the
positive and negative charges are growing without bound. You can verify
this by writing a dump file with the charge values. This can happen if
ions get too close together, i.e. a bad initial configuration. If the
simulation behaves for 108 atoms but not for 366 atoms, maybe the latter
configuration is bad. If it is good, you should get the same per-atom
energies in both systems at timestep 0.
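For example, something like this will write the per-atom charges every 10 steps (the dump ID and file name are arbitrary):

dump q all custom 10 dump.charge id type q x y z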

The cause could also be an error in the parameters that you have
specified. An implementation error is unlikely, since we have validated our
implementation on all the potentials given in potentials/README.reax.
Unfortunately for you, SiO2 is not among them.

Aidan

Hi Aidan,

Thanks for the troubleshooting advice. It is indeed the case that the charges in the 366-atom simulation grow without bound (beginning around the 5th timestep), whereas the 108-atom simulation doesn't behave this way. Because "compute pe/atom" cannot be used with "pair_style reax/c", I switched to "pair_style reax" to print out the potential energy per atom, and found that the per-atom energies of the 108-atom simulation are clearly different from those of the 366-atom simulation at timestep 0. (I should note that the simulations using "pair_style reax" didn't show much change in the atom charges until the 35th timestep, considerably later than what I observed with "pair_style reax/c".)
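(The per-atom energy commands I used were along these lines, with arbitrary compute and dump IDs:

compute peatom all pe/atom
dump pe all custom 10 dump.peatom id type c_peatom)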

In any case, it looks as if the 366-atom simulation is starting from a highly unfavorable configuration. In your last email you state: "The cause could also be an error in the parameters that you have specified." Does this statement still stand for the case where the starting configuration is "bad"? There are a few counter-intuitive things here that make it difficult for me to understand the nature of this problem. Specifically, in the input file the "bad" starting configuration is created using these commands:

lattice custom 5.4054 a1 0.9095 0.0000 0.0000 a2 -0.4547 0.7876 0.0000 a3 0.0000 0.0000 1.0000 &
basis 0.4697 0.0000 0.0000 basis 0.0000 0.4697 0.6667 basis 0.5303 0.5303 0.3333 basis 0.4135 0.2669 0.1191 &
basis 0.2669 0.4135 0.5475 basis 0.7331 0.1466 0.7857 basis 0.5865 0.8534 0.2142 basis 0.8534 0.5865 0.4524 &
basis 0.1466 0.7331 0.8809

region simbox block 0 3 0 3 0 3 units lattice
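(These are followed by create_box and create_atoms commands along these lines, mapping the first three basis atoms to Si and the remaining six to O:

create_box 2 simbox
create_atoms 1 box basis 1 1 basis 2 1 basis 3 1 basis 4 2 &
basis 5 2 basis 6 2 basis 7 2 basis 8 2 basis 9 2)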

I would have thought that a starting structure based on the crystal lattice should produce a low-energy configuration regardless of the system size. This assumption would seem to be reinforced by the fact that simulations using the BKS potential run without obvious problems from this starting configuration. Obviously my assumptions are wrong, so I'd appreciate some insight into why this is the case.

Regardless, I'd appreciate any help on how I might go about producing a starting configuration for larger simulations that doesn't cause these problems. Or do I need to determine whether or not my reax force field parameters are correct before proceeding (although this might be desirable in any case)?

Thanks again,
Vanessa

Because "compute pe/atom" cannot be applied with the use of "pair_style
reax/c", I switched to simulations employing "pair_style reax" to print
out the potential energy per atom and found that the per atom energies
for the 108 atom simulation are clearly different from that of the 366
atom simulation at timestep 0.

This tells me that there is something wrong with your starting
configuration(s). You need to be sure that the periodic simulation cell
and atom positions are consistent. This is not a LAMMPS issue, but rather
a general issue with periodic boundaries and crystals.
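One quick sanity check is to count atoms that sit too close to any periodic image of another atom. Assuming the overlap style of the delete_atoms command, something like

delete_atoms overlap 1.0 all all

run once on the starting configuration will report how many atoms are closer than 1.0 Angstrom to a neighbor (it prints the number of deleted atoms), which should be zero for a sane crystal.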

I believe the latest release of LAMMPS supports per-atom energy with
pair_style reax/c. Also, you don't need to dump per-atom energies; you can
just divide the total energy contributions by the number of atoms, or have
LAMMPS do it for you:

thermo_modify norm yes
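With that set, the extensive quantities reported by, e.g.,

thermo_style custom step temp pe ecoul evdwl

are normalized per atom, so the 108-atom and 366-atom runs can be compared directly.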

Aidan

Hi Aidan,

Thanks again for your help. I understand your point about making sure that the "lattice", "create_box", and "create_atoms" commands are implemented such that the atom positions of a given crystal structure are consistent with the periodic box. What is the best way to ensure that I've created atomic positions consistent with the periodic box? I compared the "bad" (larger) starting structures, which caused a short MD simulation to fail, with the "good" starting structures, identified potentially problematic atoms by inspection, and found that I was able to run the simulation if I reduced the box size along the dimension that contained the suspect atoms. But this simplistic approach is obviously not a realistic solution to the problem. Is there a more efficient way to choose the optimal box size given a particular custom lattice?

Also, I'm still confused as to why my input script works when I use the BKS potential but fails when I use the reax potential. In other words, if I were creating a simulation box where atoms from one side of the box were overlapping with the adjacent periodic cell, shouldn't both simulations fail?

Thanks again,
Vanessa