LAMMPS stuck with no output

Dear LAMMPS users,

I recently ran into a problem that confuses me. I am running simulations of half a million to one million atoms with the ReaxFF potential, where the system is deformed by a uniaxial tension test in the y direction. When I restart the system, it sometimes stalls at a specific timestep, and after that there is no output at all: no dump, no thermo, nothing. The LAMMPS process keeps running, and no error message is printed.

I checked the atomic configuration and the output right before the stall, and everything looks fine: stress, energy, and temperature are all reasonable, with no surprising values. When I restart the exact same system with a different number of processors, the problem goes away and the simulation runs past that timestep. Then, after some time, it stalls again, and I have to change the number of processors once more to keep it running. I know that changing the number of processors can slightly change the results; I just wonder what causes this kind of problem and how I can fix it.

I also noticed that the stall always happens during fracture of the system (after reaching the maximum strength), and that it only started after I changed the potential from Tersoff to ReaxFF. With Tersoff there is no such problem.

Attached are my input file and output file.

Here is my input file:

units real
dimension 3
processors * * *
boundary p p p
atom_style charge

# ATOM DEFINITION

#read_data strained-sige.dat
read_restart restart_silica_cnt

# SETTINGS

#pair_style reax/c NULL
#pair_coeff * * ffield.reax Si O

pair_style hybrid lj/cut 6.0 reax/c NULL airebo 3.0
pair_coeff * * reax/c ffield.reax Si O NULL
pair_coeff * * airebo CH.airebo NULL NULL C
pair_coeff 1 3 lj/cut 1.153017500000000 3.000 6.0
pair_coeff 2 3 lj/cut 1.153017500000000 3.000 6.0

timestep 0.5

compute 1 all temp
compute PE all pe/atom
compute KE all ke/atom
compute eng all pe/atom
compute eatoms all reduce sum c_eng
compute peratom all stress/atom NULL

# VARIABLE DEFINITION

variable natoms equal "count(all)"
variable teng equal "c_eatoms"
variable length equal "lx"
variable ecoh equal "v_teng/v_natoms"
variable p2 equal "-pxx/10000"
variable p3 equal "-pyy/10000"
variable p4 equal "-pzz/10000"

variable vacuum_width equal 230.0
variable deform_temp equal 300.0

variable initLength_x equal 1192.0132
variable initLength_y equal 832.72004
variable initLength_z equal 16.3201
variable timestep equal "dt"

fix 9 all qeq/reax 1 0.0 10.0 1e-6 reax/c

energy.out (31.3 KB)

input.in (4.09 KB)

before reporting problems with an older version of LAMMPS, please always test whether the same issue persists with the latest version. there may have been bugfixes pertinent to the issues you are experiencing; for sure, no developer will take a closer look until this is confirmed. furthermore, you should test whether the same issue can be reproduced with a much smaller system, since it is near impossible, and would require a lot of effort, to debug such a huge system.
finally, your time step may be too aggressive for ReaxFF; have you tried 0.25 fs or 0.1 fs?
if you have the time, you should also compile and test the CPU-only KOKKOS version of the reax pair style. it was written to avoid some of the inherent problems with the code in USER-REAXC.
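as a concrete sketch of the two suggestions above (the timestep values and the launch line are illustrative examples, not taken from the attached input):

```
# smaller timestep for ReaxFF, especially around fracture events
# (with units real, the value is in fs)
timestep 0.25        # or even 0.1

# to use the KOKKOS (CPU) version of reax/c, run a KOKKOS-enabled
# binary with the kk suffix, e.g.:
#   mpirun -np 200 lmp -k on -sf kk -in input.in
```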

axel.

In addition to Axel’s comments, you are combining force fields that are not meant to be combined. If you insist on using input that the developers of REAX have stated is incorrect, you should expect to have issues and get incorrect results.

Jim

Thank you for your reply.

I think I have already found a solution: I reduced the number of processors from 400 to 200, and the simulation no longer stalls. I don't know why, but right now the simulation is running fine for me.

Thank you again!!

About James's comment that I am "combining force fields that are not meant to be combined" in my simulation: I found that Axel said in another conversation:

"please note that when including atom types A and B with any manybody potential, you do not only compute A-B, A-A, and B-B interactions, but also are looking other n-tuples of atoms."

Is this the mistake I made? Please give me some advice!

Thank you so much!!!


this issue cannot be responsible for your calculation stalling. however, you should be aware that by using an older version of LAMMPS you are running code with known (and fixed) bugs. that issue aside, what jim is referring to is a problem of *principle* with the physical validity of your computations.

there are a few things to think about:
- when using classical force fields, you always have errors. they can be
small or large. how small or large depends on the system and state you are
modeling, how "flexible" the force field's functional form is, and what
kind of information you want to extract from the simulations. as a general
rule, reproducing "absolute" properties (e.g. melting point) is much harder
than trends (e.g. how the melting point changes with pressure).

- when using pair style hybrid, you are combining force fields that have
different functional forms. as a matter of principle, this will add to your
error. however, whether this is a significant contribution or not depends a
lot on the individual composition of your system and the kind of force
fields you are using. even combining force fields that have (essentially)
the same functional form (e.g. CHARMM and AMBER) will introduce an error,
since those have different (and thus inconsistent) parameterization
strategies and use different models for solvent, which leads to
inconsistent cross-term interactions.

- when using manybody potentials in pair style hybrid, things get even more
complicated, since you cannot easily subdivide the interactions into
additive pairs. while all pair styles use pair-wise neighbor lists,
many-body potentials have components that depend on the positions of
neighbors of neighbors and with pair style hybrid, you are cutting these
off.

- what happens with pair style hybrid is, that you are essentially
computing the interactions of each pair style separately, so they behave as
if they are in vacuum and have no atoms next to them and then you add the
pairwise interactions for the mixed terms, which do not contribute to any
manybody terms. for EAM styles, thus you are missing the contribution to
the embedding function, for AIREBO, the contributions to angle and dihedral
terms.

- ReaxFF is the most complex manybody force field and thus the error from
using pair style hybrid is the largest. simplified, you could say that, due
to using charge equilibration and having implicit angle/dihedral and
hydrogen bond terms, it combines the kinds of errors that you get with EAM
and with the likes of Tersoff, Stillinger-Weber, or AIREBO.

with all that in mind, you have to ask yourself: do i have a system where
this error is rather large or rather small? and is the magnitude of the
error acceptable? errors from pair style hybrid are smallest if your system
consists of two separate subsystems that have only a small contact area.
examples are setups like nanomachining, where you would scrape a metal
surface with a diamond or silicon tool. here the approximation of using LJ
for the contact area is quite good, since the details of the metal-tool
interactions don't matter much; you typically care primarily about the
behavior _inside_ the tool and the metal. at the other extreme, you would
have mixtures (e.g. alloys) or molecules with different constituent atoms
represented by different (many-body) force fields. here the error can be
massive.
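the low-error case described above would look roughly like this in a LAMMPS input (the type assignment, LJ parameters, and cutoffs are hypothetical, just to show the structure; the potential files are ones shipped with LAMMPS):

```
# hypothetical nanomachining setup: metal substrate (type 1, EAM)
# scraped by a diamond tool (type 2, Tersoff), with a plain LJ
# cross interaction for the small contact area
pair_style hybrid eam tersoff lj/cut 10.0
pair_coeff 1 1 eam Cu_u3.eam                 # metal only
pair_coeff * * tersoff SiC.tersoff NULL C    # tool only (type 2 = C)
pair_coeff 1 2 lj/cut 0.05 3.0               # metal-tool contact
```

here each manybody potential sees only its own subsystem, and only the weak cross interaction is approximated, which is why the hybrid error stays small.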

since ReaxFF has the most principal sources of error and is a very complex
force field, using it in hybrid computations should not be done without
significant and careful validation. despite the general flexibility of
ReaxFF, its parameter sets are not as transferable as for less complex
force fields, and thus it is almost always advisable to either not use
ReaxFF at all or to use a parameter set that can be applied to the entire
system (and has been parameterized for the classes of compounds you have or
may have).
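if a parameter set covering all elements exists, the whole system can be described by a single reax/c instance instead of the hybrid setup; a sketch (ffield.reax.SiOC is a placeholder name for a validated Si/O/C parameter set, which would have to be obtained and tested first):

```
# single ReaxFF description for the whole system, avoiding pair_style hybrid
pair_style reax/c NULL
pair_coeff * * ffield.reax.SiOC Si O C
# charge equilibration is required with reax/c
fix qeq all qeq/reax 1 0.0 10.0 1e-6 reax/c
```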

you mentioned that you are doing simulations with a very large number of
atoms. have you made any validation checks with smaller test systems? you
may be wasting a large amount of computer time and your personal time...

axel.

Thank you so much for your reply!!!

There is so much information in your emails; I will have to discuss all of it with my advisor tomorrow.

I really appreciate that experts like you, Axel and James, give important advice to beginners like me, and I appreciate the contributions you have made to the MD simulation community.

Thank you again!!!