problem with granular LAMMPS - no continuity in runs

Dear all,

I have found a 'bug' in granular LAMMPS: when I run the attached script, I get different energies in the thermo output if I do either of two things:

(a) use a single "run 1000000" instead of two "run 500000" commands
(b) delete zero atoms versus commenting out the delete command (this second issue might be due to the same problem that causes (a), though)

This problem is more serious in parallel. (a) causes slight energy changes in both parallel and serial runs, which could be due to the chaotic nature of the simulation; (b), however, appears only in parallel, and at a stage of the simulation where the atoms have been compacted and the dynamics are not so chaotic.

I suspect that the problem has something to do with shear history, but I am not 100% sure.

Can anyone help?
Thank you,

George

in.discontinuousPar (2.34 KB)

> Dear all,
>
> I have found a 'bug' in granular LAMMPS: when I run the attached script, I get different energies in the thermo output if I do either of two things:
>
> (a) use a single "run 1000000" instead of two "run 500000" commands

if you want a "perfect" continuation of a trajectory, you have to use
the "post no" flag on the first run and "pre no" on the second. this
avoids running through the "setup" phase, which results in one
additional force calculation and does affect (some of) the granular
pair styles. you have the code

  int shearupdate = 1;
  if (update->setupflag) shearupdate = 0;   // no shear-history update during setup

in the pair style class that you are using.
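in input-script terms, the exact continuation would look something like this (a sketch; the rest of the attached script stays unchanged):

```
# first segment: skip the post-run cleanup so the trajectory can be continued
run 500000 post no
# second segment: skip the setup phase and its extra force evaluation
run 500000 pre no
```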

> (b) delete zero atoms versus commenting out the delete command (this second issue might be due to the same problem that causes (a), though)

doing the delete when?

the delete_atoms command doesn't check whether any atoms were actually
deleted, but forces a re-initialization of the relevant internal
structures anyway. it may be worth changing its behavior to leave the
system untouched when no atoms are deleted.

> This problem is more serious in parallel. (a) causes slight energy changes in both parallel and serial runs, which could be due to the chaotic nature of the simulation; (b), however, appears only in parallel, and at a stage of the simulation where the atoms have been compacted and the dynamics are not so chaotic.
>
> I suspect that the problem has something to do with shear history, but I am not 100% sure.

see above,
      axel.

It's more fundamental than that. From the pair_gran
and read_restart doc pages:

These pair styles will not restart exactly when using the
"read_restart"_read_restart.html command, though they should provide
statistically similar results. This is because the forces they
compute depend on atom velocities. See the
"read_restart"_read_restart.html command for more details.

Certain pair styles will not restart exactly, though they should
provide statistically similar results. This is because the forces
they compute depend on atom velocities, which are used at half-step
values every timestep when forces are computed. When a run restarts,
forces are initially evaluated with a full-step velocity, which is
different than if the run had continued. These pair styles include
"granular pair styles"_pair_gran.html, "pair dpd"_pair_dpd.html, and
"pair lubricate"_pair_lubricate.html.

Steve

Axel, Steve,

Thanks for the replies and all the help.
I had another look into the 'problem' and I agree with you that it is most probably due to the pair styles being velocity-dependent.

Regarding my delete_atoms question, I made it work much better by setting the compress flag to 0 through the "compress no" keyword of the delete_atoms command.
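For example (a sketch; "todelete" stands in for the actual region name in my script):

```
# compress no keeps the existing atom IDs, so the per-contact shear
# history, which is tracked against those IDs, stays consistent
delete_atoms region todelete compress no
```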

I should have done this anyway - Axel, I believe this is what you meant.
But I also suggest a small change to the code to make sure that the atom
tags do not get reset when atoms with shear history are deleted, since
resetting the tags will mess up the shear history.

So I suggest we replace line 81 of the delete_atoms.cpp file so that it reads:
  if (atom->molecular == 0 && compress_flag && force->pair_match("history",0) == NULL) {

Best wishes,

George