Restarting rRESPA calculations

Dear LAMMPS users,

I am having issues restarting rRESPA calculations.

LAMMPS version: LAMMPS (29 Sep 2021 - Update 1)

Problem description:

While trying to figure out appropriate parameters for respa in NVE runs (no barostats, thermostats, fixed regions, etc.), I encountered the following problem: when restarting from a restart file written during a respa run, LAMMPS reads fewer atoms than are present in the system and throws the error message:

ERROR: Did not assign all restart atoms correctly (…/read_restart.cpp:471)
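For reference, the relevant part of my workflow looks roughly like this (file names and respa level assignments are placeholders, not my exact input):

    # first run: 2-level respa in NVE, write a restart at the end
    run_style       respa 2 2 bond 1 pair 2
    fix             1 all nve
    run             10000
    write_restart   restart.respa

    # second run (separate input script): this is where the error occurs
    read_restart    restart.respa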

Observations and additional tests:

The error happens even with a restart file written after just one integration step, or with a small time step (1 fs). Interestingly, the number of atoms read from the restart also differs between these two cases. The error does not occur if 1-level respa is used.

The usual explanation I found is that atom coordinates may lie outside the bounds of a non-periodic box; my box, however, is periodic. Still, using the “remap” flag of read_restart fixes the issue, which suggests that atom coordinates are indeed outside the bounds of the box.
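Concretely, the restart line that works is (file name again a placeholder):

    read_restart    restart.respa remap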

I am wondering, though, why the error happens only with multilevel respa, even when the time step is half as large as in the one-level case.

After using remap, all energies in the first energy output of the new run are identical to those of the last step of the previous run, as they should be.

However, the pressure is not.

Questions:

Has anyone experienced the same kind of behavior, and what are your thoughts on it?

Is it intended that one has to use “remap” when restarting respa runs?

Why would the pressure be different, or is that expected after a remap?

If this level of detail is not sufficient, I will provide all the inputs and descriptions needed to reproduce the behavior.

eddi

I don’t recall this kind of behavior ever being reported, and I have no hunch as to what may be causing it. Atoms may move outside their individual subdomains between neighbor list updates, but, as you noted, this should not be a problem with periodic boundaries.

This is unusual enough to warrant a closer look, so my suggestion is to report it as a bug on the LAMMPS issue tracker on GitHub: https://github.com/lammps/lammps/issues
Please prepare a small, minimal input example that reproduces the issue quickly, ideally without running in parallel or, failing that, with only a small number of parallel tasks (<= 4 is best), and attach it to the issue.
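Something self-contained along these lines would be ideal, i.e. a standard LJ setup plus whatever respa settings trigger the error for you (all values below are illustrative):

    # stage 1: short 2-level respa run in NVE, then write a restart
    units           lj
    atom_style      atomic
    lattice         fcc 0.8442
    region          box block 0 5 0 5 0 5
    create_box      1 box
    create_atoms    1 box
    mass            1 1.0
    velocity        all create 1.44 87287
    pair_style      lj/cut 2.5
    pair_coeff      1 1 1.0 1.0
    fix             1 all nve
    run_style       respa 2 2 inner 1 1.8 2.0 outer 2
    run             100
    write_restart   restart.min_example

    # stage 2 (separate input script): try to read the restart back
    # read_restart  restart.min_example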

Thanks, Axel. Will do.

eddi

This can happen, for example, if you run with a Kspace style and fix npt. During a run, some of the Kspace parameters are recomputed when the volume changes, but not all of them (e.g. the FFT grid for PPPM, or the damping parameter); after a volume change, those may then be recomputed to different values during the initialization of the restarted run. If you want them to remain constant, you need to specify them all explicitly instead of letting LAMMPS compute optimal values from the convergence parameter.
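For example, with PPPM you could pin the grid and the damping parameter like this (the numbers are placeholders; use the values LAMMPS printed during your original run):

    kspace_style    pppm 1.0e-4
    kspace_modify   mesh 32 32 32 gewald 0.30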

Is it the combination of Kspace and npt?
I am running just NVE.
I just posted the issue on GitHub, as you suggested.

Thank you very much,

eddi

Yes, or something else that changes the volume or the domain decomposition.