Possible bug causing "Out of range atoms - cannot compute PPPM"

Dear all,

I have recently been performing MD simulations with electrostatic interactions, and I have had a few issues with the error "Out of range atoms - cannot compute PPPM". I think it may be caused by a bug/limitation in LAMMPS. Here are my observations and a possible explanation:

  1. The error occurred only when the volume of the simulation box changed (e.g. npt runs or controlled box deformations). All simulations ran fine at fixed volume (nve, nvt).

  2. I had the problem not only with large volume variations, but also with small fluctuations during stabilized npt runs. Changing the barostat parameters to slow down the volume changes delayed the problem, but did not solve it.

  3. The system dynamics appear to be sound: energy and forces do not increase dramatically before the error, nor do the atoms move oddly or overlap. Moreover, if I rerun a simulation from a restart written just before the error, it seems to run fine, at least for a while (see the possible explanation below).

  4. Neighbor lists are rebuilt every timestep without delay in all simulations. Changing the neighbor skin only delays the problem.

  5. The issue occurred with the latest LAMMPS version, but also with the 2015 and 2014 versions. I experienced this problem with several systems, consisting of small molecules as well as very large ones.

I think the issue is that the pppm mesh does not adapt to the simulation box as it changes, be it through npt or fix deform. I checked the pppm.cpp file and did not find any command that recalculates the mesh as the box dimensions are modified (though I may have missed it, as I am not a code expert). Thus, some atoms are bound to be mapped incorrectly at some point during the simulation. On the other hand, meshes are recalculated at the beginning of each run, which explains why rerunning the simulation from a restart file temporarily solves the problem.
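For reference, the restart workaround looks roughly like this (filenames and step counts are placeholders):

    write_restart  before_error.restart   # written shortly before the error appears
    # ... then, in a fresh input script:
    read_restart   before_error.restart   # pppm mesh is set up again at the start of the new run
    run            50000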

Any idea on this issue and how it could be fixed?

Many thanks in advance,

Dr. Daniele Savio

Fraunhofer Institute for Mechanics of Materials IWM, Freiburg, Germany

> I think the issue is that the pppm mesh does not adapt to the simulation box as it changes, be it through npt or fix deform.

Yes, the code does this. Note, however, from the kspace_style doc page:

Note that style pppm only computes the grid size at the beginning of a simulation, so if the length or triclinic tilt of the simulation cell increases dramatically during the course of the simulation, the accuracy of the simulation may degrade. Likewise, if the kspace_modify slab option is used with shrink-wrap boundaries in the z-dimension, and the box size changes dramatically in z. For example, for a triclinic system with all three tilt factors set to the maximum limit, the PPPM grid should be increased roughly by a factor of 1.5 in the y direction and 2.0 in the z direction as compared to the same system using a cubic orthogonal simulation cell. One way to ensure the accuracy requirement is being met is to run a short simulation at the maximum expected tilt or length, note the required grid size, and then use the kspace_modify mesh command to manually set the PPPM grid size to this value.
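As a concrete illustration of that last suggestion, one could pin the grid size manually like this (the mesh values here are hypothetical, noted from a short trial run at the maximum expected tilt or length):

    kspace_style  pppm 1.0e-4
    kspace_modify mesh 45 60 90   # Nx Ny Nz taken from the trial run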

The "grid size" that this refers to is Nx by Ny by Nz. That does not change within a single run. However, the grid is remapped to the box size (if it is changing) at every reneighbor. So this text is referring to a loss of accuracy (e.g. if a much larger box size should use a different grid size to achieve the desired accuracy).
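Since the grid size is only computed at run setup, one way to have it refreshed during a long simulation is to break the run into segments, each of which triggers a new setup:

    # a sketch: each run command repeats the PPPM setup for the current box
    run 100000
    run 100000
    run 100000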

When you say "Neighbor lists are rebuilt every timestep without delay in all simulations", do you mean literally that neighbor lists are rebuilt every single timestep? If this is the case, I don't see how you can get an out-of-range error. That error refers to an atom that has moved too far since the last reneighbor for PPPM to map it to the grid correctly. Can you post a simple, small (runs quickly) script that reproduces the problem?
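(A minimal skeleton along these lines might look as follows; the data file, cutoffs, and barostat settings are placeholders to be adapted to whatever system triggers the error.)

    # hypothetical minimal test: small charged system under npt with pppm
    units           real
    atom_style      full
    read_data       small_system.data     # placeholder data file with charges
    pair_style      lj/cut/coul/long 10.0
    kspace_style    pppm 1.0e-4
    neighbor        2.0 bin
    neigh_modify    delay 0 every 1 check yes
    fix             1 all npt temp 300.0 300.0 100.0 iso 1.0 1.0 1000.0
    timestep        1.0
    run             100000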

Steve

Dear Steve and Arun,

Thank you for the detailed response and suggestions. I am still working on finding example cases where the problem occurs after a short run.

I understand the accuracy problem due to the fixed nx,ny,nz of the pppm grid. However, in some of the systems I forced the pppm grid to keep the initial nx,ny,nz values, and neither the estimated accuracy nor the energy changed significantly. The partial charges are OK. The potential is OPLS, so nothing particularly fancy. Performing minimizations during equilibration seemed to help, but so did intermittently interrupting the simulation and restarting it (a new run command).

Concerning the neighbor lists, I used the command neigh_modify delay 0 every 1 check yes in addition to neighbor 2.0 bin in all the simulations.
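For clarity, the relevant lines from my input scripts are:

    neighbor     2.0 bin
    neigh_modify delay 0 every 1 check yes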

I am still checking the code to find where the grid is remapped to the simulation box. If I am correct, this is performed in pppm.cpp by the function PPPM::set_grid_local() (according to the comment // reset portion of global grid that each proc owns in PPPM::setup_grid()). However, in the function PPPM::compute() I see no call to set_grid_local() prior to mapping the atoms onto the pppm grid. I have a feeling that this may be related to the issue. What do you think? Or am I off track?

Best regards,

Dr. Daniele Savio

Fraunhofer Institute for Mechanics of Materials IWM, Freiburg, Germany

"off track", i'd say. with any object oriented code using
polymorphism, it is not always the right way to search for function
calls directly.
when running with fix npt, the change of the box is done in the
Fix::initial_integrate() step, which calls Kspace::setup() to update
the pre-computed structure factors and related properties, so looking
in the PPPM::compute() is the wrong place. try PPPM::setup() instead.

axel.