Cannot compute PPPM when running on a cluster


I am attempting to run a simulation with a box of TIP3P water and some organic molecules dissolved in it. However, I encounter some problems when I attempt to run it on a computer cluster.

The error I receive is "ERROR on proc #: Out of range atoms - cannot compute PPPM."

I know what this error means, and I have studied the dynamics at the point where the error occurs extensively; the molecule is blown apart.

I have run the simulation with only the water box and it runs fine.

Everything points to bad dynamics of the organic molecule.
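For context, a common way to diagnose or tame this kind of blow-up in LAMMPS is to minimize before dynamics and then run briefly with a reduced timestep and a capped per-step displacement. The fragment below is only a sketch of that approach, not the poster's actual input; the specific tolerances and limits are illustrative values:

```
# Diagnostic sketch (illustrative values, not the original input)
# Relax bad initial contacts before any dynamics
minimize        1.0e-4 1.0e-6 1000 10000

# Short run with a smaller timestep and capped atom displacement
timestep        0.5                  # fs (half the usual 1 fs)
fix             relax all nve/limit 0.1
thermo          10                   # print thermo output often to watch energies
run             2000
unfix           relax
```

If the energies stay sane under `nve/limit` but explode without it, that strengthens the case for bad force field parameters rather than a machine-specific problem.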

However, I have run the same script and data file on my own PC, both in serial and with MPI, and I do not receive this error. I have tried using the same number of cores on both the cluster and my own PC; the problem occurs only on the cluster.

I am currently attempting to find new force field parameters for my molecule (I am using CHARMM), but I also want to ask: is there any possibility my error is related to the way the simulation is run on the cluster?

Is there some setting I am missing?

I can provide more information about the simulation if needed.


it is difficult to make an assessment without actually looking over your shoulder and seeing what you have been doing and what the specifics of your inputs are.

there are multiple issues that could provide an explanation. here are some common cases:

  • the first thing to look at would be the version of LAMMPS. if you have two different versions, one may have a bug fixed that the other has not
  • sometimes compilers can miscompile code, especially when the code was written in a “quirky” way (and thus unexpected for the compiler’s heuristics). scientific software sometimes is quite quirky, since not all scientists are formally trained programmers (and even people with formal training sometimes develop quirky programming habits).
  • sometimes bad data stems from uninitialized memory. on linux in particular, the situation can be quite deceptive, since larger chunks of memory allocated via malloc() are implicitly set to 0 and only become non-zero when the memory gets “recycled” after it was freed. the actual logical memory locations vary with the compiler or the features included in the LAMMPS binary, and some kernels use address randomization (as a security precaution) that may mix things up even more.


Thank you Axel for a quick response!

I understand it is tricky to answer given how complicated it is.

The version of LAMMPS on the cluster is indeed different from my own: an older version from September 2019.

Regards, Bjorn