[lammps-users] ERROR on proc 10: Dihedral atoms 707 708 711 1011 missing on proc 10 at step 4767

Dear LAMMPS experts,

I have a problem with a simulation: for most of my structures I receive the same error message.

I am simulating a polymer melt with salt content. I create the structure with MarvinSketch, load it into Winmostar to assign charges and force field parameters manually, and then run it on my computer. The LAMMPS input looks like this:

Michael,

The error message you see is reported quite frequently, so you can find previous discussions and explanations in the mailing list archives.
The short version is that there are three common scenarios where this happens:

  1. Your pairwise cutoff is very short and thus the default communication cutoff is too short for all atoms of a dihedral interaction to be inside the ghost atom region at all times. That seems rather unlikely with a 10.0 angstrom cutoff, unless you have defined dihedral interactions across very long bonds (typical bonds are 1-2 angstrom, so a fully stretched dihedral would span no more than about 6 angstrom).
  2. Your force field parameters are bad: either the bonded interactions are too weak (so bonds get stretched too far) or the LJ terms are too weak, so atoms get too close, experience very high forces, get accelerated a lot, and then move too fast to be properly handled by the exchange of local and ghost atoms. That is by far the most likely scenario, and it is consistent with your description and with the fact that your temperature is crazy high at nearly 70000 K while your potential energy is very negative.
  3. You have a very bad initial geometry with very high potential energy that is not properly relaxed before starting MD.
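If the first scenario did apply, the communication cutoff can be enlarged explicitly (by default it is the largest pair cutoff plus the neighbor skin). A minimal sketch; the value is only an example in distance units, not a recommendation for your particular system:

  # enlarge the ghost atom communication cutoff (example value only)
  comm_modify     cutoff 14.0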

The log file and screen output you provide are incomplete, so it is difficult to make a more meaningful assessment. Most importantly, you don't provide the LAMMPS version you are using or the platform and settings you are running on.

But there are some items that stick out as rather unusual and indicative of more fundamental problems.

  1. Your time spent in kspace is very high compared with the other force computation terms. Are you running with the GPU package, or with a rather small number of atoms across a very large number of MPI ranks?
  2. Your time step is very small for this kind of model.
  3. Your use of fix deform makes no sense.
  4. Detection of lost atoms or missing bond/angle/dihedral atoms happens at every reneighboring, so why delay reneighboring? Also, why use such a small neighbor list skin, particularly combined with that extra delay? (Typical settings are sketched right after this list.)
  5. Why use r-RESPA when there is next to no time spent on pair/bond/angle? This just adds overhead.
  6. There doesn't seem to be a thermostat in use or any form of equilibration, so what is the goal of this simulation?
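Regarding item 4, the defaults are usually a safe starting point. A minimal sketch with typical settings (the values shown are common choices, not tuned for your system):

  neighbor        2.0 bin
  neigh_modify    delay 0 every 1 check yes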

My generic suggestions:

  • simplify your input
  • create a small test system
  • run with run style verlet
  • run on the CPU with few MPI ranks
  • run a normal equilibration protocol (minimize plus fixed-volume MD with a dissipative thermostat); a sketch of the full protocol follows this list
  • either adjust to the desired density right from the beginning or do it after initial equilibration
  • after adjusting to the desired density, repeat the equilibration protocol
  • after the system is equilibrated at the desired density, run with fix nve and without a thermostat to verify that your choice of timestep is small enough to conserve energy. What is achievable depends on the masses of the atoms in your system and the stiffness of the potentials in use
  • only then may there be value in experimenting with r-RESPA and other tweaks for additional performance (e.g. verlet/split). Also, r-RESPA is not worth much unless you actually increase the outer timestep: usually the bonded interactions involving light atoms require the shortest timestep, but for a "normal" molecular system you should be able to handle 0.2 fs or even more; pair interactions can usually be done at 0.5 fs to 1 fs, and kspace may be done only every other step. When running on the GPU, there is not much benefit, since with the verlet run style pair interactions can run independently on the GPU while bonded and kspace run on the CPU at the same time. But watch out: kspace has rather steep scaling limits (unlike pairwise interactions), since it can parallelize over space only in 2d (not 3d like pair), and the communication overhead increases while the number of work units decreases as you use more MPI ranks (due to having to do up to 6 transposes of the entire density/force grid)
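To make the protocol above concrete, here is a minimal sketch in LAMMPS input syntax (assuming real units; the fix IDs, temperatures, damping constant, and step counts are placeholders to adapt, not recommended values):

  # relax the initial geometry to remove bad contacts
  minimize        1.0e-4 1.0e-6 1000 10000

  # fixed-volume MD with a dissipative (Langevin) thermostat
  timestep        0.5
  fix             integrate all nve
  fix             thermostat all langevin 300.0 300.0 100.0 12345
  run             50000
  unfix           thermostat

  # adjust to the target density here, then repeat the minimize and
  # thermostatted MD steps above

  # plain NVE without a thermostat to verify energy conservation with
  # the chosen timestep: watch the total energy in the thermo output
  thermo          100
  run             20000

If you later experiment with r-RESPA, the hierarchy described above would look roughly like "run_style respa 3 2 2 bond 1 pair 2 kspace 3" with a 1.0 fs outer timestep (bonded terms at 0.25 fs, pair at 0.5 fs, kspace once per outer step), but as noted this only pays off if the outer timestep is actually increased.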

HTH,
axel.

Dear Axel,

Thank you very much for the thorough analysis.

So far, I was able to redo the structure of the polymer, and it is very likely that I had defined some dihedral angles incorrectly. Fixing that helped, and the simulation has now run stably overnight.

Nevertheless, your other comments are very helpful as well and I will work on them.

I will keep you informed if you wish.

Take care,

Michael

I would be primarily interested in getting answers to the questions I have asked.

Sure:

1. Yes, I am running with the GPU package.
2. When choosing a bigger time step, the simulation crashed again.
3. The fix deform was just an attempt to get better entangled polymer chains, because I am interested in seeing interchain jumps of ions in the polymer matrix depending on the density of the box.
4. The delay of reneighboring was just an attempt, copied from a command I found in the Helpdesk. It did not help.
5. See 3.

I hope this helps?

Take care,

Michael