This is almost certainly an NFS problem. If you're writing your dump
files to a remote file system (Network File System, or NFS, protocol) and
the network connection gets saturated or cuts out, you'll find "gaps" in
your dump files where some data are missing. In my experience, this is
often accompanied by a corrupted frame: for example, a header declaring
2438 atoms while only 1384 are actually written before the next frame
starts.
Sometimes the connection comes back in the middle of a dump, too, so
you'll get more than the specified number of atoms (some from the dump
that was interrupted by the failure, plus some from the dump that was in
progress when the connection recovered). In either case, remove the
section of the file that contains the wrong number of atoms. If you
really needed those frames, the fix is called "start over."
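If you'd rather not eyeball a large file to find the damaged sections, a
quick scan of the frame headers will locate them. Here is a minimal
sketch in Python (it assumes the standard text-format dump with "ITEM:"
headers; the function name and report format are my own invention):

```python
def find_bad_frames(path):
    """Scan a text-format LAMMPS dump file and return a list of
    (timestep, declared_atoms, actual_atoms) tuples for every frame
    whose atom count does not match its header."""
    bad = []
    timestep = declared = None
    actual = 0
    expect = None      # what the next data line holds, if anything
    in_atoms = False   # currently counting lines in an ITEM: ATOMS body
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith("ITEM:"):
                in_atoms = False
                if line.startswith("ITEM: TIMESTEP"):
                    # A new frame begins: close out the previous one.
                    if declared is not None and actual != declared:
                        bad.append((timestep, declared, actual))
                    timestep, declared, actual = None, None, 0
                    expect = "timestep"
                elif line.startswith("ITEM: NUMBER OF ATOMS"):
                    expect = "natoms"
                elif line.startswith("ITEM: ATOMS"):
                    in_atoms = True
                else:
                    expect = None  # e.g. ITEM: BOX BOUNDS; skip its body
            elif expect == "timestep":
                timestep = int(line)
                expect = None
            elif expect == "natoms":
                declared = int(line)
                expect = None
            elif in_atoms and line:
                actual += 1
    # Close out the final frame (possibly truncated at end of file).
    if declared is not None and actual != declared:
        bad.append((timestep, declared, actual))
    return bad
```

Each reported timestep marks a frame you'll want to cut out of the file
(or, as above, a frame that tells you to start over).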
This is probably what happened any time you see skipped time steps in
your dump files. You may not notice it in the output files (screen
files, in LAMMPS parlance) because the OS can buffer small amounts of
data through a brief network outage without loss. Large files aren't so
fortunate.
Karl D. Hammond
University of Tennessee, Knoxville