missing atoms occur on restart not during previous run...

Below is part of my script.

I’m using the 10 Feb 2012 version of LAMMPS on OS X 10.6.8, built with OpenMPI.

I’m doing a quench with a 0.01 fs timestep for 10000 steps. Originally I was attempting to run 20000 steps, and I received the following error:

ERROR on proc 3: Bond atoms 2577 2618 missing on proc 3 at step 251907 (neigh_bond.cpp:49)
[Profmembrane.local:16112] MPI_ABORT invoked on rank 3 in communicator MPI_COMM_WORLD with errorcode 1
mpiexec noticed that job rank 0 with PID 16109 on node Profmembrane.local exited on signal 15 (Terminated).
2 additional processes aborted (not shown)

I kept reducing the number of steps so that the run would complete and a restart file could be written.

For the next run I reduced the timestep to 0.001, hoping the atoms would not disappear, but I get the same error. I have backed off the number of steps in the first quench several times now.
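
To make the workflow concrete, the two stages look roughly like this (a sketch only; the filenames, restart interval, and run lengths are placeholders, not copied from my actual input):

  # in.quench1 -- first quench; write restart files so the run can be continued
  timestep     0.01
  restart      1000 quench.*.restart
  run          10000

  # in.quench2 -- continuation; re-read the last restart and try a smaller timestep
  # (force-field and fix commands are re-specified here as needed)
  read_restart quench.10000.restart
  timestep     0.001
  run          10000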

My question is: if atoms 2577 and 2618 are lost when the next run starts, why didn’t I get the error during the first quench? It’s as if the missing atoms are only discovered on restart.

I’d love to know what I’m doing wrong.

I suppose I should try running Ewald, since, as a sage told me here a few weeks ago, all atoms are kept on all processors.

Thanks in advance.

bruce,

there is one *very* big problem in your input.
you are using "fix temp/rescale" in every step.
that will hide all kinds of bad physics and bad
dynamics.
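
just to illustrate (the fix IDs, temperatures, damping
parameter, and seed below are made up, not taken from your
input): rescaling every single step is effectively

  fix 1 all nve
  fix 2 all temp/rescale 1 300.0 300.0 10.0 1.0

while a much gentler way to thermostat a quench is something
like a langevin thermostat on top of plain time integration:

  fix 1 all nve
  fix 2 all langevin 300.0 300.0 100.0 48279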

axel.