I'm using LAMMPS to calculate some time correlation functions, and I would like to be able to restart a trajectory at a later time with exactly the same coordinates and velocities.
I made some tests on a system for 100 time steps and found that the numbers in the thermo output differ slightly between runs. I understand that round-off effects can cause some imprecision, but I'm not sure if that is what I'm seeing. Could you please advise whether differences of this kind are within that range?
I've attached two output files: "log_original" for the continuous run and "log_restart10" for the run restarted after 10 timesteps.
I appreciate all your help in advance.

dear jihang,

unless you write your code in fixed-point math, results from
MD trajectories will always diverge exponentially (the
so-called "butterfly effect").
the difference you see in lammps is enhanced by the default
"lazy" neighbor-list rebuild, which can change the order in
which atoms' forces are summed up. when you restart, you
trigger a neighbor-list rebuild. the restart files themselves
are written in full double precision internally.
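The effect of summation order is easy to demonstrate outside of LAMMPS. This is an illustrative sketch (not LAMMPS code): because floating-point addition is not associative, adding the same per-atom force contributions in a different order, e.g. after a neighbor-list rebuild reorders atoms, can change the last bits of the total.

```python
# Floating-point addition is not associative: the same three
# numbers summed in two different orders give different results.
a, b, c = 1e20, -1e20, 1.0

left = (a + b) + c   # (1e20 - 1e20) + 1.0 -> 1.0
right = a + (b + c)  # 1.0 is lost when added to -1e20 -> 0.0

print(left, right)          # 1.0 0.0
print(left == right)        # False
```

Differences like this start in the last bit of a force component and are then amplified exponentially by the dynamics, which is why the thermo output of the two runs drifts apart.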

in short, if your analysis depends on exact reproduction
of trajectories, you would have to write your own MD code.
in general, it is only required that you sample the same
region of phase space properly; the exact numbers don't
matter. thus almost all codes favor high efficiency over
exact reproducibility.

In principle it is possible to restart and continue exactly, if you
are running on the same # of procs. I believe this works in LAMMPS.

But various fixes and other
options in the code will break this assumption, and then you get a diverging
restart trajectory, as Axel indicates. One fix that breaks this is
fix shake, which you are using. Shake uses an approximate constraint
estimate on the 1st half step of a new run, which is different from
the calculation done for a full step if the previous run had continued.
Any fix that uses random #s will also break it, such as fix langevin.
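To see why random numbers break exact restarts, here is a hedged illustration (plain Python, not LAMMPS internals; the seed 12345 and draw counts are made up): a thermostat draws from a random-number stream, and re-initializing that stream on restart produces different numbers than a continuous run would have.

```python
import random

# Continuous run: one uninterrupted stream of random "kicks".
rng = random.Random(12345)
continuous = [rng.random() for _ in range(6)]

# Restarted run: the fix is re-seeded at the restart point, so it
# replays the stream from the beginning instead of continuing it.
rng_restart = random.Random(12345)
restarted = [rng_restart.random() for _ in range(3)]

print(restarted == continuous[:3])  # True: restart repeats the old draws
print(restarted == continuous[3:])  # False: not what the continuous
                                    # run would have drawn next
```

Unless a fix stores and restores its full RNG state in the restart file, the random forces after the restart differ from the continuous run, and the trajectories diverge even if positions and velocities are restored bit-for-bit.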