How does the length of the timestep affect the result of a simulation?

Hello! I wonder how the length of the timestep affects the result. With numerical methods in general, once we choose a timestep that is small enough, the result stays the same as we continue to decrease the timestep.

However, in LAMMPS I read the documentation of the timestep command, where default values for the different unit styles are given. My question is: when I use a timestep smaller than the default, my result changes. For example, with si units, when I set the timestep to 1 ns (the default value is 10 ns), the result changes. I am confused about this, and any help would be appreciated.

Thanks in advance!

You already have the answer: the default timestep isn’t small enough. How different are your calculations if you go down to 0.1 ns?

Thank you for your reply!

My supervisor told me that in LAMMPS we cannot choose the timestep arbitrarily; we need to pick a proper timestep length. He said this is an empirical conclusion.

I just want to make sure that LAMMPS behaves like other numerical methods: once we choose a timestep that is small enough, making it even smaller makes no difference.

Thanks again!

It is not entirely empirical, but depends on the fastest-moving particles in the system.
If you know the mass of the particles and their kinetic energy (i.e. temperature), you can estimate what a suitable timestep would be. Ultimately, doubling the mass of a particle is equivalent to dividing the length of the timestep by sqrt(2).
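For a back-of-the-envelope estimate, here is a minimal Python sketch (not LAMMPS code) that treats the fastest vibration in the system as a harmonic oscillator. The function name, the stiffness value, and the ten-steps-per-period rule of thumb are illustrative assumptions on my part, not parameters from any particular force field:

```python
import math

def suggested_timestep(mass_kg, stiffness_n_per_m, steps_per_period=10):
    """Timestep estimate from the fastest vibration, modeled as a
    harmonic oscillator with period T = 2*pi*sqrt(m/k); resolve that
    period with ~10 steps (a common rule of thumb)."""
    period = 2.0 * math.pi * math.sqrt(mass_kg / stiffness_n_per_m)
    return period / steps_per_period

m_h = 1.6735e-27  # mass of a hydrogen atom in kg (the usual fastest mover)
k = 500.0         # illustrative bond stiffness in N/m (assumed value)
dt = suggested_timestep(m_h, k)
print(f"suggested dt: {dt:.2e} s")            # on the order of 1 fs
print(suggested_timestep(2.0 * m_h, k) / dt)  # doubling m -> sqrt(2) ~ 1.414
```

With hydrogen and a bond-like stiffness this lands near the familiar femtosecond scale, and the last line prints sqrt(2), which is exactly the mass scaling mentioned above.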

There are lots of publications discussing errors of integrators for MD. I also recall a lengthy discussion in the Allen and Tildesley book on MD simulations about errors due to the choice of integrator algorithm and about long-term stability. I am also aware of active research trying to find Langevin-style time integrators that allow one to increase the timestep significantly.
While different numerics will quickly lead to exponentially diverging trajectories (MD is chaotic!), they are still sampling the same statistical mechanical ensemble, as long as they run stably and the discretization errors are acceptable. The choice of timestep is then usually motivated by improving the statistical sampling through collecting more simulated time.
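To see the chaotic divergence for yourself, here is a small self-contained sketch (plain Python with NumPy, not LAMMPS): two velocity-Verlet runs of a four-particle Lennard-Jones cluster whose initial positions differ by 1e-10. The geometry, velocities, and run length are arbitrary choices for illustration:

```python
import numpy as np

def lj_forces(x):
    """Pairwise Lennard-Jones forces with epsilon = sigma = 1."""
    f = np.zeros_like(x)
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            r = x[i] - x[j]
            r2 = r @ r
            fij = 24.0 * (2.0 * r2**-7 - r2**-4) * r  # -dU/dr along r
            f[i] += fij
            f[j] -= fij
    return f

def run(x0, v0, dt=0.002, steps=5000):
    """Velocity-Verlet trajectory for unit-mass particles."""
    x, v = x0.copy(), v0.copy()
    f = lj_forces(x)
    traj = [x.copy()]
    for _ in range(steps):
        v += 0.5 * dt * f
        x += dt * v
        f = lj_forces(x)
        v += 0.5 * dt * f
        traj.append(x.copy())
    return np.array(traj)

# Four particles near the corners of a tetrahedron with edge ~2^(1/6),
# i.e. close to the LJ equilibrium distance (an arbitrary test geometry).
x0 = np.array([[0.00, 0.00, 0.00],
               [1.12, 0.00, 0.00],
               [0.56, 0.97, 0.00],
               [0.56, 0.32, 0.92]])
v0 = np.random.default_rng(0).normal(0.0, 0.3, size=(4, 3))

a = run(x0, v0)
b = run(x0 + 1e-10, v0)  # identical run, positions shifted by 1e-10
for step in (0, 1000, 2000, 3000, 4000, 5000):
    d = np.linalg.norm(a[step] - b[step])
    print(f"step {step:4d}: |x_a - x_b| = {d:.3e}")
```

The absolute numbers depend on the arbitrary choices above; the point is only that two runs starting 1e-10 apart should drift visibly apart over the run, while both remain valid trajectories of the same system.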

An interesting consequence of the fact that we are using floating-point math, and thus numbers with truncated precision, is that shortening the timestep only improves the (short-term) accuracy of the time integration up to a point. Beyond that, the accumulated error from floating-point truncation becomes larger than the discretization error, so shortening the timestep further does not reduce the error anymore. But that is usually at a point where this is a massive waste of computer effort.
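This trade-off is easy to demonstrate outside of LAMMPS. The sketch below (my own illustration in Python/NumPy) integrates a harmonic oscillator with velocity Verlet in single precision so that the roundoff floor shows up after few steps; LAMMPS itself works in double precision, where the floor sits far lower, but the qualitative picture is the same:

```python
import numpy as np

def verlet_error(dt, t_end=10.0, dtype=np.float32):
    """Integrate x'' = -x (exact solution cos t) with velocity Verlet,
    keeping every number in the given precision, and return the
    position error at t_end."""
    dt = dtype(dt)
    half = dtype(0.5)
    x, v = dtype(1.0), dtype(0.0)
    steps = int(round(t_end / float(dt)))
    for _ in range(steps):
        v = v - half * dt * x
        x = x + dt * v
        v = v - half * dt * x
    return abs(float(x) - np.cos(steps * float(dt)))

for dt in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    print(f"dt = {dt:.0e}   |error| = {verlet_error(dt):.3e}")
```

The error should first shrink roughly as dt^2 and then grow again once accumulated roundoff takes over, i.e. there is a dt below which "smaller" no longer means "more accurate".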

Since computational capability is so much easier to obtain than in the early years of MD simulations (when most of the methodology was established), efficiency is less of a concern these days. Worrying about accuracy should instead prompt people to make more conservative choices, and thus run less risk of incurring unwanted errors from trying to run a simulation “faster”.