Dear Users,
I am performing 2D simulations using LAMMPS (12 Dec 2018 version). My system contains spherical colloidal particles. I want to calculate quantities as ensemble averages, so I am repeating the simulations multiple times. However, I am getting the same coordinate values for the particles in every simulation: whatever X and Y coordinates I obtain in the first run, the trajectory remains identical in all subsequent runs. The trajectory changes only when I change the seed in fix langevin. Since the process is purely stochastic, the trajectories should not be identical across repeated simulations. In a 3D system, by contrast, I get different trajectories when I repeat the simulation with the same seed for fix langevin. What may be the cause? Is it a good idea to change the seed of the fix in LAMMPS to calculate the ensemble average?
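For reference, one common way to run independent replicas is to give each one a different Langevin seed. A minimal sketch of such an input deck follows; the data file name, pair style, and all numeric values are illustrative placeholders, not taken from the original post:

```
# minimal 2D Langevin sketch -- one seed per replica (all values are placeholders)
dimension      2
units          lj
atom_style     sphere
read_data      colloids.data        # hypothetical data file

pair_style     lj/cut 2.5
pair_coeff     * * 1.0 1.0

variable       seed index 12345 23456 34567   # cycle through one seed per replica

fix            1 all nve
fix            2 all langevin 1.0 1.0 1.0 ${seed}
fix            3 all enforce2d

run            100000
```

With `variable ... index`, each invocation (e.g. via `-var seed` on the command line or a loop with `next seed`) picks up a different seed, so the stochastic forces, and hence the trajectories, differ between replicas.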
That is fine… But if that is the case, then even for a 3D system the trajectories should remain the same for a given seed. Yet when repeating the simulations for a 3D system, we get different trajectories for a given seed (we have not changed any seed number in the 3D case to calculate the ensemble average).
this topic has been discussed to death on this mailing list.
exact reproducibility (and reversibility) can only be achieved by doing simulations in fixed-point math and without a stochastic thermostat, i.e. in a clean NVE ensemble.
with floating-point math (which LAMMPS employs), there is going to be an exponential divergence, but how soon that happens depends on many factors. with a 2D system the impact from the non-associative floating-point math (i.e. that the result depends on the order of operations) is much less, since you have far fewer neighbors. also, it is less likely that your atoms will be reordered during neighbor list rebuilds or sorts of local atoms for increased performance. if you want to continue this discussion, please study the mailing list archives, so you have seen all arguments and questions that have been presented so far and their counter-arguments and answers.
at any rate, you have been given good advice: if you want to have independent trajectories, you first have to decorrelate them. this can be done by using different random seeds for the velocity command and fix langevin, randomization of initial positions with displace_atoms, completely different initial geometries, or a combination of those. a common practice is to do an initial equilibration run and collect snapshots regularly (say every 10000 steps) and then do a small random displacement and a new velocity initialization. you can re-equilibrate those runs until they are sufficiently decorrelated and then have your set of independent equilibrated restarts. this simulation protocol is similar to the enhanced sampling methods using parallel replicas developed in the group of art voter (study his publications if you want to learn more about how to decorrelate restarts and enhance sampling through concurrent simulations).
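The snapshot-and-perturb protocol described above can be sketched as a LAMMPS input fragment. All file names, seeds, and parameter values here are illustrative assumptions, not part of the original advice:

```
# --- stage 1: equilibrate and collect snapshots (values are placeholders) ---
fix            nvefix all nve
fix            lang   all langevin 1.0 1.0 1.0 482794
restart        10000 equil.*.restart     # write a restart every 10000 steps
run            100000

# --- stage 2: for each replica, start from one snapshot and decorrelate ---
# read_restart   equil.50000.restart
# displace_atoms all random 0.01 0.01 0.0 90210       # small random displacement
# velocity       all create 1.0 87287 dist gaussian   # fresh velocity initialization
# run            50000                                # re-equilibrate until decorrelated
```

Stage 2 is shown commented out because it belongs in a separate input script, one per replica, each with its own displace_atoms and velocity seeds.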