Fix gcmc produces different results with different numbers of cores/nodes

Hi,

I am using lammps-7Aug19 to run my simulation, with fix gcmc to insert molecules. Using a different number of nodes (or cores) results in a different structure. This is not due to randomness, since I can reproduce exactly the same result when I repeat the simulation with a fixed number of nodes and cores. I was wondering if this is expected and, if so, why, or whether there is a problem with the parallelization of fix gcmc.

I appreciate any help.

Thanks,
Ali

Even non-GCMC simulations will eventually diverge when you change the number of MPI ranks, because that changes the order in which forces are summed; since floating-point math is not associative, this leads to small, exponentially growing differences between the trajectories. This will happen even with the same number of MPI ranks if you change the settings for neighbor-list builds and/or atom sorting. How long it takes for the divergence to become visible depends on multiple factors; it is more rapid when the simulation cell is rescaled (e.g. via fix npt) or when you are using high temperatures and/or "stiff" force field parameters.
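
To make the non-associativity point concrete, here is a minimal standalone C++ sketch (not LAMMPS code; the array of "force contributions" is purely illustrative) showing that summing the same values in a different order gives a result that differs in the last bits:

#include <cstdio>
#include <random>
#include <vector>

int main() {
    // hypothetical per-atom force contributions (illustrative values only)
    std::mt19937 gen(12345);
    std::uniform_real_distribution<double> dist(-1.0, 1.0);
    std::vector<double> contrib(100000);
    for (double &c : contrib) c = dist(gen);

    // sum in the original order
    double forward = 0.0;
    for (double c : contrib) forward += c;

    // sum in reverse order, standing in for a different domain decomposition
    double backward = 0.0;
    for (auto it = contrib.rbegin(); it != contrib.rend(); ++it) backward += *it;

    // the two totals typically differ in the last bits of the mantissa;
    // in MD such tiny differences are amplified step by step until the
    // trajectories visibly diverge
    std::printf("forward  = %.17g\n", forward);
    std::printf("backward = %.17g\n", backward);
    std::printf("delta    = %.3g\n", forward - backward);
    return 0;
}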

axel.

There are two pseudo-random number sequences used to generate the trial moves. One of them is independent of the number of MPI tasks and so is strictly synchronized across MPI tasks without communication. This ensures that the same Monte Carlo moves are selected by each processor. The second sequence is different on each MPI task (although it is initialized with the same seed). I think it should be possible to modify the code to also keep the second sequence synchronized across MPI tasks without communication, but this is not considered important, since all sequences are equally valid. The same thing happens with e.g. the Langevin thermostat, although, unlike in the case of GCMC, this is documented.
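
As an illustration of the idea (a standalone C++ sketch that simulates MPI ranks as separate generator objects, not the actual fix gcmc code), a stream stays synchronized without communication only if every rank performs exactly the same number of draws per step; a stream whose draw count depends on rank-local data drifts apart even when seeded identically:

#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int nranks = 4;      // pretend MPI ranks
    const int seed = 482794;   // same seed on every rank

    std::vector<std::mt19937> sync_rng, local_rng;
    for (int r = 0; r < nranks; ++r) {
        sync_rng.emplace_back(seed);
        local_rng.emplace_back(seed);
    }
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    // per-rank draw counts, e.g. proportional to the number of owned atoms
    const int local_draws[nranks] = {3, 5, 2, 7};

    for (int step = 0; step < 10; ++step) {
        // synchronized stream: exactly one draw per rank per step
        for (int r = 0; r < nranks; ++r) uni(sync_rng[r]);
        // unsynchronized stream: draw count differs from rank to rank
        for (int r = 0; r < nranks; ++r)
            for (int k = 0; k < local_draws[r]; ++k) uni(local_rng[r]);
    }

    std::printf("next value from the synchronized stream on each rank:\n");
    for (int r = 0; r < nranks; ++r)
        std::printf("  rank %d: %.6f\n", r, uni(sync_rng[r]));
    std::printf("next value from the per-rank stream on each rank:\n");
    for (int r = 0; r < nranks; ++r)
        std::printf("  rank %d: %.6f\n", r, uni(local_rng[r]));
    return 0;
}

The synchronized stream prints the same value on every "rank", so all of them agree on which trial move to attempt; the per-rank stream prints different values, which is harmless for correctness (each sequence is equally valid) but means the result depends on the decomposition.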