Different aggregate numbers when the same pair style dpd calculation is run on CPU vs. GPU

Dear LAMMPS users and developers,

I am running pair style dpd on CPU and GPU and getting different aggregate numbers. On the CPU, even at the 1250th frame there are two well-separated aggregates consisting of 38 and 69 molecules. On the GPU, by the 1067th frame a single aggregate containing all 107 molecules is found; it forms by coalescence of the two aggregates during steps 900-950. The GPU package is used for the GPU calculation, with "tpa 16", and neighbor list building is performed on the CPU in both cases. The total run is 2,500,000 steps with output every 2,000 steps, giving 1250 frames. The LAMMPS version is 24Mar2022, and the GPU is an Nvidia GeForce GTX 1060. Thanks for giving your time.

Molecule composition 2-3-3-3-3
The settings are given below.

pair_coeff 2 2 25.0 4.5
pair_coeff 2 3 30.0 4.5
pair_coeff 3 3 25.0 4.5
pair_coeff 2 4 35.0 4.5
pair_coeff 3 4 60.0 4.5
pair_coeff 4 4 25.0 4.5
pair_coeff 1 2 34.0 4.5
pair_coeff 1 3 61.0 4.5
pair_coeff 1 4 24.0 4.5
pair_coeff 1 1 26.0 4.5
bond_coeff 1 50.0 0.5
bond_coeff 2 50.0 0.5
angle_coeff 1 30.0 180
angle_coeff 2 30.0 180

The header of the data file is given below.

192227 atoms
4 atom types
344 bonds
2 bond types
258 angles
2 angle types

0.0 40.0 xlo xhi
0.0 40.0 ylo yhi
0.0 40.0 zlo zhi


1 1
2 1
3 1
4 1

The DPD algorithm uses random numbers. Thus any change to the random number seed, the number of processors, the hardware, or the random number generator algorithm will produce a different sequence of random numbers and therefore a different trajectory. Switching from CPU to GPU certainly counts as such a change.
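To illustrate the point (this is a toy Python sketch, not LAMMPS code; random_force_sequence is a hypothetical stand-in for the per-pair random term in the DPD force), the sequence of random numbers depends entirely on the seed, so any change to the seed or to how the stream is generated gives a different force history:

```python
import random

# Hypothetical stand-in for the random term in the DPD pair force:
# the generated sequence is fully determined by the seed.
def random_force_sequence(seed, n=5):
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Same seed -> identical sequence; different seed -> unrelated sequence.
same = random_force_sequence(12345) == random_force_sequence(12345)
diff = random_force_sequence(12345) == random_force_sequence(54321)
print(same, diff)  # prints: True False
```

On the GPU the random stream is additionally affected by how work is partitioned across threads, so even the same seed does not reproduce the CPU stream.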

Individual MD trajectories are rarely reproducible in this sense; usually you need to average over multiple independent runs from different but equivalent initial conditions to get meaningful results.
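A minimal sketch of that averaging, assuming a hypothetical run_observable(seed) that stands in for one full simulation returning some observable (e.g. an aggregate size); here it is faked with noise just to show the bookkeeping:

```python
import random
import statistics

# Hypothetical stand-in for one independent simulation run with a given
# seed; a real workflow would launch LAMMPS and post-process the output.
def run_observable(seed):
    rng = random.Random(seed)
    return 100.0 + rng.gauss(0.0, 5.0)  # toy "aggregate size" with noise

# Average the observable over several independent runs and report the
# standard error, which is what makes the result statistically meaningful.
values = [run_observable(seed) for seed in range(8)]
mean = statistics.mean(values)
stderr = statistics.stdev(values) / len(values) ** 0.5
print(f"{mean:.1f} +/- {stderr:.1f}")
```

Comparing CPU and GPU should then be done on such averaged observables, not on single trajectories.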

Since in MD you are numerically solving coupled non-linear ordinary differential equations (Newton's equations of motion) through finite differences, you have in essence a chaotic system that is subject to the so-called "butterfly effect": the tiniest change, like a single-bit difference in one force computation, can eventually lead to a completely different trajectory. This usually manifests as exponential divergence of simulations after a few thousand to tens of thousands of time steps, even without a model that incorporates random numbers. Mathematically this is also known as Lyapunov instability.
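The exponential divergence can be demonstrated with any chaotic iteration; a common textbook stand-in (not an MD integrator) is the logistic map at r = 4, where a perturbation of one part in 10^12 grows to order one within a few dozen steps:

```python
# Toy demonstration of Lyapunov instability (not LAMMPS, not MD):
# the logistic map x -> r*x*(1-x) is chaotic for r = 4, so two
# trajectories that start 1e-12 apart diverge exponentially.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a = 0.3             # reference trajectory
x_b = 0.3 + 1e-12     # perturbed by one part in 10^12
max_gap = 0.0
for step in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    max_gap = max(max_gap, abs(x_a - x_b))

print(max_gap)  # grows to order 1, despite the tiny initial difference
```

The same mechanism is why a single-bit difference between CPU and GPU arithmetic is enough to send two otherwise identical simulations onto completely different trajectories.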