Dear LAMMPS users,
I want to calculate, over time, the sum of the squared differences between the current atom coordinates in the simulation box and coordinates saved at an earlier step, using:
double **x = atom->x;
double *x0 = request_vector(step);   // reference coordinates saved earlier

int n = 0;
double y[3], dx, dy, dz;
double dxdotdxme = 0.0;              // per-processor partial sum

for (int i = 0; i < nlocal; i++) {
  if (mask[i] & groupbit) {
    y[0] = y[1] = y[2] = 0.0;
    domain->unmap(x[i],image[i],y);  // unwrap the current coordinates
    dx = y[0] - xc[0] - x0[n];
    dy = y[1] - xc[1] - x0[n+1];
    dz = y[2] - xc[2] - x0[n+2];
    dxdotdxme += dx*dx + dy*dy + dz*dz;
  }
  n += 3;
}

double dxdotdxall = 0.0;
MPI_Allreduce(&dxdotdxme,&dxdotdxall,1,MPI_DOUBLE,MPI_SUM,world);
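To be explicit, the quantity I am after is the sum over the group of the squared displacement of each atom's unwrapped position (relative to the reference point xc) from its saved value, i.e. something like

  \Delta = \sum_{i \in \mathrm{group}} \left| \mathbf{u}_i(t) - \mathbf{x}_c - \mathbf{x}_i^0 \right|^2

where u_i(t) is the unwrapped coordinate of atom i at the current step and x0_i is the value stored earlier.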
Here x is the 2D array storing the current coordinates in LAMMPS, while x0 is a saved 1D vector holding the unwrapped atom coordinates (relative to the reference point xc), which I fill at each time step by:
double *x0 = request_vector(istep-1);   // storage for this step's reference coordinates
int n = 0;
double y[3];

for (int i = 0; i < nlocal; i++) {
  if (mask[i] & groupbit) {
    y[0] = y[1] = y[2] = 0.0;
    domain->unmap(x[i],image[i],y);     // unwrap and shift by the reference point xc
    x0[n]   = y[0] - xc[0];
    x0[n+1] = y[1] - xc[1];
    x0[n+2] = y[2] - xc[2];
  }
  n += 3;                               // three slots per local atom, in local-atom order
}
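So the layout of x0 is purely positional: three slots are reserved for every local atom, in whatever order the atoms happen to be stored on that processor at that step, and only the slots belonging to group atoms are actually written. Schematically, for local atoms 0, 1, 2, ...:

  x0 = { u0x-xcx, u0y-xcy, u0z-xcz,  u1x-xcx, u1y-xcy, u1z-xcz,  u2x-xcx, ... }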
The code gives correct results on one core. However, when I run it in parallel, results such as dxdotdxall differ from the one-core values whenever atoms are exchanged between cores. As I understand it, LAMMPS uses domain decomposition to divide the simulation box, so the set and order of atoms owned by each processor change over time, and I suspect that the slot x0[n] saved for an atom no longer corresponds to the same atom after it migrates. Is it possible to use particle decomposition for the MPI parallelization instead? What can I do to avoid the error caused by atom exchange?
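One idea I have been considering (only a sketch, not tested; it assumes atom IDs are contiguous from 1 to natoms and that the group membership does not change) is to store the reference coordinates in a global array indexed by the atom tag and reduce it across all processors once at the save step, so that every processor can later look up the reference values for whichever atoms it currently owns:

tagint *tag = atom->tag;
bigint natoms = atom->natoms;

// per-tag reference coordinates; memory scales with the total atom count
double *x0mine = new double[3*natoms]();   // zero-initialized local contribution
double *x0all  = new double[3*natoms];

for (int i = 0; i < nlocal; i++) {
  if (mask[i] & groupbit) {
    double y[3] = {0.0,0.0,0.0};
    domain->unmap(x[i],image[i],y);
    bigint m = 3*((bigint)tag[i] - 1);     // slot chosen by global ID, not local index
    x0mine[m]   = y[0] - xc[0];
    x0mine[m+1] = y[1] - xc[1];
    x0mine[m+2] = y[2] - xc[2];
  }
}
MPI_Allreduce(x0mine,x0all,(int)(3*natoms),MPI_DOUBLE,MPI_SUM,world);

// later, on any processor that owns atom i:
//   bigint m = 3*((bigint)tag[i] - 1);
//   dx = y[0] - xc[0] - x0all[m];  ... and so on for dy, dz

Would something along these lines be a reasonable way to make the saved data independent of which processor owns an atom, or does LAMMPS provide a standard mechanism for making per-atom arrays migrate with the atoms? Many thanks!!!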
Sincerely,
Vivian