# MPI domain decomposition

Dear LAMMPS_users,

I want to calculate the squared displacement of the atom coordinates in the simulation box over time with:

```cpp
double **x = atom->x;
double *x0 = request_vector(step);

for (int i = 0; i < nlocal; i++) {
  y[0] = y[1] = y[2] = 0.0;
  domain->unmap(x[i],image[i],y);
  dx = y[0] - xc[0] - x0[n];
  dy = y[1] - xc[1] - x0[n+1];
  dz = y[2] - xc[2] - x0[n+2];
  dxdotdxme += dx*dx + dy*dy + dz*dz;
  n += 3;
}

double dxdotdxall = 0.0;
MPI_Allreduce(&dxdotdxme,&dxdotdxall,1,MPI_DOUBLE,MPI_SUM,world);
```

Here x is the 2D LAMMPS array storing the current coordinates, while x0 is a saved 1D vector holding the unmapped atom coordinates, filled at each time step by:

```cpp
double *x0 = request_vector(istep-1);

int n = 0;
for (int i = 0; i < nlocal; i++) {
  y[0] = y[1] = y[2] = 0.0;
  domain->unmap(x[i],image[i],y);
  x0[n]   = y[0] - xc[0];
  x0[n+1] = y[1] - xc[1];
  x0[n+2] = y[2] - xc[2];
  n += 3;
}
```

The code runs correctly on one core. However, when I run it in parallel, results such as dxdotdxall differ from the one-core results whenever atoms are exchanged between cores. Since LAMMPS uses domain decomposition to divide the simulation box, is it possible to use particle decomposition for MPI instead? What can I do to avoid the error caused by atom exchange? Many thanks!!!

Sincerely,

Vivian

> Dear LAMMPS_users,
>
> I want to calculate the squared displacement of the atom coordinates in the
> simulation box over time with:

Why not just use the msd compute?

> [quoted code snipped; see the original message above]

> The code runs correctly on one core. However, when I run it in parallel,
> results such as dxdotdxall differ from the one-core results whenever atoms
> are exchanged between cores. Since LAMMPS uses domain decomposition to
> divide the simulation box, is it possible to use particle decomposition for
> MPI instead? What can I do to avoid the error caused by atom exchange?
> Many thanks!!!

No, you cannot do particle decomposition (it is very inefficient anyway).
You have to migrate your original positions along with the atoms.

You could use fix store/state or fix property/atom for that.
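
[Editorial note: Axel's suggestion can be sketched as a LAMMPS input-script fragment. This is an untested sketch based on the documented syntax of fix store/state (N = 0 stores the values once, when the fix is defined; xu/yu/zu are unwrapped coordinates); check the current fix store/state doc page before relying on it.]

```
# save each atom's unwrapped coordinates once, at the step this fix is defined;
# the stored values automatically migrate with their atoms between processors
fix x0 all store/state 0 xu yu zu

# the saved values are then accessible per atom as f_x0[1], f_x0[2], f_x0[3]
```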

But before doing any of that, you need to explain why you can't use the
already existing facilities in LAMMPS.

axel.

Dear Axel,

Thank you for your reply. What I do in my code is similar to the MSD compute. However, I need the dx, dy, dz between the current step and all past steps, not just between consecutive steps. I also need to use dx, dy, dz to compute a penalty force to add to my system, which is why I have added a fix to the LAMMPS source files.

I also wonder whether the msd compute accounts for atoms transferring between different cores. Are the results still correct if an atom moves from one processor to another?

Many thanks!!!
Weiwei

> Dear Axel,
>
> Thank you for your reply. What I do in my code is similar to the MSD
> compute. However, I need the dx, dy, dz between the current step and all
> past steps, not just between consecutive steps. I also need to use dx, dy,
> dz to compute a penalty force to add to my system, which is why I have
> added a fix to the LAMMPS source files.

What you describe is extremely impractical. If you need to store the entire
past trajectory, you will quickly run out of memory as the number of recorded
steps grows, and your simulation will slowly come to a grinding halt.

> I also wonder whether the msd compute accounts for atoms transferring
> between different cores. Are the results still correct if an atom moves
> from one processor to another?

Do you seriously believe that people release software to the public,
especially software that is *meant* to be used in parallel, with the kind of
fundamental errors you insinuate? Do you think that in the many years LAMMPS
has been available with this feature, nobody would have noticed? Think about
how many papers have been published that use the MSD compute in LAMMPS to
compute self-diffusion; can you imagine that nobody has validated those
results?

*OF COURSE* it does this correctly. It uses something similar to the fix
store/state that I already pointed out to you.

axel.

Perhaps there is a reason you need to calculate the MSD while the
simulation is running. If not, then it's probably much cleaner to
calculate the MSD after the simulation is finished. That should be
much easier to do.

I replied because I'm curious. I have code that calculates the MSD (and
various similar quantities) and can divide it into separate contributions
from rotational motion, translational motion, and internal conformational
changes. It reads a trajectory and calculates how the MSD increases over
time. (It also suffers from the O(N^2) running time that Axel alludes to.)
If anyone thinks this code would be useful, I can release it.

Andrew

Incidentally, you can extract coordinates from the dump file using
either pizza.py, or "dump2data.py" (which is located in the
tools/moltemplate/src/ subdirectory in lammps), or your own script.
Example:

```
dump2data.py -raw -multi < DUMP_FILE > coords.txt
```

Optional arguments (see docs_dump2data.txt for details): `-tstart ta`, `-tstop tb`, `-interval n`, `-center`, `-scale x`.

docs_dump2data.txt (6.25 KB)

Dear Andrew,

Thank you for sharing your method. However, I have to calculate the MSD during the simulation, since I need to compute a new force based on the values of dx, dy, and dz.

Many thanks,
Weiwei

> Dear Andrew,
>
> Thank you for sharing your method. However, I have to calculate the MSD
> during the simulation, since I need to compute a new force based on the
> values of dx, dy, and dz.

Why can't you just use vx, vy, and vz? They contain practically the same information; just multiply by dt. Think back to your very first physics classes…

Axel