Question about MPI Speed

Hi Niall and Axel,

Thanks for your replies.

Actually, I am using a pair_style that I wrote myself. When a core calculates forces, it has to use the coordinates of atoms on other cores. Besides, the interactions between atoms are determined by atom index rather than by distance or anything else. That's why I share the coordinates at each time step with an allreduce. (I am trying to change it to an allgatherv.)
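
For illustration, the change I have in mind is roughly the following sketch (the function and variable names are made up, not from my actual code):

    // Hypothetical sketch: gather each rank's coordinate block with
    // MPI_Allgatherv instead of zero-filling a global array and summing
    // it with MPI_Allreduce.
    #include <mpi.h>
    #include <vector>

    // xlocal: 3*nlocal doubles owned by this rank
    // xall:   receives 3*natoms doubles on every rank, ordered by rank
    void share_coords(const std::vector<double> &xlocal,
                      std::vector<double> &xall, MPI_Comm comm)
    {
      int nprocs;
      MPI_Comm_size(comm, &nprocs);

      // first let every rank know how many doubles the others contribute
      int nmine = static_cast<int>(xlocal.size());
      std::vector<int> counts(nprocs), displs(nprocs);
      MPI_Allgather(&nmine, 1, MPI_INT, counts.data(), 1, MPI_INT, comm);

      int total = 0;
      for (int i = 0; i < nprocs; ++i) { displs[i] = total; total += counts[i]; }
      xall.resize(total);

      // then collect all blocks; each rank ends up with every coordinate
      MPI_Allgatherv(xlocal.data(), nmine, MPI_DOUBLE,
                     xall.data(), counts.data(), displs.data(),
                     MPI_DOUBLE, comm);
    }

One complication is that the gathered array is ordered by rank, so a map from (rank, local index) back to global atom index is still needed.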

Since other existing pair_styles might face the same problem (although those styles use a cutoff range), does LAMMPS use a better way to share coordinates between cores?

Best regards,

Yilian


no. there are two possibilities:
- you have a pathological case of a model that cannot be efficiently
parallelized (which begs the question why one would want to implement
it), or
- your implementation is bad and does not take advantage of LAMMPS'
domain decomposition (which begs the question why you want to
implement it into LAMMPS in the first place)

since you didn't explain why you believe that you need to follow such
an inefficient implementation path, there is little else to comment on.

axel.

yes, for short-range forces you don’t need an AllReduce.
LAMMPS only communicates with neighbor processors
to get coordinates of nearby atoms.

Steve

Axel,

Thanks. You are right. I guess what I can do is polish my code to improve performance. I will try to write another pair_style using LAMMPS's domain decomposition later.

Best,

Yilian

Hi Steve,

Thank you. The “neighbor processor” idea is inspiring. I will think about this and see what I can do.

Best regards,

Hi Steve,

Could you please tell me in which part LAMMPS shares the coordinates of atoms? Which MPI command does LAMMPS use in this process?

In pair_styles (lj/cut, for instance), LAMMPS gets the coordinates of atom i and atom j directly from x[][], which seems to store only the coordinates of atoms in each processor's own domain. How can this work when atom j belongs to another processor?

Best regards,

Yilian Yan

Hi Steve,

Could you please tell me in which part LAMMPS shares the coordinates of atoms?

there is not a single part. you have to distinguish between steps
where the neighbor lists are rebuilt and steps where they are not. in
the first case you have comm::borders() and comm::exchange(). for all
other steps there is simply comm::forward_comm(), which updates the
positions of the ghost atoms without any redistribution.
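
in pseudo-code, the per-timestep logic looks roughly like this (a paraphrase for illustration, not the actual source):

    // rough paraphrase of the per-timestep communication logic
    if (neighbor_lists_need_rebuild) {
      comm->exchange();     // migrate atoms that moved out of the local sub-domain
      comm->borders();      // rebuild the list of ghost atoms to be received
      // ... neighbor lists are rebuilt afterwards ...
    } else {
      comm->forward_comm(); // only refresh positions of the existing ghost atoms
    }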

Which MPI command does LAMMPS use in this process?

there are multiple MPI commands involved, but mostly MPI_Send(),
MPI_Irecv() and MPI_Sendrecv().
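
the pattern is a point-to-point exchange with neighboring sub-domains only. stripped of all LAMMPS specifics, one swap along one dimension of the decomposition looks roughly like this (buffer and rank names are made up for illustration; the receive buffers are assumed to be sized in advance from the border setup):

    // generic halo-exchange sketch, not LAMMPS code: swap boundary-atom
    // coordinates with the left and right neighbor ranks along one
    // dimension of the domain decomposition.
    #include <mpi.h>
    #include <vector>

    void swap_along_dim(const std::vector<double> &send_left,
                        const std::vector<double> &send_right,
                        std::vector<double> &recv_left,
                        std::vector<double> &recv_right,
                        int left_rank, int right_rank, MPI_Comm comm)
    {
      // send to the left neighbor while receiving from the right neighbor
      MPI_Sendrecv(send_left.data(), (int) send_left.size(), MPI_DOUBLE,
                   left_rank, 0,
                   recv_right.data(), (int) recv_right.size(), MPI_DOUBLE,
                   right_rank, 0, comm, MPI_STATUS_IGNORE);
      // and the mirror image: send right, receive from the left
      MPI_Sendrecv(send_right.data(), (int) send_right.size(), MPI_DOUBLE,
                   right_rank, 0,
                   recv_left.data(), (int) recv_left.size(), MPI_DOUBLE,
                   left_rank, 0, comm, MPI_STATUS_IGNORE);
    }

no collective over all ranks is involved, so the cost per step scales with the local surface area, not with the total number of atoms or processors.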

In pair_styles (lj/cut, for instance), LAMMPS gets the coordinates of atom i
and atom j directly from x[][], which seems to store only the coordinates of
atoms in each processor's own domain. How can this work when atom j belongs to
another processor?

that is not a correct assessment. x[][] contains positions of atoms
from the local domain and atoms from neighboring domains (aka ghost
atoms).
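
in simplified form (details and exact variable names may differ by version), a pair style loops over a neighbor list like this, and j may refer to either a local or a ghost atom:

    // simplified sketch of the loop structure in pair styles like lj/cut:
    // j < nlocal means a local atom, j >= nlocal a ghost atom, and x[j]
    // is valid in both cases because forward communication filled in the
    // ghost coordinates.
    double **x = atom->x;
    int nlocal = atom->nlocal;

    for (int ii = 0; ii < list->inum; ii++) {
      int i = list->ilist[ii];
      int *jlist = list->firstneigh[i];
      for (int jj = 0; jj < list->numneigh[i]; jj++) {
        int j = jlist[jj];
        j &= NEIGHMASK;                  // strip special-bond bits from the index
        double delx = x[i][0] - x[j][0]; // works whether j is local or a ghost
        double dely = x[i][1] - x[j][1];
        double delz = x[i][2] - x[j][2];
        // ... compute and accumulate forces from delx, dely, delz ...
      }
    }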

this is all explained in the LAMMPS paper.

axel.