custom fix: how are my variables communicated?

Hello everyone,

I skimmed through some user fixes and was wondering how custom
variables are communicated across processors, since I couldn't find any
explicit MPI commands for this.
Just an example: fix_ttm.cpp

There are several 3d arrays defined via memory->create(…);
I guess this is where the finite-difference grid is created.

Then, starting at approx. line 432, the heat-diffusion equation is solved on that grid:


// compute new electron T profile

for (int ixnode = 0; ixnode < nxnodes; ixnode++)
  for (int iynode = 0; iynode < nynodes; iynode++)
    for (int iznode = 0; iznode < nznodes; iznode++) {
      int right_xnode = ixnode + 1;
      int right_ynode = iynode + 1;
      int right_znode = iznode + 1;
      if (right_xnode == nxnodes) right_xnode = 0;
      if (right_ynode == nynodes) right_ynode = 0;
      if (right_znode == nznodes) right_znode = 0;
      int left_xnode = ixnode - 1;
      int left_ynode = iynode - 1;
      int left_znode = iznode - 1;
      if (left_xnode == -1) left_xnode = nxnodes - 1;
      if (left_ynode == -1) left_ynode = nynodes - 1;
      if (left_znode == -1) left_znode = nznodes - 1;
      T_electron[ixnode][iynode][iznode] = …

and so on…
it seems like e.g. the nodes “0” and “nxnodes-1” serve as ghost-layers.
But I don’t see where the actual communication is issued.

So my questions are:

1. Where is the communication hidden? (I couldn't find anything in
memory.cpp or comm.cpp.)

2. Is there a way to force an additional communication in between these
'automated' communications?

best regards,
frank.

This is a bad example, since the TTM grid is not decomposed. Instead, the entire grid is computed by each MPI task. Contributions to a grid point from atoms owned by different MPI tasks are handled using MPI_Allreduce() operations. In that sense, fix ttm functions a lot like fix nvt, but with more variables. In fact, most variables internal to fixes are handled in this way. The exceptions are fixes that create per-atom variables. The best example of this is fix store, which is used by other components of LAMMPS, such as compute msd.
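
To illustrate the pattern (a minimal sketch, not the actual fix_ttm code; the array and variable names here are made up): every task accumulates the contributions of its own atoms into a local copy of the whole grid, and a single MPI_Allreduce() then produces the complete sum on all tasks.

#include <mpi.h>
#include <vector>

// Minimal sketch: each MPI task holds the *whole* grid (nxyz values),
// adds contributions only from the atoms it owns, and an MPI_Allreduce()
// sums the partial grids so every task ends up with the same complete
// result. Names (nxyz, nlocal, node_of_atom, weight) are illustrative.
// grid must already be sized to nxyz on entry.
void accumulate_grid(int nxyz, int nlocal,
                     const int *node_of_atom, const double *weight,
                     std::vector<double> &grid, MPI_Comm world)
{
  std::vector<double> local(nxyz, 0.0);

  // contributions from atoms owned by this task only
  for (int i = 0; i < nlocal; i++)
    local[node_of_atom[i]] += weight[i];

  // replicated sum: afterwards grid[] is identical on every task
  MPI_Allreduce(local.data(), grid.data(), nxyz,
                MPI_DOUBLE, MPI_SUM, world);
}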

Aidan

Thank you for your response, Aidan.
So there isn't really any communication between neighboring processors in the way I expected. Isn't it a waste of computation time if every single processor computes the whole grid, or is that fine?

Are there any "auxiliary" methods in LAMMPS to communicate only across the boundaries?

Best regards, frank.

Yes, it is wasteful, and it doesn't scale. But for typical use cases, fix ttm is like a flea on the elephant of the interatomic potential force calculation. That is why I directed you to examine fix store. Each MPI process only stores the values associated with the atoms that it owns. When an atom migrates to another process (which only occurs on reneighboring timesteps), the values are migrated along with it.
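
To make "the values live with the atoms that own them" concrete, here is a rough sketch (illustrative only, not the actual FixStore code; all names are made up): each task keeps one entry per atom it currently owns, indexed by the local atom index, and when an atom leaves on a reneighboring step its entry is handed to the exchange buffer and the local slot is compacted.

#include <vector>

// Illustrative only -- not the actual FixStore implementation.
struct PerAtomStore {
  std::vector<double> value;     // one value per atom owned by this task

  // remove the entry of a departing atom: return its value so the caller
  // can pack it into the exchange buffer, then backfill the hole with the
  // last entry so the array stays densely indexed by local atom index
  double remove_local(int i) {
    double v = value[i];
    value[i] = value.back();
    value.pop_back();
    return v;
  }

  // append the value of an atom that has just arrived from another task
  void add_migrated(double v) { value.push_back(v); }
};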

Aidan

…and to elaborate on the communication bit: LAMMPS doesn't do the communication to ghost atoms or the exchange of per-atom data across neighboring boundaries for each fix independently. Instead, the data from all fixes are combined into buffers and then the buffers are communicated. The subroutines involved are the FixXXX::pack_*() and FixXXX::unpack_*() methods.
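
As a sketch of that combined-buffer idea (hypothetical standalone code, not taken from LAMMPS; the real callbacks are the pack_exchange()/unpack_exchange()-style methods of the Fix classes): each fix appends its per-atom values for a migrating atom to one shared buffer and reads them back on the receiving task, so the data of all fixes travel in the same message.

#include <vector>

// Sketch of the combined-buffer pattern (hypothetical names). The core
// exchange code asks every fix to append its per-atom data for a
// migrating atom to a single buffer, sends that buffer once, and then
// asks every fix to consume its portion on the receiving task.
struct MyFixData {
  std::vector<double> value;                  // one value per owned atom

  // append this fix's data for local atom i; return # of doubles written
  int pack_exchange(int i, double *buf) const {
    buf[0] = value[i];
    return 1;
  }

  // read this fix's data for a newly received atom stored at local index
  // nlocal_new; return # of doubles consumed from the buffer
  int unpack_exchange(int nlocal_new, const double *buf) {
    if ((int) value.size() <= nlocal_new) value.resize(nlocal_new + 1);
    value[nlocal_new] = buf[0];
    return 1;
  }
};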

HTH,
axel.