Loop over all atoms in the simulation box for a new pair potential

Hello,

I am trying to write a new pair potential. For each atom I, I want a sub-loop that goes over all atoms J in the simulation box, regardless of the distance between atoms I and J. In this case the usual (full) neighbor list cannot be used.

  • For serial calculations, we can first store the positions of all atoms and then, for each atom I, loop over all atoms in the simulation box:
memory->create(dp, natoms, 3, "pair:dp");    // natoms x 3 dimensions
memory->create(sum, natoms, "pair:sum");

// Store the positions of all atoms in the simulation box
for (ii = 0; ii < inum; ii++) {
    i = ilist[ii];
    itag = tag[i] - 1;
    for (beta = 0; beta < 3; beta++)  dp[itag][beta] = x[i][beta];
}

// For each atom i, calculate the sum of the positions of all atoms
// over the three dimensions
for (ii = 0; ii < inum; ii++) {
    i = ilist[ii];
    itag = tag[i] - 1;
    sum[itag] = 0;
    for (j = 0; j < natoms; j++) {
       for (beta = 0; beta < 3; beta++) sum[itag] += dp[j][beta];
    }
}
  • For parallel calculations, this gives errors because of the partitioning. An alternative is to loop directly over all atoms for each atom I:
memory->create(sum, natoms, "pair:sum");

// For each atom i, calculate the sum of the positions of all atoms
// over the three directions
for (ii = 0; ii < inum; ii++) {
    i = ilist[ii];
    itag = tag[i] - 1;
    sum[itag] = 0;
    for (jj = 0; jj < inum; jj++) {
        j = ilist[jj];
        for (beta = 0; beta < 3; beta++) sum[itag] += x[j][beta];
    }
}

The problem is that, because of the partitioning, ilist only contains the atoms assigned to the local core, not the full atom list of the simulation. Communication between the different cores does not seem trivial.

May I ask if there is a way to achieve this? Thanks in advance for your help.

Best regards,
Yongliang Ou

LAMMPS is designed to parallelize well for range-limited potentials by using domain decomposition and neighbor lists.

If you want to loop over all atoms, you are better off creating your own MD code that does not require neighbor lists and uses particle decomposition instead of domain decomposition. Please be aware that your algorithm has O(N**2) scaling with system size and that particle decomposition has poor strong-scaling behavior due to the need for collective communication.

Forcing LAMMPS to do this does not make much sense.

I agree with everything Axel has said, but this:

"Forcing LAMMPS to do this does not make much sense."

is not correct. It is reasonably straightforward MPI programming to gather the total number of atoms, allocate suitably sized arrays on all processors, have each processor fill in the positions of its local atoms, and then gather those positions all-to-all to give each processor a complete tag-sorted position list.
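
A minimal sketch of that recipe inside a pair style's compute(), assuming atom IDs are contiguous from 1 to natoms (the array name xall is made up here; atom->natoms, atom->nlocal, atom->tag, memory->create, and the world communicator are standard LAMMPS members). One simple variant zero-fills a global array and lets MPI_Allreduce do the all-to-all gather:

double **xall;
int natoms = (int) atom->natoms;
memory->create(xall, natoms, 3, "pair:xall");

// zero the full array, then fill in the positions of the local atoms only
for (int n = 0; n < natoms; n++)
    xall[n][0] = xall[n][1] = xall[n][2] = 0.0;
for (int i = 0; i < atom->nlocal; i++) {
    int n = (int) (atom->tag[i] - 1);   // tag-sorted index
    xall[n][0] = atom->x[i][0];
    xall[n][1] = atom->x[i][1];
    xall[n][2] = atom->x[i][2];
}

// each entry is nonzero on exactly one rank, so summing across ranks
// leaves the complete, tag-sorted position list on every processor;
// memory->create() 2d arrays are contiguous, so &xall[0][0] spans all
// 3*natoms doubles
MPI_Allreduce(MPI_IN_PLACE, &xall[0][0], 3*natoms, MPI_DOUBLE, MPI_SUM, world);

An MPI_Allgatherv of only the local atoms, followed by a reordering by tag, would avoid communicating the zero entries, at the cost of bookkeeping the per-rank counts; the Allreduce version is simply the shortest to write.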

Of course your eventual code will be incredibly inefficient in memory and communication usage relative to LAMMPS’s inbuilt system, but the writing of it isn’t hard. It’s a bit like building a horse-drawn chariot from a Tesla electric car – it’s hard to see the use of it, but the engineering is easy.

EDIT: See for example compute_position.cpp (commit d48ded1f2102dc34b3053827304dc8a79cc5b682) in the omoultosEthTuDelft/OCTP repository on GitHub.