# finding the closest atoms across processors

Hi all,

I am trying to improve my parallel implementation - one aspect of the
current algorithm requires finding the closest atom to each atom
currently being investigated. The pseudocode looks something along the
lines of:

double **x = atom->x;
for (int ii = 0; ii < inum; ii++) {
  int i = ilist[ii];
  int *jlist = firstneigh[i];
  int jnum = numneigh[i];
  // index of the closest atom found so far
  int jclose = -1;
  double rclosest = 10000.0;
  for (int jj = 0; jj < jnum; jj++) {
    int j = jlist[jj];

    // squared distance between i and j
    double delx = x[i][0] - x[j][0];
    double dely = x[i][1] - x[j][1];
    double delz = x[i][2] - x[j][2];
    double rsq = delx*delx + dely*dely + delz*delz;

    if (rsq < rclosest) {
      // remember the closest neighbor and its distance
      jclose = j;
      rclosest = rsq;
      // further magic
      [...]
    }
  }
}

This seems to work fine as long as i and the respective jclose lie
on the same processor. The moment they are on different ones it
behaves weirdly. I tried creating a simulation with a box of
[0, 0, 1, 1], where there is a circle of clustered atoms in the middle
at [0.5, 0.5] and the others are spread out to the corners, running on
4 processors. For the atoms lying in the center circle, the closest
reported atom will be one in a corner that lies on the same processor,
rather than the 'real' closest one that lies on another processor.

I am thinking of a different approach where the closest distance and
the index of the closest neighbour for each real atom will be sent
back through reverse communication for the ghost atoms. This
information can then be used to find the closest atom for each
particle, which should work across processors (I am currently still
designing the algorithm for this).

Are there other aspects of LAMMPS I have overlooked that have this
capability? I am currently looking at fix bond/create, but I'm not
quite sure it will give me what I want.

Many thanks,
QT

since you are using a neighbor list, if the atom of interest is not on
the current processor, you have requested a neighbor list with too
short a cutoff. LAMMPS will always make sure that sufficient ghost
atoms are present to include all atoms within the neighbor list cutoff
(plus skin).

axel.

there is an important piece of information missing here: what kind of
neighbor list did you request?

axel.

This happens inside a fix, and I have asked for a full neighbour list
inside init():

// need a full neighbor list, built whenever re-neighboring occurs
int irequest = neighbor->request((void *) this);
neighbor->requests[irequest]->pair = 0;
neighbor->requests[irequest]->fix = 1;
neighbor->requests[irequest]->half = 0;
neighbor->requests[irequest]->full = 1;

QT

ok. so a full neighbor list is the right choice, but now you have to
keep in mind that by default the neighbor list cutoff is set to the
largest cutoff of your pair style. if that one has a short cutoff, you
will only get neighbor list entries up to that distance. if you want
to include neighbors farther away, you need to modify your request to
set the "cut" member from 0 to 1 and set the desired neighbor list
cutoff in the "cutoff" member.
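For reference, a sketch of the modified request under the same
old-style API as the snippet above; cutoff_custom is a placeholder for
whatever distance the algorithm needs, not a LAMMPS name:

```cpp
// need a full neighbor list with a custom cutoff;
// "cutoff_custom" is a placeholder value chosen by the fix
int irequest = neighbor->request((void *) this);
neighbor->requests[irequest]->pair = 0;
neighbor->requests[irequest]->fix = 1;
neighbor->requests[irequest]->half = 0;
neighbor->requests[irequest]->full = 1;
neighbor->requests[irequest]->cut = 1;                // enable a per-request cutoff
neighbor->requests[irequest]->cutoff = cutoff_custom; // desired neighbor list cutoff
```

This fragment only compiles inside a LAMMPS fix, so treat it as a
sketch rather than drop-in code.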

this should then be reflected in your output.

axel.

Thanks, Axel. Changing it is indeed reflected in the output, both in
terms of the solution and the computational time. Is finding the right
value something where, in practice, people just choose one that works
well for their problem, rather than having a workflow to determine it?

Thanks,
QT

what is the correct cutoff depends on your model and has to be chosen accordingly.

axel.