I’ve been looking through some of the LAMMPS source code, and I noticed that atom->map() takes a global atom ID and converts it to a local index. For a fix I’m working on, I would like to do the inverse: if I know a local index (owned or ghost), is there a function that will give me the global identifier, or tell me which processor the atom in question resides on? (I’m thinking of the case where I know a ghost index on one processor and would like a way to access the owned atom on its home processor.)
actually, Atom::map() is the inverse operation. what you are asking
for is the Atom::tag property: atom->tag[i] gives the global ID of
the local (or ghost) atom with index i.
there is only one atom with a given tag that is also a local atom.
however, if you need to communicate information from ghost atoms back
to their originals, you'd better not do this on a case-by-case basis,
but rather use the provided reverse communication methods.
I had a look at the comm->reverse methods, and while they seem to make sense, I'm not getting things to work correctly. What I'm trying to do is communicate charges on ghost atoms back to their originals in a custom fix I'm writing. Is there some simple example code somewhere that covers this kind of situation?
The reverse methods typically sum the values on the ghost atoms
back onto the original atom. If you don’t want to sum, you need
to write your callback methods accordingly.
Do you mean the pack/unpack_reverse_comm() functions? I tried borrowing the style for those from fix_comb.cpp as nearly as I understood it, and set their behavior to overwrite rather than add, but I’m getting odd results out still.
To make things clearer, I’ve attached the relevant functions in a code snippet. The function I’m trying to debug is shift_proton(). I’m using it as part of a larger simulation of the Azzouz-Borgis system; the QCPI algorithm I use in that simulation needs to move atoms outside of integration steps, which is part of what shift_proton() does, and the movement by itself seems to be working correctly (since I only ever move the middle atom of the angle set). However, the Azzouz-Borgis model has a position-dependent charge on the H-transfer complex to model polarization, and what I’m finding is that in certain configurations one of the angle atoms is a ghost, owned by a different processor (I use an angle type to organize this hydrogen-transfer cluster since it’s the only 3-atom collection in the model).
So the charge gets calculated correctly and resets the ghost atom’s value, but when I try to communicate that back, something goes wrong: I either end up with the wrong charge on the ‘official’ copy of the ghost atom, or I get all my charges scrambled (when using comm->reverse as it currently is in that code snippet).
I’d appreciate it if someone could take a quick look and suggest how I can correctly back-communicate my charge values.
Each real atom has potentially many ghost copies. If you’re not
summing the values of the ghost copies (like a force calculation would),
but overwriting the real atom with one of the ghost copy values,
how do you choose which one? And if it doesn’t matter, why
do you need to communicate the ghost values back to the
real atom at all, if they haven’t changed?
Just to make it clear, fix qeq/comb uses "forward" communication to
overwrite charges on ghost atoms with iteratively updated charges on
local atoms, not the other way around as you intended.
The answer to those questions has to do with the way I’m altering the atom charge in the shift_proton function. Since I’m using an angle group to organize my calculation, the correct positions and charges are only updated for the atoms of the processor which owns that group.
If all the atoms are locally owned atoms on that processor, there’s no problem at all, since the positions and charges are correct and get communicated to the ghosts on any other procs during the next neighbor update phase.
The problem I’m running into occurs when this angle group I’m using is straddling a processor boundary. Then, the proc that owns the angle group only owns two of the three atoms locally, while the third is a ghost. So when I go to update neighbors, the positions and charges are correctly communicated for the two owned atoms, but the ghost atom is incorrectly handled since the now-correct ghost value gets overwritten by its now-incorrect value from the neighboring proc that owns it locally.
So what I need to do is communicate the ghost charge value of the third angle atom from the process that owns the angle group (all of whose values are correct after execution of shift_proton()) back to the process that actually owns that third atom locally. If I can do that, any other ghosts will be updated correctly in the re-neighboring phase.
what is an "angle group"? also, there is no shift_proton function in lammps.
but there is an inconsistency. if you depend on the neighbor list
updates, you have to enforce a neighbor list update after every one of
these manipulations, otherwise you have inconsistent data on different
processors and all hell will break loose.
Hope that helps clarify my issue.
not really. i think you are making the common mistake of explaining
and discussing technical details while failing to explain the
overall approach and purpose. it is hard to argue about details that
nobody knows exactly but you, since we don't see what you see (your
code) and don't know what you know (your plan/method).