Hello,

I am trying to understand how the QEq algorithm works by looking at the compute_charge method in pair_reax.cpp. There is a piece of code inside the neighbor loop that constructs the A matrix (aval, acol_ind, etc.) that I cannot understand:

// avoid counting local-ghost pair twice since
// ReaxFF uses half neigh list with newton off
if (j >= nlocal) {
  if (itag > jtag) {
    if ((itag+jtag) % 2 == 0) continue;
  } else if (itag < jtag) {
    if ((itag+jtag) % 2 == 1) continue;
  } else {
    if (zjtmp < zitmp) continue;
    if (zjtmp == zitmp && yjtmp < yitmp) continue;
    if (zjtmp == zitmp && yjtmp == yitmp && xjtmp < xitmp) continue;
  }
}

So, this filters out half of the ghost neighbors, but I cannot see where those ignored neighbors are taken back into account. The code just follows through to the CG solver and that is all... Can somebody help me understand?

Thank you!

S. Alexis Paz

PS: I know that pair_reax.cpp is obsolete and that we should use reax/c instead, but I get the same results with both pair styles (serial lammps), and I just understand pair_reax.cpp better.


this is a bit complicated to explain. you first have to keep in mind the way LAMMPS does domain decomposition: each processor "owns" some atoms (aka "local" atoms) and has copies of atoms from neighboring subdomains/processors (aka "ghost" atoms). next you need to understand the effect of the "newton off" setting. it changes the neighbor list for pairs where one atom is local and the other is a ghost. since the ghost atom is a local atom in another subdomain, such a pair exists twice, even though otherwise the neighbor list contains each pair only once. with "newton off" both copies remain in the neighbor list, the interaction is computed twice, and the result is stored only for the local atom. with "newton on", one of the two copies is removed, the interaction is computed only once, and the resulting force is stored on the local *and* the ghost atom. this requires a so-called "reverse communication" to collect the forces from the ghost atoms back to their corresponding local atoms. in short, you can choose between more computation or more communication.

ok so far? this applies to typical pairwise additive interactions (e.g. lj/cut).

in reax (or reax/c) the neighbor list situation is a bit more confusing. the pair style itself requires "newton on", and that is how the pairwise part of the computation is done; however, for the bonded interactions a "newton off" setting would be much better. so the neighbor list request is made special: a "newton off" neighbor list is created, but in order to do "newton on" style calculations you need to skip over half of the pairs, and do it in a way that doesn't create a load imbalance. this is what the quoted code does. you'll see similar code constructs in other manybody styles, for example where a style requests a full neighbor list (i.e. *all* pairs are listed twice) but still computes the pairwise additive contribution taking advantage of newton's third law, and thus computes each pair's interaction only once.

HTH,

axel.

Thank you Axel, I think I understood. So a pair is filtered out on the local processor because another processor will include it and communicate back the results. I presume this communication happens inside the parallel CG method, is that right? And my last doubt: how does this work with PBC in a serial version of lammps? There is no other processor to compute the pair, but the filter seems to be there anyway.

Thank you again, I really appreciate your help.

Yes, this is correct. From a linear algebra perspective, this piece of code is just taking advantage of the fact that the A matrix is symmetric. Even when LAMMPS is compiled in serial, ghost atoms are still generated to handle interactions across periodic boundaries, so the filter applies there too.