fix reax/c/species and PBC

Dear Ray,

I think I have fixed the issues with C2 and the memory leaks. Now the only memory block that remains allocated after the program finishes is the pos pointer (the corresponding file is not closed in the FixReaxCSpecies destructor when singlepos_opened is set - is that intentional? In any case, it is not very important).

Comment on the "blue PETN" problem. The correction of atomic coordinates related to PBC is done as follows. The position of the first atom encountered in each molecule that spans a periodic boundary is chosen as the "anchor point", and the coordinates of all other atoms of that molecule are compared with it. If the difference between the x, y, or z coordinate of a particular atom and that of the anchor point is larger than half the box size along that axis, the corresponding coordinate of the atom is corrected (see the sketch below). This procedure is quite simple, and it only gives errors if a molecule crosses a periodic boundary and at the same time is larger than half the box size along that axis (as the PETN molecules in a unit cell are). I think this is acceptable, because it is not very interesting exactly where a molecule sits in the cell if it occupies more than half of it. So it is not a bug but a feature of the implementation.
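
To make the logic concrete, here is a minimal standalone sketch of the unwrapping step (it is not the code from the fix itself; the names, the orthogonal box, and the two-atom test molecule are illustrative only):

// Minimal standalone sketch of the anchor-point unwrapping described above.
// Not the code from the fix; names and numbers are illustrative only.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Atom { double x[3]; };

// Unwrap all atoms of one molecule relative to its first ("anchor") atom.
// box[d] is the box length along dimension d (orthogonal box assumed).
void unwrap_molecule(std::vector<Atom> &mol, const double box[3]) {
  if (mol.empty()) return;
  const double *anchor = mol[0].x;
  for (std::size_t i = 1; i < mol.size(); ++i) {
    for (int d = 0; d < 3; ++d) {
      double dx = mol[i].x[d] - anchor[d];
      // If the separation exceeds half the box, the atom was wrapped across
      // the boundary: shift it by one box length toward the anchor.
      if (dx > 0.5 * box[d]) mol[i].x[d] -= box[d];
      else if (dx < -0.5 * box[d]) mol[i].x[d] += box[d];
    }
  }
}

int main() {
  double box[3] = {10.0, 10.0, 10.0};
  // Two-atom "molecule" split across the x boundary.
  std::vector<Atom> mol = { {{9.8, 5.0, 5.0}}, {{0.3, 5.0, 5.0}} };
  unwrap_molecule(mol, box);
  std::printf("unwrapped second atom: %.1f %.1f %.1f\n",
              mol[1].x[0], mol[1].x[1], mol[1].x[2]);   // prints 10.3 5.0 5.0
  return 0;
}

As noted above, this test breaks down when the molecule itself is longer than half the box along some axis, because the half-box criterion can then shift an atom that was already on the correct side.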

In fact, I think the main problem with the fix is different. Currently each core creates an nmax*nmax array of doubles for the bonding information. That makes it practically impossible to apply the fix to systems with more than several thousand (maybe tens of thousands of) atoms. Even with the averaging, each atom should have no more than 100-1000 neighbors, so the memory consumption is several orders of magnitude higher than it ideally could be, and it grows as n^2 (a rough estimate is sketched below).
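
As a rough illustration of the scaling (the atom counts below are only example numbers, not taken from a particular run):

// Back-of-envelope estimate of the bond-order storage, showing why an
// nmax x nmax array of doubles grows as n^2. All numbers are examples only.
#include <cstdio>

int main() {
  const double bytes = 8.0;     // sizeof(double)
  const int nmax   = 40000;     // total atoms in the system (example)
  const int nlocal = 5000;      // atoms owned by one core (example)
  const int maxnbr = 1000;      // generous per-atom neighbor bound (example)

  std::printf("nmax   x nmax   : %6.1f GB per core\n", nmax   * (double)nmax   * bytes / 1e9);
  std::printf("nlocal x nmax   : %6.1f GB per core\n", nlocal * (double)nmax   * bytes / 1e9);
  std::printf("nlocal x maxnbr : %6.2f GB per core\n", nlocal * (double)maxnbr * bytes / 1e9);
  return 0;
}

With a per-atom neighbor bound instead of the full nmax, the storage shrinks by another one to two orders of magnitude.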

A new version of fix reax/c/species is attached.

Regards,
Oleg Sergeev,
VNIIA.

reaxc_species_pbc.tar.gz (12.8 KB)

Dear Oleg,

These all look very good - the leaks are fixed and the examples work. In a modification I recently worked on, the abo array of concern has been changed to nlocal by nmax. I can run 40,000 atoms on an 8-core, 12 GB Linux machine, which should be sufficient because one would not want to run more than a couple of thousand atoms per core with ReaxFF anyway, given its computational cost. I will take a closer look at your changes and merge the two versions together.

Many thanks,
Ray

Oleg,

I came across a test problem that makes the modified fix reax/c/species fail - please find the attached tar ball, thanks. The *-old routines are my recent modification to the fix, which changes abo to an nlocal by nmax array, and the *-new routines are the ones with the same changes added on top of your modifications.

Both can run a replication of 4 x 10 x 10 (~23,000 atoms), but the new one failed with a replication of 5 x 10 x 10 while the old one succeeded. The memory usage for both is very similar. May I ask for your opinion as to which part of your modification might be responsible for this failure?

P.S. Please check whether the inclusion of your contribution in the new routines is correct.

Thanks,
Ray

reaxc_species.tar.gz (18.2 KB)

Ray,

Thanks for the modification you sent. It requires significantly less memory than the old routine.

The problem looks quite strange. Unfortunately, I cannot reproduce your situation exactly, because I only have a 4-core machine with 8 GB of memory available at the moment, and that is insufficient for the 5*10*10 case. The inclusion of the PBC-related part looks correct to me.

I also made a less memory-consuming variant of the fix and attach it to this message. It is uncommented and produces some debugging output, and it is much slower because of the extra loops, but it may also be of use. It gives correct output for the 10*10*10 case and works for that system even on one core. So maybe the failure you observed is related to the small difference in memory usage between the -old and -new variants.

Regards,
Oleg.

04.06.2013, 21:33, "Ray Shan" <[email protected]...>:

reaxc_species_test.tar.gz (7.89 KB)