REAX still not working in parallel

Hi all,

I noticed that a couple of days ago it was reported that reax/c was not
working with more than 2 processors. Then on Jun 25, Dr. Ray Shan
fixed a bug in reax/c. I downloaded the Jun 30, 2012 version with that
bug fixed and ran the reax examples supplied in the LAMMPS package, but
still neither the reax nor the reax/c examples will run with more than 2
processors. They either get stuck, get killed, or fail with the "bondchk
failed" error. I also have the Jan 10, 2012 version, which runs the reax
examples in parallel. Therefore, it seems there is still a bug hidden
somewhere, introduced by changes between Jan and Jun.

Best,

Hai

Hi Hai,

I don't remember seeing a report that reax/c was not working on more
than 2 procs. We recently updated reax/c for efficiency, as noted in
the Jun 25 patch, and a little while ago we fixed a bug with
"fix reax/c/bonds" for bonding analysis, but reax/c has never been
found to fail in parallel.

Which example does not work on more than 2 procs? Do you mean that
both reax and reax/c fail with MPI after you downloaded the new version?
We have actually just run some large-scale simulations on thousands
of processors.

Please check your MPI installation, and make sure you can run MPI jobs.
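For example, something like the following should run one of the reax/c
examples on 4 procs (the executable name and paths here are only
placeholders for however you built LAMMPS, and in.reaxc.rdx stands for
whichever example input you are testing):

  cd examples/reax
  mpirun -np 4 ../../src/lmp_mpi -in in.reaxc.rdx

If a small job like that already fails on 4 procs, that would point to
the MPI setup rather than to reax/c.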

Best,
Ray

I just tried all input scripts in /examples/reax using reax/c, before
and after updating from the 25 Jun to the 2 Jul version, and also with
and without "fix reax/c/bonds". All 9 input scripts worked with 4
processors under all circumstances. Please double-check your reax and
reax/c installation.

Additionally, "bondchk failed" errors are usually associated with too large a timestep.
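If that is the cause, reducing the timestep in the input script usually
helps. As a rough guide (these values are just typical ReaxFF settings,
not taken from the example inputs), something like:

  units     real
  timestep  0.1    # fs; ReaxFF typically needs ~0.1-0.25 fs, larger steps can trigger "bondchk failed"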

Ray

Thank you for your timely reply. I recompiled the Jun 30 version but
found the same problems. Then I downloaded the July 3 version, compiled
it using the same method, and it runs perfectly for both reax and
reax/c. So maybe something was wrong with the Jun 30 version. In any
case, the problem is solved; thanks for your time!

Hai

Hi Hai,

Glad to know it's been resolved. Thanks for pointing out the Jun 30
issue; I will try to look into it.

Best,
Ray