Green-Kubo Thermal Conductivity of SPC/E Water

Hi Axel,

Apologies for replying directly to your email address.

I updated to the latest version of LAMMPS (lammps22Aug12); however, I am still getting the same segmentation fault once my script enters the Green-Kubo calculation section.

If there is a possibility that it is a LAMMPS bug, are there any tests I could run?

Many thanks, Jeff.
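Jeff's script itself is not shown in the thread, but a Green-Kubo section in LAMMPS typically follows the pattern from the compute heat/flux documentation. The sketch below is a guess at that layout; the group names, correlation parameters, and file name are assumptions, and the stress/atom line uses the 2012-era syntax.

# sketch of a typical Green-Kubo heat-flux section (all names and numbers
# below are assumptions, not taken from Jeff's script)
compute   myKE     all ke/atom
compute   myPE     all pe/atom
compute   myStress all stress/atom virial    # 2012-era syntax; current versions take a temp-ID (e.g. NULL) before the keywords
compute   flux     all heat/flux myKE myPE myStress

# per-volume heat flux components, for monitoring
variable  Jx equal c_flux[1]/vol
variable  Jy equal c_flux[2]/vol
variable  Jz equal c_flux[3]/vol

# autocorrelate the flux; kappa = V/(kB*T^2) * integral of <J(0).J(t)> dt,
# evaluated afterwards with trap() over the columns written to J0Jt.dat
fix       JJ all ave/correlate 10 200 2000 &
          c_flux[1] c_flux[2] c_flux[3] type auto file J0Jt.dat ave running

The stress/atom compute in a section like this is what later in the thread turns out to interact badly with the new pppm implementation.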

i am now also getting a segfault after moving to a more powerful 64-bit
machine (unlike my 32-bit, 4-year-old laptop), but this one is at a different
location in the code (neighbor list generation).

axel.

please try to run the rhodo benchmark input
and let us know if this goes through cleanly or not.

thanks,
    axel.

Both the in.rhodo and in.rhodo.scaled input scripts ran without crashing on my version of LAMMPS.

Many thanks, Jeff.

ok. i found the source of my segfault: it was due to an inconsistent
recompile. but there is a more severe issue that shows up at the
beginning of the GK calculation and hints at a programming bug.

[ws18.icms.temple.edu:5760] *** An error occurred in MPI_Wait
[ws18.icms.temple.edu:5760] *** on communicator MPI_COMM_WORLD
[ws18.icms.temple.edu:5760] *** MPI_ERR_TRUNCATE: message truncated
[ws18.icms.temple.edu:5760] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort

that will take a little longer to sort out.

axel.

jeff,

i've tracked down your issue a bit more. it seems to be caused indirectly
by the stress/atom compute, but the origin of the error is in the pppm module.

can you please do another test: replace the kspace_style pppm with pppm/old
and see whether that fixes the issue for you until we have a proper solution?

thanks,
    axel.
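Concretely, the suggested workaround is a one-line change to the input script. The accuracy value below is only a placeholder, since Jeff's actual setting is never quoted in the thread.

# kspace_style  pppm      1.0e-4   # original line (placeholder accuracy); triggers the problem together with compute stress/atom
kspace_style    pppm/old  1.0e-4   # temporary workaround until the bug in the new pppm is fixed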

Hi, I have run a short test and that appears to have worked. Many thanks for your help. I couldn't see any online documentation on the pppm/old style; what exactly is the difference between it and pppm?

The reason I ask is that I have performed some non-equilibrium simulations of SPC/E water using the pppm style and wish to compare them to the Green-Kubo based calculations, and it has been shown in various papers that the handling of the electrostatics can have a large influence on the calculated thermal conductivity.

Many thanks, Jeff.
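Jeff does not say which non-equilibrium method he used. One common route in LAMMPS is the Müller-Plathe reverse-NEMD approach via fix thermal/conductivity; the sketch below is purely illustrative, with made-up parameters and the current (post-2014) chunk syntax for the temperature profile.

# reverse-NEMD (Mueller-Plathe) sketch -- assumed method, not necessarily Jeff's setup
kspace_style  pppm 1.0e-4                              # placeholder; should match the electrostatics of the GK runs
fix           swap all thermal/conductivity 100 z 20   # exchange kinetic energy every 100 steps across 20 slabs in z
compute       layers all chunk/atom bin/1d z lower 0.05 units reduced
fix           prof all ave/chunk 10 100 1000 layers temp file temp.profile
thermo_style  custom step temp f_swap                  # f_swap = cumulative exchanged kinetic energy
# kappa then follows as f_swap / (2 * t * A * dT/dz), with the gradient read from temp.profile

Whatever method is used, keeping the kspace settings identical between the non-equilibrium and Green-Kubo runs is what makes the comparison meaningful, which is exactly Jeff's concern.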

there never will be any documentation for pppm/old. it is a temporary hack
that will remain only until the various derived pppm classes are updated to
the new pppm implementation, which supports analytic differentiation (saving
a bunch of FFTs) in addition to the traditional (old) scheme.
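For context on the analytic-differentiation remark: in versions that include the updated pppm, the differentiation scheme can be selected with kspace_modify. The snippet is illustrative only, and the accuracy value is again a placeholder.

kspace_style   pppm 1.0e-4
kspace_modify  diff ad    # analytic differentiation, the newer scheme that saves FFTs
# kspace_modify diff ik   # traditional ik differentiation, matching the old pppm behavior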

it seems some yet-unknown bug has crept in that is triggered by using
compute stress/atom.

pppm/old rolls back those changes to the previous version of the code.

i am now trying to identify the cause of the problem, but this is tricky.

for your comparison with the non-equilibrium runs, you should be fine using
pppm/old for now. just don't depend on it being around forever; it will vanish
as soon as no other class depends on it anymore, which is why it is not documented.

axel.