compute stress/atom error with LAMMPS compiled with the GPU package

Dear lammps Users,

I am getting an error when computing the stress per atom for the SW pair style with the GPU package. I am using the command

"compute myStress all stress/atom virial"

I am using a recent version of LAMMPS (16 Dec 2013), with the CUDA precision variable set to mixed/double precision.

I have also checked an LJ system, and it shows the same error. For a small number of particles (<1000) it runs, but gives wrong values of the stress per atom.
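For context, the relevant part of my input looks roughly like the sketch below (the package settings, potential file, element name, and dump are placeholders, not my exact script):

package      gpu force/neigh 0 0 1            # enable GPU acceleration (placeholder settings)
pair_style   sw/gpu
pair_coeff   * * Si.sw Si                     # placeholder potential file and element
compute      myStress all stress/atom virial
dump         1 all custom 100 stress.dump id c_myStress[1] c_myStress[2] c_myStress[3]

The dump is what actually invokes the compute; without something consuming c_myStress the per-atom virial is never accumulated.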

Here is the screen output of the error:

LAMMPS (16 Dec 2013)
Reading restart file …
restart file = 16 Dec 2013, LAMMPS = 16 Dec 2013
WARNING: Restart file used different # of processors (…/read_restart.cpp:681)
orthogonal box = (-24.8354 -24.8354 -24.8354) to (24.8354 24.8354 24.8354)
1 by 1 by 1 MPI processor grid
4096 atoms
Finding 1-2 1-3 1-4 neighbors …
0 = max # of 1-2 neighbors
0 = max # of 1-3 neighbors
0 = max # of 1-4 neighbors
1 = max # of special neighbors
Resetting global state of Fix 1 Style nvt from restart file info

Hi Debdas,

Thanks for reporting the issues. Could you try rebuilding the GPU package with the attached lal_answer.cpp file, and then rebuilding LAMMPS with the new libgpu.a? There are bugs in that file that lead to segfaults when the per-atom virial is computed, in addition to bugs affecting npt/nph runs with thermo output intervals greater than 1.

These problems would not arise with versions <= 9Aug13 or so.

Let me know if the modified code fixes the issue you encounter.

Best,

-Trung

lal_answer.cpp (7.52 KB)

Hi everyone!

  I am also having segfaults when attempting to calculate the stress per
atom using the GPU package (in my case with eam and single precision). The
modifications to lal_answer.cpp did not change the error. Probably the
npt/nph issue and the stress-calculation issue are unrelated.

  Could this be related to excessive memory usage by the GPU package?

  Best,
  Luis

Luis and Debdas,

Attached are the input scripts I used to verify that npt and stress/atom work correctly for eam/gpu and sw/gpu with the changes made to lal_answer.cpp. You can try running these scripts to see if the crash persists. The npt/nph issue and the per-atom stress compute issue are indeed unrelated.
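In case the attachments get stripped, the essential lines are along these lines (a rough sketch with placeholder potential file and thermostat/barostat settings, not the exact attached scripts):

pair_style   eam/gpu
pair_coeff   1 1 Cu_u3.eam                    # placeholder EAM potential file
fix          1 all npt temp 300.0 300.0 0.1 iso 0.0 0.0 1.0
compute      myStress all stress/atom virial
dump         1 all custom 10 stress.dump id c_myStress[1] c_myStress[2] c_myStress[3]
thermo       10
run          100

Thermo output every 10 steps exercises the npt/nph code path, and the dump exercises the per-atom virial accumulation, so both fixes get tested.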

Luis, did you remove the old LAMMPS binary before rebuilding so that the new LAMMPS binary is linked against the updated libgpu.a?

Best,

-Trung

sw_virial.gpu.in (660 Bytes)

log.sw.mixed (2.39 KB)

eam.gpu.in (764 Bytes)

log.eam.single (3.56 KB)

Dear all,

First of all, many thanks to Trung for sending the file. I am using fix nvt, and in my case it is now working fine for sw/gpu and lj/cut/gpu, giving the correct stress/atom results with the modified lal_answer.cpp.

Thanks again

Debdas

Hi Trung,

  I think I got confused with the files and the compilation. All is
working fine now.

  Thank you very much for the effort. When will this correction be
included in a LAMMPS release?

  Best,
  Luis

Posted a 12 Jan 2014 patch today with this fix included.

Steve

Thanks a lot.