LAMMPS segmentation fault when fix print is used with group/group kspace yes and -pk gpu

When I try to calculate the electrostatic force using:

compute 7 upper group/group lower pair no kspace yes
variable force_e_1 equal c_7[3]

and print it in a file with:
fix test1 all print 100 "${force_e_1}" file test_group.txt

LAMMPS gives a segfault.
If I omit the fix print line, everything is fine. If I set kspace no, it also runs fine.

The problem occurs only when both compute group/group with kspace yes and fix print are used together.
Please find attached a test script and test data file with 1000 atoms.
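In case the attachment doesn't come through, the relevant part of the script looks roughly like this (the data file name, pair/kspace styles, and group definitions here are placeholders, not the actual attached script):

units real
atom_style full
read_data test_data.lmp
pair_style lj/cut/coul/long 10.0
kspace_style pppm 1.0e-4
group upper type 1
group lower type 2
compute 7 upper group/group lower pair no kspace yes
variable force_e_1 equal c_7[3]
fix test1 all print 100 "${force_e_1}" file test_group.txt
run 1000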

Command given: lmp_mpi -pk gpu 1 -sf gpu -in
LAMMPS: 27 Aug 2016-ICMS
CPU: Intel(R) Xeon(R) CPU E5-2620 v3
GPU: GTX 1080, CUDA 8.0

Does it happen when you don't use the GPU (CPU only)?


For me it runs fine on CPU only; the problem shows up only in runs with the GPU.

P.S. I am observing similar behavior with Xeon Phi as well (USER-INTEL package, separate binary, independent of the above), but there it is a lot more complicated. It only crashes when I give the following two lines:

compute 6 upper group/group lower pair yes kspace no
compute 7 upper group/group lower pair no kspace yes

together. But it only shows up for large systems, and I am not able to reproduce it reliably. I will update if I can isolate the problem.

Possibly Trung (CC'd) can give some insight on this, since it appears to be unique to the GPU package.


Hi Amit,

I can reproduce the issue with the provided input deck. Will take a look and let you know.