A strange error while running the gpu package

Dear lammps,

     I am running lammps gpu ( 64 bit ) on two gtx cards. My computation contains an LJ pair-wise
potential, a FENE bond, and a PPPM/gpu calculation.
     When I input "package gpu force/neigh 0 0 -1", the application runs correctly.
     When I input "package gpu force/neigh 0 0 -1", it produces this error:

Setting up run ...
Cuda driver error 700 in call at file 'geryon/nvd_device.h' in line 41.
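
In CUDA toolkits of that era, driver error 700 is CUDA_ERROR_LAUNCH_FAILED: a kernel crashed or made an illegal memory access. The geryon headers in the LAMMPS gpu library wrap each driver call in a check that aborts with the raw error code on failure; a minimal sketch of that pattern (illustrative only, not the actual geryon/nvd_device.h source):

#include <cuda.h>
#include <stdio.h>
#include <stdlib.h>

/* Abort with the raw CUresult code when a driver call fails,
   in the same spirit as the safe-call checks in geryon. */
static void safe_call(CUresult err, const char *file, int line) {
  if (err != CUDA_SUCCESS) {
    fprintf(stderr, "Cuda driver error %d in call at file '%s' in line %d.\n",
            (int)err, file, line);
    exit(1);
  }
}

int main(void) {
  CUdevice dev;
  safe_call(cuInit(0), __FILE__, __LINE__);
  safe_call(cuDeviceGet(&dev, 0), __FILE__, __LINE__);  /* device 0 */
  return 0;
}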

there is no difference between those two statements.
what do you do differently?

axel.

I have tried examples/gpu/in.melt; it worked for "package gpu force/neigh 0 0 1", but
failed for "package gpu force/neigh 1 1 1". This is the error:

- Using GPGPU acceleration for lj/cut:
- with 1 proc(s) per device.

Sorry,
     it is a typo. My point is that lammps/gpu
     works under "package gpu force/neigh 0 0 -1" but
     fails under "package gpu force/neigh 1 1 -1".
     It looks strange.
Best regards,
yangpeng

it works for me with the current lammps version.

there are two possible explanations.
1) you don't have an up-to-date lammps version
2) you have a broken GPU

axel.

Thanks.
     I think it is maybe the 2nd reason, because I compiled lammps using the May 3 release tar.gz.
It worked yesterday, but produced errors today.
     However, I could still find the gpu information through "nv_get_devices" and "nvidia-settings".

Best Regards,
Yangpeng
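
For reference, nv_get_devices-style tools enumerate the cards and query their properties through the driver API, roughly along the lines of this sketch (my own code, not the actual nv_get_devices source):

#include <cuda.h>
#include <stdio.h>

/* List CUDA devices and their names. Property queries like these
   never allocate or exercise device memory. */
int main(void) {
  int count = 0;
  char name[256];
  if (cuInit(0) != CUDA_SUCCESS) {
    fprintf(stderr, "cuInit failed\n");
    return 1;
  }
  cuDeviceGetCount(&count);
  for (int i = 0; i < count; ++i) {
    CUdevice dev;
    cuDeviceGet(&dev, i);
    cuDeviceGetName(name, sizeof(name), dev);
    printf("Device %d: %s\n", i, name);
  }
  return 0;
}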

those will work for broken GPUs unless they are
completely nonfunctional.
the errors you show can come from memory corruption,
which can be caused by weak/broken memory or
overheating (quite possible with 2 GPUs). you should
run cuda memtest to make certain.

http://sourceforge.net/projects/cudagpumemtest/

axel.
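
cuda_memtest runs many test patterns (moving inversions, random data, stress kernels); as a toy illustration of the basic idea, a single write/read-back check on the suspect card might look like the sketch below. The device index 1 for the second card is an assumption:

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy GPU memory check: fill a device buffer with a byte pattern,
   copy it back, and count mismatches. Real testers like cuda_memtest
   run far more patterns and iterations than this. */
int main(void) {
  const size_t n = 64UL * 1024 * 1024;             /* 64M words = 256 MB */
  unsigned *dev = NULL;
  unsigned *host = (unsigned *)malloc(n * sizeof(unsigned));
  if (!host) return 1;
  if (cudaSetDevice(1) != cudaSuccess) {           /* assumed: 2nd card is device 1 */
    fprintf(stderr, "cannot select device 1\n");
    return 1;
  }
  if (cudaMalloc((void **)&dev, n * sizeof(unsigned)) != cudaSuccess) {
    fprintf(stderr, "cudaMalloc failed\n");
    return 1;
  }
  cudaMemset(dev, 0xA5, n * sizeof(unsigned));     /* every byte -> 0xA5 */
  cudaMemcpy(host, dev, n * sizeof(unsigned), cudaMemcpyDeviceToHost);
  size_t bad = 0;
  for (size_t i = 0; i < n; ++i)
    if (host[i] != 0xA5A5A5A5u) ++bad;
  printf("%zu corrupted words\n", bad);
  cudaFree(dev);
  free(host);
  return bad != 0;
}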

Wow.
     Thanks a lot, Axel. The 2nd card really does have a memory problem, which also
     matches the runs failing only under "1 1 -1", i.e. when device 1 was selected.