error while running on GPU

Dear developers and users,

I'm trying to run LAMMPS on a GPU. When I neglect Kspace and use lj/cut/coul/cut as the pairwise potential, everything works fine, but as soon as I use a KSpace style (either ewald or pppm) together with lj/cut/coul/long, the program crashes with the following error:
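For context, a minimal sketch of the two setups being compared might look like the input fragment below (the system, cutoffs, and coefficients are illustrative placeholders, not taken from the original report):

```
# hypothetical minimal system; names and values are examples only
units           real
atom_style      full
read_data       data.system          # hypothetical data file

# this combination reportedly runs fine on the GPU:
# pair_style    lj/cut/coul/cut 10.0

# this combination reportedly crashes on the GPU:
pair_style      lj/cut/coul/long 10.0
kspace_style    pppm 1.0e-4          # same crash with: kspace_style ewald 1.0e-4

pair_coeff      * * 0.1 3.0          # placeholder LJ epsilon/sigma

run             1000
```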

Cuda driver error 1 in call at file 'geryon/nvd_kernel.h' in line 364.

To get help it’s usually much better to include a full script with everything needed (potential files and input data files) to reproduce the error.
With what you have given so far, it is close to impossible to know. :-)

Also, what version of LAMMPS did you use?

Anders

I use the latest version of LAMMPS, from 2018.

I need to know: what is the usual origin of this error? Has anybody experienced such a case before?

Hard to know without more info. If you send the full input script that reproduces the error, it might be possible. :-)

You can try removing -DUCL_NO_EXIT from the Makefile in lib/gpu and recompiling the library and LAMMPS to get more information yourself.
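A rough sketch of that rebuild, assuming a typical GPU-package build (the Makefile name `Makefile.linux` and the `make mpi` target are common examples; substitute whichever ones you actually build with):

```
cd lammps/lib/gpu

# strip -DUCL_NO_EXIT from the Makefile you use for the GPU library
sed -i 's/-DUCL_NO_EXIT//g' Makefile.linux

make -f Makefile.linux clean
make -f Makefile.linux          # rebuild the GPU library

cd ../../src
make mpi                        # relink LAMMPS against the rebuilt library
```

With that flag removed, the Geryon layer no longer swallows the CUDA error and exits, so you should get a more descriptive failure message.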

Also, you might want to try the latest stable version rather than the latest from the master branch, since the latter has some issues that are addressed in https://github.com/lammps/lammps/pull/926.

Anders

there have been 8 (eight!) releases of LAMMPS in 2018 already. the proper way to describe the LAMMPS version is to name the full version date.
the very latest patch has a known bug (bugfix is already pending) in the GPU library due to some refactoring. the latest stable release (30 Mar 2018) is free from it and has been successfully tested on GPUs.
axel.

Thanks Axel
I'm using version 16Mar18.
Does it have a bug on GPU?