LAMMPS compiles, but doesn't work on GPUs



I have compiled LAMMPS for a Tesla K20Xm GPU with compute capability 3.5 and did not get any errors during compilation. I ran a sample simulation on 2 nodes, 4 CPUs/node, and 2 GPUs, using a submission job file with the lines

mpiexec -np 8 lmp_wustlchpc < in.nve > out.nve2
mpiexec -np 8 lmp_wustlchpc -sf gpu -pk gpu 2 < in.nve1 > out.nve1

I get the error on the last line: "Accelerator sharing is not currently supported on system (../gpu_extra.h:47)"

I am wondering if you could give me any pointers regarding what could be the issue.

Talk to your system administrator and/or study the nvidia-smi manual.
The GPU on the machine you are using has been configured for exclusive
access. In this mode, you can have only one MPI process per GPU.
However, you are trying to run with 4 MPI tasks per GPU (8 ranks on
2 GPUs); that requires GPU sharing to be enabled.
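As an illustration, the compute mode can be inspected, and (with administrator privileges) reset, via nvidia-smi. This is a sketch, not a definitive recipe; the device index 0 is just an example, and the exact mode names may vary between driver versions:

```shell
# Query the current compute mode of the GPUs on this node:
nvidia-smi -q -d COMPUTE

# A mode such as "Exclusive_Process" (or "Exclusive_Thread" on older
# drivers) means only one process may attach to the GPU at a time.
# An administrator can switch GPU 0 back to the shared default mode:
sudo nvidia-smi -i 0 -c DEFAULT
```

Note that on many clusters the compute mode is set deliberately by the site policy, so changing it yourself may not be an option; in that case, run one MPI rank per GPU instead.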


How about not sharing accelerators?

You get this error because on some systems it is not possible for multiple MPI processes to use the same GPU. It is actually mentioned here:

If you have an Nvidia GPU, then apparently there is a way to enable sharing with nvidia-smi (I don't know how it works); alternatively, you can use just one MPI process per GPU.
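For the one-process-per-GPU route, the original job line can be adjusted so the rank count matches the number of GPUs. This is a sketch assuming 2 GPUs per node (as implied by "-pk gpu 2") across the 2 nodes, i.e. 4 GPUs total; the rank count must be checked against the actual node configuration:

```shell
# One MPI rank per GPU: 2 nodes x 2 GPUs/node = 4 ranks total
# (assumption: each of the 2 nodes has 2 GPUs).
mpiexec -np 4 lmp_wustlchpc -sf gpu -pk gpu 2 < in.nve1 > out.nve1
```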