Using a 2nd gpu device with USER-CUDA package

Forgot to include the LAMMPS mailing list.


There are two ways to do what you want. One is to set the GPUs to compute-exclusive mode (as root, run: "nvidia-smi -g 0 -c 1" and "nvidia-smi -g 1 -c 1"). This allows only one process per GPU. Note that this is not a good setting when using the GPU package of LAMMPS, since with that package you typically want to run two MPI processes per GPU. A number of other codes also oversubscribe GPUs to hide data transfers.

The second possibility is to specify which GPU to use in each input script.

For example:
"package cuda gpu/node special 1 0"
tells the USER-CUDA package to use one GPU, namely the one with ID 0.

Then you can use:
"package cuda gpu/node special 1 1"
in the second script, so the second run uses the GPU with ID 1.

You can also use the variable command so that the GPU can be chosen via a command-line argument:

"package cuda gpu/node special 1 ${gpuid}"

and run LAMMPS with the argument "-var gpuid 0" or "-var gpuid 1".
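Putting the pieces together, a minimal sketch of this setup (the script name "in.gpu" and the binary name "lmp" are illustrative assumptions, not from the original):

```
# hypothetical input script fragment: select the GPU via a variable
# passed on the command line
package cuda gpu/node special 1 ${gpuid}

# then launch two independent runs, one per GPU, e.g.:
#   lmp -var gpuid 0 -in in.gpu
#   lmp -var gpuid 1 -in in.gpu
```

Each run picks up its own GPU ID, so the two simulations do not compete for the same device.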

A few more general notes:
(i) If you have a system with 4 GPUs and want to run two simulations that use 2 GPUs each, you can again use compute-exclusive mode, or put the following lines in the two input scripts:
"package cuda gpu/node special 2 0 1"
"package cuda gpu/node special 2 2 3"
(ii) The IDs you set will be ignored if the GPUs are in exclusive mode.
(iii) Internally, the USER-CUDA package sorts GPUs by their number of multiprocessors. Thus a small GPU that might be installed only for the X server will usually not be used, as long as you do not request too many GPUs while they are in compute-exclusive mode, and you do not explicitly specify that GPU with the "package cuda ..." command.

