[lammps-users] GPUs on multiple nodes

Dear Axel and Manish,

Your discussion about GPUs is very helpful to us. I have a silly
question: how can we make maximal use of the GPU cores? I was wondering
whether the number of usable GPU cores is limited by the number of CPU
cores assigned via MPI? For example, we have 480 cores per GPU and we
would like all of them to work together on one task.

Hongyi

you cannot treat GPUs like CPUs. they are "external devices": there is no
operating system on the GPU and you cannot (easily) partition the GPU
cores. so if an MPI task dispatches a kernel to the GPU, that kernel will
use the whole GPU. if you attach multiple MPI tasks to the same GPU,
the kernels from the different MPI tasks will be serialized (at least on
all but the latest GPU hardware). thus "oversubscribing" the GPU only
has a benefit if the amount of work that is still done on the _CPU_
(while the GPU is busy) takes much longer than the GPU kernel, so
that multiple GPU kernels can be completed before the next synchronization
point in the code. with newer hardware this is a bit blurred, since
kernels can be executed concurrently, but if a kernel has, say, 80%
occupancy of the GPU, there is only 20% left in any case.
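
to see what "serialized vs. concurrent" means in practice, here is a
minimal CUDA sketch (my own toy example, not LAMMPS code): two kernels
are launched on separate streams. on older hardware they run back to
back; on GPUs that support concurrent kernels they may overlap, but
only if the first kernel leaves part of the device idle.

/* toy illustration; kernel body and problem size are made up */
__global__ void busy(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        for (int k = 0; k < 1000; ++k)
            x[i] = x[i] * 0.999f + 1.0f;   /* artificial per-thread work */
}

int main()
{
    const int n = 1 << 20;                 /* enough threads to fill the device */
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    /* each launch requests enough blocks to cover the whole GPU, so the
       second kernel can only overlap if the first leaves resources idle */
    busy<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
    busy<<<(n + 255) / 256, 256, 0, s2>>>(b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}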

the breakdown is the following.
- each GPU kernel is written to run on an entire GPU
- you can only use one GPU per MPI task
- multiple MPI tasks can connect to the same GPU
  (see the sketch after this list),
  but the amount of GPU capacity available for sharing
  depends on your input and your hardware.
  oversubscribing GPUs is similar to hyperthreading:
  sometimes it helps, in a few cases a lot, sometimes
  it can slow you down, in a few cases by quite a lot.
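
to make the "one GPU per MPI task" point concrete, the usual
device-assignment pattern in MPI+CUDA codes looks like the generic
sketch below (my own illustration, not LAMMPS internals):

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, ndev = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaGetDeviceCount(&ndev);

    /* one GPU per MPI task; with more tasks than GPUs,
       tasks share devices round-robin (= oversubscription) */
    cudaSetDevice(rank % ndev);

    int dev = -1;
    cudaGetDevice(&dev);
    printf("MPI rank %d -> GPU %d of %d\n", rank, dev, ndev);

    MPI_Finalize();
    return 0;
}

with, e.g., 4 MPI tasks and 1 GPU, all four tasks land on device 0,
which is exactly the oversubscription case discussed above.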

HTH,
   axel.

Dear Hongyi

Actually I was hoping that would be so too - to be able to use the
full GPU on a single task.
But the more I use them, the more I feel that the mismatch between
GPU and CPU speeds would be tricky to handle ... I just got initiated
into GPGPU computing a few weeks back...

Thanks,
Manish Agarwal
<[email protected]...>
- - - - - - - - - - - - - - - - - - - - - - - - - - -