Hey everyone! Somehow every time I run with the GPU package, I get 224 cores reported. So far it wasn't an issue, since I had ~280 cores available. However, I am now considering getting a ~1k-core GPU, and I was wondering whether this behaviour of using 224 cores will continue, or whether it scales in "chunks" of 224: 224, 448, and so on (I believe you can see what I mean).
Is this bypassed with the USER-CUDA lib?
Both USER-CUDA and the GPU package will use all cores of the configured GPUs, so I don't understand how you are counting cores.
Oh, I found the reason now. In case anyone else encounters this issue: the problem is with the version of CUDA I had installed.
It had a bug in deviceQuery, and "the API doesn't return the number of cores":
This is actually a problem with the LAMMPS device-query code, which assumes 32 cores per multiprocessor for your GPU instead of 48. It won't affect LAMMPS results or performance at all, though.
It will be fixed in a future version.
OK, so you mean it's actually using the 48 cores per multiprocessor, although deviceQuery says otherwise?