My simulation job is still not using the GPU

Hi,

I want to run LAMMPS with GPU acceleration on a supercomputer. I request CPUs and GPUs as the supercomputer's guide instructs.
To run it, I use the following command in the job script:

mpirun -np 24 lmp_gpu -sf gpu -pk gpu 8 -in input.in

And at the beginning of the produced log file, I notice:

LAMMPS (7 Dec 2015)
using 1 OpenMP thread(s) per MPI task
package gpu 1
package gpu 8

However, the support staff tell me that the job is still not using any of the requested GPUs,
even though the processes have been loaded into GPU memory.

Is there any problem with my commands for requesting or using the GPUs?

Thanks
X


impossible to say with certainty without seeing what is in your
input.in file.

your command line seems ok, but it won't make a difference unless you
also use a pair style that actually does support the /gpu suffix. if
none exists for what you use in your input, then LAMMPS cannot make
use of your GPUs, regardless of how correct your invocation command
line is.

axel.
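
for illustration, a minimal sketch of what the -sf gpu flag does (the pair
style and cutoff here are hypothetical examples, not taken from the original
input):

```
# with "-sf gpu" on the command line, a line like this in the input:
pair_style lj/cut/coul/long 10.0
# is effectively turned into the accelerated variant:
pair_style lj/cut/coul/long/gpu 10.0
# if no /gpu variant of the style was compiled in, the plain CPU
# style runs instead and the GPUs stay idle
```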

Hi again,
Thanks for the kind reply.
This is my input file. Can you please comment on it? I think "lj/class2/coul/long" supports the GPU package, according to the documentation.
Thank you.


yes, but your "read_restart" command will override it with whatever
is set in your restart file, which is also according to the
documentation.

axel.

Thanks so much. I therefore need to prevent the override.

Instead of the read_restart command, I tried using a data file and the read_data command. I think no overriding occurs in this case.
However, again no gpu is being used.
Can you please check this situation?


no.

Just read the restart file first, then set new pair or other
styles. I also believe that if you had
written the restart file from a run that used a pair GPU
style, it would restart w/out the need to reset the pair style.

Steve
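
Steve's suggestion would look roughly like this in the input (the style name
matches the one discussed in this thread; the file name and coefficient
values are placeholders, not from the original system):

```
read_restart   restart.equil                 # hypothetical restart file name
# re-declare the pair style so the /gpu variant replaces the style
# stored in the restart file
pair_style     lj/class2/coul/long/gpu 10.0
# the pair coefficients must then be set again, either by hand or via
# write_coeffs (recent LAMMPS versions)
pair_coeff     * * 0.05 4.0                  # placeholder values
```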

Thanks Steve,
That results in: "ERROR: All pair coeffs are not set (../pair.cpp:227)",
which I understand the reason for.
The solution may be reading a data file, in which all the coefficients are available.
And I'm still wondering why reading a data file with the read_data command does not use any GPU in my case.


read_data has *nothing* to do with using the GPU or not. whether the
GPU is properly used depends on three factors (apart from system and
software configuration issues):

1) you have a correctly compiled and working executable. this can be
easily checked by running the LJ benchmark input from the LAMMPS
distribution. that should work with *any* GPU acceleration method, if
used correctly. if you cannot make that input use the GPU properly,
then there is no point in trying anything else

2) using the correct command line flags and/or the corresponding input
file commands. since restart files store some style information, it is
usually safer to use a data file and input the pair coefficients via
input (note that with recent LAMMPS versions, the pair coefficients
can be extracted from the restart file using the write_coeffs command).

3) having an input with styles for which a GPU acceleration exists.
this is most easily tested by looking at the help message with the -h
command line flag, which lists all included styles
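
the checks in 1) and 3) can be sketched as shell commands (the binary name
and benchmark path are assumptions based on this thread, so adjust them to
your installation):

```
# 3) list the styles compiled into the binary and look for /gpu variants
./lmp_gpu -h | grep gpu

# 1) run the stock LJ benchmark shipped with the LAMMPS distribution
#    with the GPU package enabled, as a minimal working test
mpirun -np 4 ./lmp_gpu -sf gpu -pk gpu 1 -in bench/in.lj
```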

all three of these issues are fully under your control and are
difficult to verify from the outside. *lots* of people have
successfully used LAMMPS with GPU acceleration, so it *does* work. the
rest is pretty much up to you and no amount of posting to the mailing
list saying that it doesn't work is going to change that.

axel.