Run LAMMPS in parallel with CUDA acceleration

Dear all,

I am trying to run LAMMPS with the CUDA package and am running into some errors.

When I type "mpirun -np 16 lmp_gpu -c on -sf cuda -i test.txt", I get the error "several fixes do not support cuda". This is because my "fix addforce region" command does not support CUDA.

I then tried another approach: adding the "cuda" suffix explicitly, but only to the commands that support it, and typing "mpirun -np 16 lmp_gpu -c on -i test.txt". A similar error occurs: "you asked for a verlet integration using cuda, but several fixes have not yet been ported to cuda".

So I am wondering whether I can use CUDA to accelerate the commands that support it while, at the same time, running the other commands as usual without acceleration.

Any help will be appreciated!

Best wishes,

Xiaohui

University of Wisconsin-Madison

> Dear all,
>
> I am now trying to run lammps with cuda package, and meet some errors.
>
> When I type "mpirun -np 16 lmp_gpu -c on -sf cuda -i test.txt", the error is "several fixes do not support cuda". This is because the "fix addforce region" command does not support cuda.
>
> Then I try another method, and add "cuda" suffix only to the commands that support cuda explicitly, and type "mpirun -np 16 lmp_gpu -c on -i test.txt". Similar error occurs: "you asked for a verlet integration using cuda, but several fixes have not yet been ported to cuda"

Last time I checked, this was not an error, but a warning.

> So I am wondering whether I can use cuda to accelerate the commands that support it and run other commands as usual without acceleration at the same time.

You can, but those other commands will *decelerate* the calculation.

Please see: http://lammps.sandia.gov/doc/Section_accelerate.html
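As a rough sketch of the selective approach: the "suffix" command can be toggled inside the input script, so only the styles between "suffix cuda" and "suffix off" are resolved to their accelerated variants, while anything after "suffix off" stays on the CPU. The style, group, and region names below are made up for illustration, and this assumes a LAMMPS binary built with the USER-CUDA package; note that a CPU-only fix still forces per-step data transfer between GPU and host, which is where the slowdown comes from:

```
# illustrative input fragment, not a complete script
suffix cuda
pair_style lj/cut 2.5                    # resolved as lj/cut/cuda
suffix off                               # styles below stay on the CPU
fix push all addforce 1.0 0.0 0.0 region box1
```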

And run some benchmarks. Using GPU acceleration effectively requires
some technical understanding of how GPUs work and how GPU acceleration
is implemented in the particular code (LAMMPS supports two different
options, BTW). Without that understanding, you are likely to achieve
the opposite of what you desire.
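To make the benchmarking concrete, here is a small helper sketch (plain Python, not part of LAMMPS) that pulls the wall-clock "Loop time" line out of two log outputs and reports the relative speedup. The log snippets below are made up, and the exact wording of that line can differ between LAMMPS versions:

```python
import re

def loop_time(log_text):
    """Extract the wall-clock 'Loop time' (seconds) from LAMMPS log output.

    Assumes a line like 'Loop time of 12.3 on 16 procs ...'; the exact
    wording may vary between LAMMPS versions.
    """
    m = re.search(r"Loop time of ([0-9.eE+-]+)", log_text)
    if m is None:
        raise ValueError("no 'Loop time' line found")
    return float(m.group(1))

def speedup(baseline_log, accelerated_log):
    """Speedup of the accelerated run relative to the baseline (>1 is faster)."""
    return loop_time(baseline_log) / loop_time(accelerated_log)

# Illustrative log snippets (not real benchmark data):
cpu = "Loop time of 20.0 on 16 procs for 1000 steps with 4000 atoms"
gpu = "Loop time of 8.0 on 16 procs for 1000 steps with 4000 atoms"
print(speedup(cpu, gpu))  # -> 2.5
```

If the "speedup" comes out below 1, the accelerated run is the slower one, which is exactly the deceleration scenario described above.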

axel.