Interesting point. In my script I just have tersoff/zbl, but I guess
because I was running on a GPU, it assumed it should be tersoff/zbl/gpu?
Is there a way to enforce that it is just tersoff/zbl?
see the docs for the suffix command.
if you *want* to run on the GPU, then you need to run with a version
of LAMMPS released after July 1st.
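If useful, here is a minimal sketch of how the suffix machinery behaves. The style and command names follow the LAMMPS docs; the potential file and element mapping are illustrative assumptions, not from the thread:

```
# Running with "-sf gpu" on the command line (or "suffix gpu" in the
# input script) makes LAMMPS append "/gpu" to every style that has a
# GPU variant, so "pair_style tersoff/zbl" silently becomes
# "pair_style tersoff/zbl/gpu".
#
# To force the plain CPU style, either drop "-sf gpu" from the command
# line, or disable the suffix before declaring the pair style:
suffix off
pair_style tersoff/zbl
pair_coeff * * SiC.tersoff.zbl Si C   # example potential file
suffix on                             # re-enable for later styles
```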
When I run with tersoff/zbl and with newton on, I get this error:
ERROR: Pair style tersoff/zbl/gpu requires newton pair off
(../pair_tersoff_zbl_gpu.cpp:153)
How can I specify tersoff/zbl without assuming it is tersoff/zbl/gpu?
read the documentation AND THINK LOGICALLY (this has been your biggest
problem since the very beginning of your posting questions on
lammps-users: you almost never think things through, you guess - often
wrong - and you don't seem to make any attempts to verify or validate
your claims, but rather "assume").

if you specify the suffix flag, it will try to append the suffix to
any style that supports it. this is all very well explained in the
documentation. if you read *all* of the relevant parts, think
carefully about what is said, and make some simple tests to confirm,
you wouldn't be facing such issues as you are facing over and over
again.
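For completeness, a sketch of settings consistent with the GPU variant of the pair style, if that is the intended path. The error message quoted above comes from the check in pair_tersoff_zbl_gpu.cpp; the package settings below are illustrative assumptions:

```
# GPU pair styles require the pairwise Newton flag to be off:
newton off          # or "newton off on" to keep bonded newton on
package gpu 1       # one GPU per node; value is illustrative
suffix gpu          # now tersoff/zbl resolves to tersoff/zbl/gpu
pair_style tersoff/zbl
```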
And when I try turning newton off and running with just 1 MPI rank, I
get the following message:
ERROR: Insufficient memory on accelerator (../gpu_extra.h:38)
that error is self-explanatory.
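Expanding on that: the error means the GPU ran out of device memory for the atom and neighbor data. Two common mitigations, with parameter choices that are illustrative assumptions:

```
# Build neighbor lists on the host instead of the GPU, which reduces
# device memory use at some cost in speed:
package gpu 1 neigh no

# ...or spread the system over more MPI ranks / GPUs so each device
# holds fewer atoms, e.g. from the command line:
#   mpirun -np 4 lmp -sf gpu -pk gpu 2 -in in.script
```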
Any suggestions?
Note that the simulation does not hang for 1-2 million atoms, but
above that it does.
at this point, we have a total mess of options and it is not clear
what specific setup and settings this corresponds to. please provide a
simple summary explaining which combinations of GPU, CPU, and MPI on
or off work and which do not. ...and best provide (simple) examples to
reproduce this. we also need to know the LAMMPS version and
compilation settings. ...and keep in mind that nobody will make an
effort to debug an issue that cannot be reproduced with the latest
development version of LAMMPS.
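A minimal self-contained reproducer along the lines asked for above might look like the following. All choices here (lattice, box size, masses, potential file) are illustrative assumptions; a 50^3 diamond box gives on the order of a million atoms, matching the size where the hang was reported:

```
# in.tersoff_zbl -- minimal reproducer sketch (all values illustrative)
units           metal
boundary        p p p
lattice         diamond 5.431
region          box block 0 50 0 50 0 50
create_box      2 box
create_atoms    1 box
mass            1 28.0855   # Si
mass            2 12.011    # C
pair_style      tersoff/zbl
pair_coeff      * * SiC.tersoff.zbl Si C
velocity        all create 300.0 12345
fix             1 all nve
run             100
```

Run it once plainly (`lmp -in in.tersoff_zbl`) and once with `-sf gpu`, and report which combinations of newton on/off, MPI rank count, and system size work.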
axel.