GPU and CPU

Dear all,
I have run the same system in two separate simulations, one on the GPU and one with the CPU only. The GPU run stops with this error: Error on Proc 8: out of range atoms - cannot compute PPPM (../pppm_gpu.cpp:255). The CPU run, however, does not show any error.
Could anyone please comment on that?
With best regards,
Pritam

> Dear all,
> I have run the same system in two separate simulations, one on the GPU and one with the CPU only. The GPU run stops with this error: Error on Proc 8: out of range atoms - cannot compute PPPM (../pppm_gpu.cpp:255). The CPU run, however, does not show any error.
> Could anyone please comment on that?

Impossible to say anything without knowing more details. Because the CPU and GPU implementations have different code paths, some differences are to be expected, even more so if GPU support was compiled with single or mixed precision.

Does this happen with any of the benchmark tests or examples provided with LAMMPS that use PPPM, or only with your input?

axel.

Dear Axel,
Thank you very much.
When I test another system that uses PPPM, it does not show any error with the GPU.
If the problem came from the initial configuration, the error should also appear in the CPU-only run.
I am using the latest version of LAMMPS, and GPU support was built with mixed precision. What else should I check?

With best regards,
Pritam

> Dear Axel,
> Thank you very much.
> When I test another system that uses PPPM, it does not show any error with the GPU.
> If the problem came from the initial configuration, the error should also appear in the CPU-only run.

No, I disagree; it need not. With mixed precision, the forces between atoms can overflow in the part computed in single precision, and that would not happen on the CPU. But even in double precision you can see a difference if your initial configuration is marginal, due to the different order in which forces are computed and summed.
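To make the mixed-precision point concrete, here is a small standalone sketch (not LAMMPS code; the Lennard-Jones parameters and the separation distance are made up purely for illustration) showing how the pair force at a very close contact stays finite in double precision but overflows to infinity in single precision:

```python
import numpy as np

def lj_force_magnitude(r, epsilon=1.0, sigma=1.0, dtype=np.float64):
    """Magnitude of the Lennard-Jones pair force at separation r:
    F(r) = 24*epsilon*(2*(sigma/r)**13 - (sigma/r)**7) / sigma,
    evaluated entirely in the given floating-point precision."""
    r, epsilon, sigma = dtype(r), dtype(epsilon), dtype(sigma)
    sr = sigma / r
    return dtype(24.0) * epsilon * (dtype(2.0) * sr**dtype(13.0) - sr**dtype(7.0)) / sigma

# A marginal close contact: two atoms at r = 0.001 sigma.
with np.errstate(over="ignore"):
    f64 = lj_force_magnitude(1e-3, dtype=np.float64)  # huge, but finite in double
    f32 = lj_force_magnitude(1e-3, dtype=np.float32)  # overflows to inf in single

print(np.isfinite(f64), np.isfinite(f32))  # True False
```

The same configuration that the double-precision CPU code can (barely) handle thus produces non-finite forces in the single-precision part of a mixed-precision GPU build, which is consistent with atoms ending up out of range for PPPM.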

Since this indicates that you likely have some *very* close contacts somewhere, you should treat those properly, even on the CPU. The fact that your CPU run completes doesn't mean that it is automatically producing a meaningful trajectory.

There are different ways to go about this. It can depend on how you construct your initial geometry, how you choose the box dimensions, what protocol you use to relax/equilibrate your system, whether you use tools like fix nve/limit or fix viscous to slow down high-energy particles, whether you first run a minimization or not, and what time step you choose. These all apply in both cases, but the GPU code, especially in mixed and even more so in single precision, exposes them more.
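As an illustration of that kind of protocol, a sketch of a LAMMPS input fragment might look like the following (the commands are standard LAMMPS commands, but the fix IDs, tolerances, displacement cap, temperatures, and time steps are placeholder values to be tuned for the actual system):

```
# minimize first to remove the worst close contacts
minimize        1.0e-4 1.0e-6 1000 10000

# gentle equilibration: cap per-step displacement with fix nve/limit
timestep        0.5                  # smaller than the production time step
fix             relax all nve/limit 0.1
run             10000
unfix           relax

# optionally damp high-energy particles with fix viscous instead/in addition
# fix           damp all viscous 0.1

# then switch to the production integrator and time step
timestep        1.0
fix             prod all nvt temp 300.0 300.0 100.0
run             1000000
```

The idea is simply to drain the excess energy from close contacts on the CPU (or in double precision) before handing the system to the more overflow-sensitive mixed-precision GPU code.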

> I am using the latest version of LAMMPS, and GPU support was built with mixed precision. What else should I check?

Your initial configuration. Check the energies on it in the CPU-only run and see whether you can improve the situation there.
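One way to inspect those energies without advancing the trajectory is a zero-step run; as a sketch (the thermo settings, compute/dump IDs, and dump file name here are illustrative choices, not from the thread):

```
# report thermodynamic data, including potential energy, at step 0
thermo_style    custom step temp pe ke etotal press
thermo          1

# optional: per-atom energies help locate the offending close contacts
compute         peatom all pe/atom
dump            chk all custom 1 initial_energy.dump id x y z c_peatom

# "run 0" evaluates forces and energies for the initial configuration
# without integrating
run             0
```

An unusually large potential energy at step 0, or a few atoms with extreme per-atom energies in the dump, points to the close contacts discussed above.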

axel.

Dear Axel,
Thank you very much.
When the box dimensions are increased, the problem no longer appears. Most probably it was caused by close contacts involving atoms at the edge of the box, as you mentioned.

With best regards,
Pritam