Total # of neighbors = 0 when running with GPU

Hello. I'm trying to compare simulations with and without the GPU package, but there are several problems.

First, when I use this input script:

# DPD simulation unit

units lj

neighbor 1.5 bin
neigh_modify delay 1

atom_style bond

read_data data.dpd

#read_restart 2.restart

mass * 1.0

bond_style harmonic
bond_coeff 1 100 0.86

pair_style dpd 1.0 1.0 34387
pair_coeff 1 1 25.0 4.5
pair_coeff 1 2 25.0 4.5
pair_coeff 1 3 25.0 4.5
pair_coeff 1 4 25.0 4.5
pair_coeff 2 2 25.0 4.5
pair_coeff 2 3 25.0 4.5
pair_coeff 2 4 25.0 4.5
pair_coeff 3 3 25.0 4.5
pair_coeff 3 4 25.0 4.5
pair_coeff 4 4 25.0 4.5

special_bonds lj 0.0 1.0 1.0

comm_modify vel yes

fix 1 all nve

timestep 0.005

dump haha all xyz 5000 movie.xyz

restart 1000 1.restart 2.restart

thermo_style custom step temp pe ke etotal

thermo 100
run 10000

The run goes well with the following commands:

CPU: mpirun -np 4 lmp_linux -in in.init

GPU: mpirun -np 4 lmp_linux -sf gpu -pk gpu 1 -in in.init

A problem is that when I use GPU acceleration, the simulation takes about twice as long, mostly because of the "Pair time".

The second problem is that when I look at the log.lammps file after the simulation, it says

Total # of neighbors = 0
Ave neighs/atom = 0
Ave special neighs/atom = 0.25

I think this is the more serious problem, because the total number of neighbors cannot be zero.

When I run without the GPU, this problem doesn't occur (e.g., "Total # of neighbors = 18762940").

The same problem occurs when I run the micelle case from the examples directory.

Do I have to change the input script when I'm using the GPU package, or is "Total # of neighbors = 0" just a normal situation?

Or is there anything wrong in my script?

I use the 10Aug2015 version of LAMMPS and a Quadro K420.

Thank you.

> A problem is that when I use GPU acceleration, the simulation takes
> about twice as long, mostly because of the "Pair time".

How large is your system? Have you tried running with only one MPI task? Using the GPU will not automatically run faster: certain requirements on your system and your hardware have to be met, otherwise running on the CPU can be faster. CPUs are optimized to handle complex operations; GPUs are best at handling large numbers of the same simple operation.

> The second problem is that when I look at the log.lammps file after the
> simulation, it says
>
> Total # of neighbors = 0
> Ave neighs/atom = 0
> Ave special neighs/atom = 0.25
>
> I think this is the more serious problem, because the total number of
> neighbors cannot be zero.

No, this is not a problem. It is simply a manifestation of the neighbor lists being constructed on the GPU; the statistics you refer to only count neighbor lists built on the CPU.
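One way to confirm this: the GPU package has a "neigh" option that moves the neighbor list build back onto the CPU (see the package command documentation for your LAMMPS version; this usually costs performance):

mpirun -np 4 lmp_linux -sf gpu -pk gpu 1 neigh no -in in.init

With that setting, the neighbor counts printed in the log should be non-zero again.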

Axel.
