[lammps-users] ERROR on proc 0: Too many neighbor bins

Hi all
I am using the LAMMPS 27Oct2021 version, with 4 MPI tasks and 1 OpenMP thread per MPI task.
I got the following message after 22×10^6 steps:

ERROR on proc 0: Too many neighbor bins (src/nbin_standard.cpp:213)
ERROR on proc 1: Too many neighbor bins (src/nbin_standard.cpp:213)
ERROR on proc 2: Too many neighbor bins (src/nbin_standard.cpp:213)
ERROR on proc 3: Too many neighbor bins (src/nbin_standard.cpp:213)

In the log file I find

Neighbor list info …
update every 10 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5.3
ghost atom cutoff = 5.3
binsize = 2.65, bins = 41 41 82
6 neighbor lists, perpetual/occasional/extra = 6 0 0

(1) pair meam, perpetual, skip from (5)
attributes: full, newton on
pair build: skip
stencil: none
bin: none
(2) pair meam, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton/skip
stencil: none
bin: none
(3) pair lj/cut, perpetual, skip from (6)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(4) pair table, perpetual, skip from (6)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(5) neighbor class addition, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(6) neighbor class addition, perpetual, half/full from (5)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none

Is it coming from the number of neighbors/atom exceeding 2000 within the cutoff radius of 5.3?
Thanks a lot for your help
best
Pascal


Is it coming from the number of neighbors/atom exceeding 2000 within the cutoff radius of 5.3?

No. The number of neighbors/atom has nothing to do with this error.
It usually means that your box has expanded too much. E.g. when using shrinkwrap boundaries with “escaping” atom(s).

Since this happens after a lot of steps, it is usually advisable to look at
a visualization of the trajectory to determine what is going on.


Thanks a lot Axel,
OK, I understand what happens. I am simulating deposition along the z axis onto an xy substrate. I use a "boundary p p m" condition so that non-sticking atoms can escape yet always remain inside the simulation box.
When using "p p f" or "p p s", atoms escaping above the zmax value are no longer counted, correct? So would that be the solution for avoiding this error?
The total number of injected atoms is given by the fix deposit command, so I can calculate the sticking coefficient.
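For reference, a deposition setup along these lines might look like the following minimal sketch (region bounds, insertion rate, seed, and velocities are placeholder assumptions, not values from this thread):

```
# region above the substrate where new atoms appear (placeholder z bounds)
region          slab block INF INF INF INF 35.0 40.0 units box
# insert 1000 atoms of type 1, one every 100 steps, moving down towards the substrate
fix             dep all deposit 1000 1 100 12345 region slab vz -1.0 -0.8
```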
Best
Pascal

A "p p s" boundary will have the same result.

You must use "p p f" and "thermo_modify lost ignore".

This will also help to maintain parallel efficiency. With a growing simulation cell you can have many empty subdomains and thus would be wasting resources that way (unless you run with only 1 CPU or use “processors * * 1” which is probably a good idea anyway).
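Putting the suggestions together, the setup might look like this minimal sketch (a generic illustration, not a tested input deck from this thread):

```
# fixed z boundaries: escaping atoms are deleted rather than expanding the box
boundary        p p f
# do not abort the run when atoms are lost through the fixed boundary
thermo_modify   lost ignore
# decompose only in the xy plane to avoid empty subdomains along z
processors      * * 1
```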

Yes, in this case I use "processors n n 1", since the substrate is denser than the deposition region where atoms are injected one after the other towards the substrate. So the load remains comparable across the MPI subdomains of size lx/n × ly/n × lz.
So I will use "p p f" and "thermo_modify lost ignore".

Thanks again
Pascal

an alternative would be a ppf boundary and a soft repulsive wall (e.g. fix wall/harmonic) at a large enough distance, then all atoms are contained and the number doesn’t change.

which is the better choice depends on the kind of environment you are simulating.
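The wall-based alternative could be sketched like this (the wall position, prefactor, and cutoff are placeholder values to be adapted to the actual box):

```
# fixed upper/lower z boundaries
boundary        p p f
# soft repulsive wall near the top of the box keeps all atoms contained;
# arguments: face, position, epsilon, sigma, cutoff
fix             topwall all wall/harmonic zhi 100.0 1.0 1.0 2.5
```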

Thanks Axel
I will try this too.
Best
Pascal

Dear Axel,
I will try this second possibility, since with "p p f" and "thermo_modify lost ignore" I get another error about lost atoms at an earlier time step.
best
Pascal