atoms lost with more processors

Dear all,

I am trying to use LAMMPS to deposit Ga and N atoms onto a clean GaN substrate. When I use 12 processors, the calculation runs with the warning below but still finishes:
“WARNING: Particle deposition was unsuccessful”.

However, when I use 16 or more processors, there is an error at the very beginning of the simulation:

LAMMPS (9 Jan 2009)
Scanning data file …
Reading data file …
orthogonal box = (-0.5 -0.5 30) to (45 49 110)
2 by 2 by 4 processor grid
2198 atoms
Finding 1-2 1-3 1-4 neighbors …
0 = max # of 1-2 neighbors
0 = max # of 1-3 neighbors
0 = max # of 1-4 neighbors
1 = max # of special neighbors
Lattice spacing in x,y,z = 3.189 5.52351 5.20762
314 atoms in group fix
1884 atoms in group mobile1
0 atoms in group depositRegion
Setting up run …
Memory usage per processor = 3.4976 Mbytes
Step Atoms Temp E_pair TotEng Press
ERROR: Lost atoms: original 2198 current 1731
Exit code -5 signaled from b634
Killing remote processes…MPI process terminated unexpectedly
DONE
Signal 15 received.

I changed thermo to 1; the error still exists, and it occurs even before the information for the 0th timestep is printed.
Then I tried "fix 1 all nve/limit 0.01", but that does not help either.
I have also tried different versions of LAMMPS (10 Aug 2010, 23 Mar 2011, etc.), with no change.

From googling, it seems this may be due to the subdomains shrinking as the number of processors increases. Does that mean that, in the first timestep, some atoms move a distance smaller than the subdomain size for 12 processors but larger than the subdomain size for 16 processors? Or is there some other quantity that changes too fast, especially with 16 or more processors? If the subdomain size were the cause, forcing a different decomposition should change the behavior, as sketched below.
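One way to probe this (a sketch only, using the standard processors command; I have not confirmed it changes anything) would be to force a grid with larger subdomains in z:

# place before read_data; replaces the automatic 2 by 2 by 4 grid
# on 16 cores with one that splits z into only 2 slabs
processors 4 2 2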

I also tentatively changed the boundary from "s s s" to "f f f" or "p p p", and the error disappeared. I am totally puzzled and do not know what to try next. Can anybody give me a clue? The boundary variants I tried are listed below for reference.
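Only the relevant input-script line is shown for each variant:

boundary s s s    # shrink-wrapped in all dimensions (original; atoms lost on 16 procs)
boundary f f f    # fixed, non-periodic in all dimensions (no error)
boundary p p p    # fully periodic in all dimensions (no error)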

Thanks to all in advance!

The input file and the data file for these two calculations are identical. The input file is attached, along with the data file and the output files from the 12- and 16-processor runs.

data.c-sideM20 (81.7 KB)

output.12 (87.9 KB)

output.16 (943 Bytes)

in.c-sideM-20 (1.88 KB)

This section of the read_data doc file is probably relevant,
particularly the last line.

IMPORTANT NOTE: If the system is non-periodic (in a dimension), then
all atoms in the data file must have coordinates (in that dimension)
that are "greater than or equal to" the lo value and "less than or
equal to" the hi value. If the non-periodic dimension is of style
"fixed" (see the "boundary"_boundary.html command), then the atom
coords must be strictly "less than" the hi value, due to the way
LAMMPS assigns atoms to processors. Note that you should not make the
lo/hi values radically smaller/larger than the extent of the atoms.
For example, if your atoms extend from 0 to 50, you should not specify
the box bounds as -10000 and 10000. This is because LAMMPS uses the
specified box size to lay out the 3d grid of processors. A huge
(mostly empty) box will be sub-optimal for performance when using
"fixed" boundary conditions (see the "boundary"_boundary.html
command). When using "shrink-wrap" boundary conditions (see the
"boundary"_boundary.html command), a huge (mostly empty) box may cause
a parallel simulation to lose atoms the first time that LAMMPS
shrink-wraps the box around the atoms.
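For illustration, a data-file header sketch with lo/hi values that hug the atoms (the numbers below are taken from the box printed in your log, not copied from your data file, and the header is abbreviated):

# header excerpt (sketch); bounds chosen close to the actual atom extent
-0.5 45.0 xlo xhi
-0.5 49.0 ylo yhi
30.0 110.0 zlo zhi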

Steve

Thanks, Steve.
So I will change the "boundary" setting or the box size in the z direction to avoid this error.
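A minimal sketch of the two workarounds I will try (the enlarged zhi is an assumption, not a value from my final input):

# option 1: fixed non-periodic boundaries instead of shrink-wrapped
boundary f f f

# option 2: keep shrink-wrapping in x and y, fix the z boundary,
# and enlarge zhi in the data file so deposited atoms stay inside,
# e.g.  30.0 150.0 zlo zhi   (hypothetical value)
boundary s s f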