[lammps-users] what might cause "Did not assign all atoms correctly"?

Hi all.

Wondering what could cause the "Did not assign all atoms correctly" error when using the read_data command (reading in an ASCII data file written from a restart).

The boundary conditions are "p s p". It's a 2d setup, so all z coordinates are set to zero. I'm checking now that all the y-values are within the initial range (but that shouldn't matter with "s"-type boundaries, right?)

There are 435288 atoms in the ASCII file; the number of atom lines in the data file agrees with the "435288 atoms" line in the header. LAMMPS seems to parse all the atoms successfully, since it doesn't complain until the "Did not assign all atoms correctly" error, which happens after the MPI broadcast of atoms to all procs. In the LAMMPS output, I get:

> The boundary conditions are "p s p". It's a 2d setup, so all z
> coordinates are set to zero. I'm checking now that all the y-values
> are within the initial range (but that shouldn't matter with
> "s"-type boundaries, right?)

Not so. For the y dimension you need to ensure that the ylo/yhi specified
in the data file bound the atoms. The values may be changed on
subsequent timesteps to shrink-wrap, but they have to be large
enough initially. The read_data doc page discusses this (below).

If that doesn't fix the problem, you'll need to print out the coords
of the atom that is not assigned and figure out why - should be easy
on 1 proc.

Steve

If the system is non-periodic (in a dimension), then all atoms in the
data file should have coordinates (in that dimension) between the lo
and hi values. Furthermore, if running in parallel, the lo/hi values
should be just a bit smaller/larger than the min/max extent of atoms.
For example, if your atoms extend from 0 to 50, you should not specify
the box bounds as -10000 and 10000. Since LAMMPS uses the specified
box size to lay out the 3d grid of processors, this will be sub-optimal
and may cause a parallel simulation to lose atoms when LAMMPS
shrink-wraps the box to the atoms.
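
A rough sketch of picking such "just a bit smaller/larger" bounds, using
the 0-to-50 numbers from the example above and an arbitrary 0.1% padding
(both are only placeholders):

# Pad the y bounds slightly beyond the actual atom extent.
# ymin/ymax would come from a check like the one above.

def padded_bounds(ymin, ymax, frac=0.001):
    pad = frac * (ymax - ymin) or 1e-6   # avoid a zero-width box
    return ymin - pad, ymax + pad

ylo, yhi = padded_bounds(0.0, 50.0)
print("%g %g ylo yhi" % (ylo, yhi))      # "-0.05 50.05 ylo yhi", not -10000 10000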