Neighbor list overflow / Segmentation fault

Dear LAMMPS community,

I am having difficulty running a simulation using the 30 Aug 2013 distribution. My simulation size is fairly large, 305793 atoms. The input data consists of the final positions of a previous dump file (a heated and cooled block of Cu). I am encountering a neighbor list overflow error at the beginning of the run with the script below. When I increase the page size in neigh_modify to a very large number (e.g. 5000000000), I get a 'segmentation fault, address not mapped' error from the OS (see below). It's not clear to me why I am seeing the list overflow error, especially since I am running the same configuration with the same .eam potential that was used previously. I am relatively new to LAMMPS, so any insight/education/advice would be greatly appreciated.

#---------Initialize Sim---------------

clear
units metal
dimension 3
boundary p p p
atom_style atomic

#---------Create Atoms -----------------

read_data start.data.txt

#---------Define Interatomic Potential -----------------

pair_style eam/alloy
pair_coeff * * Cu_mishin1.eam.alloy Cu
neighbor 2.0 bin
neigh_modify delay 10 every 2 check yes page 500000 one 50000

#---------Run-----------------------------

reset_timestep 0
timestep 0.001
fix 1 all nvt temp 700 700 0.1

# Set thermo output

thermo 1000
thermo_style custom step lx ly lz press pe ke etotal vol temp

dump 1 all atom 40000 dump_up.atom

# Run for 3 nanoseconds (1 fs timestep)

run 3000000
unfix 1

OS ERROR

LAMMPS (30 Aug 2013)
Reading data file …
orthogonal box = (0 0 0) to (100 100 100)
2 by 4 by 4 MPI processor grid
305793 atoms
Setting up run …
[n072:30924] *** Process received signal ***
[n072:30924] Signal: Segmentation fault (11)
[n072:30924] Signal code: Address not mapped (1)
[n072:30924] Failing at address: (nil)
[n072:30924] [ 0] /lib64/libpthread.so.0 [0x36c2a0eb10]
[n072:30924] [ 1] lmp_openmpi(_ZN9LAMMPS_NS8Neighbor15half_bin_newtonEPNS_9NeighListE+0x35c) [0x72fbfc]
[n072:30924] [ 2] lmp_openmpi(_ZN9LAMMPS_NS8Neighbor5buildEi+0x483) [0x722753]
[n072:30924] [ 3] lmp_openmpi(_ZN9LAMMPS_NS6Verlet5setupEv+0x156) [0x986346]
[n072:30924] [ 4] lmp_openmpi(_ZN9LAMMPS_NS3Run7commandEiPPc+0xd1e) [0x94f2ae]
[n072:30924] [ 5] lmp_openmpi(_ZN9LAMMPS_NS5Input15command_creatorINS_3RunEEEvPNS_6LAMMPSEiPPc+0x29) [0x6b27b9]
[n072:30924] [ 6] lmp_openmpi(_ZN9LAMMPS_NS5Input15execute_commandEv+0x2225) [0x6b7f65]
[n072:30924] [ 7] lmp_openmpi(_ZN9LAMMPS_NS5Input4fileEv+0x1cb) [0x6b42cb]
[n072:30924] [ 8] lmp_openmpi(main+0x94) [0x6cccc4]
[n072:30924] [ 9] /lib64/libc.so.6(__libc_start_main+0xf4) [0x36c1e1d994]
[n072:30924] [10] lmp_openmpi(_ZNSt8ios_base4InitD1Ev+0x41) [0x490929]
[n072:30924] *** End of error message ***

> Dear LAMMPS community,

> I am having difficulty running a simulation using the 30 Aug 2013
> distribution. My simulation size is fairly large, 305793 atoms. The input
> data consists of the final positions of a previous dump file (a heated and
> cooled block of Cu). I am encountering a neighbor list overflow error at the
> beginning of the run with the script below. When I increase the page size in

For a system like this you should not get a neighbor list overflow, so
something else has to be wrong. The simplest possible scenario is that
your data file is invalid or that it is not being read in correctly,
for example because of a mismatch in the atom style or some other
inconsistency.
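
One quick way to check this (a minimal sketch, reusing the file names and
atom style from the script above) is to read the data file on its own and do
a zero-step run, which builds the neighbor lists without running any dynamics:

units metal
dimension 3
boundary p p p
atom_style atomic

read_data start.data.txt

pair_style eam/alloy
pair_coeff * * Cu_mishin1.eam.alloy Cu

neighbor 2.0 bin

# a zero-step run triggers setup and neighbor list construction
run 0

If this minimal run already overflows, check that the box dimensions and atom
count reported in the log match the original system; a mismatch between the
coordinates and the box header of the data file can leave atoms overlapping,
which inflates the per-atom neighbor count far beyond the estimate.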

> neigh_modify to a very large number (e.g. 5000000000), I get a 'segmentation
> fault, address not mapped' error from the OS (see below). It's not

Changing the page size doesn't really help here. You would have to
increase the one value instead (which is the estimate of the maximum
number of neighbors of a single atom). But, as I wrote above, for
this kind of system you should not get a neighbor list overflow in
the first place.
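
If you did want to raise the per-atom limit, a sketch (with illustrative
values; LAMMPS expects the page setting to be substantially larger, roughly
10x, than the one setting) would look like:

neighbor 2.0 bin
# raise the per-atom neighbor limit; keep page at least ~10x one
neigh_modify delay 10 every 2 check yes one 5000 page 100000

For a well-behaved Cu system with a 2.0 Angstrom skin, the actual neighbor
count per atom should stay well below the default one = 2000, so needing to
raise it at all is itself a sign that something is wrong with the configuration.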

> clear to me why I am seeing the list overflow error, especially since I am
> running the same configuration with the same .eam potential that was used
> previously. I am relatively new to LAMMPS, so any insight/education/advice
> would be greatly appreciated.

Since you are using a version that is almost two years old, try the
latest version first (use one of the precompiled binaries for convenience;
for debugging it doesn't matter that it is not a custom, highly optimized,
self-compiled build). If there is indeed a bug, it may already have been
fixed, and in any case it will only be fixed in the development version.
If the problem persists, you will have to provide more details on how the
data file was constructed and with what settings, or provide a
(gzipped) copy of the data file itself.

axel.