Issue regarding per-atom compute quantities in 28Jun14 LAMMPS on Linux CentOS 6

Happy holidays all!

I'm having an issue when asking for per-atom quantities in my simulations. When one run sequence completes and the program moves on to the next, per-atom quantities such as coordination and centrosymmetry seem to break: I'm left with my supercell populated by a small number of rectangular grids where the values for these quantities become ridiculously high. I'm wondering if it is some sort of domain-decomposition/neighbour-list issue between processors, but I'm not sure. Below is a sample input file where this occurs. I've found that uncomputing and then recomputing after each step seems to hold off the problem, but that could just be a fluke.

I'm using the 28Jun14 build of LAMMPS on Linux CentOS 6 (OpenMPI build).

Thanks!

Matt

#Basic Settings
units metal
atom_style atomic
boundary p p p
dimension 3
read_data 110.atoms

#Specify inter-atomic potential
pair_style eam/alloy
pair_coeff * * NiCo2013.eam.alloy Ni Co
neighbor 2.0 bin
neigh_modify every 1 delay 0 check no

#delete_atoms overlap 1.75 all all
#Energy minimization
thermo 100
thermo_style custom step atoms temp pe lx ly lz pxx pyy pzz pxy press
thermo_modify lost error norm no flush yes
compute energ all pe/atom
compute bbreak all coord/atom 2.55
compute ccent all centro/atom fcc
fix 1 all box/relax iso 0.0 vmax 0.0001
dump min all cfg 100 dump/min.*.cfg mass type xs ys zs id c_energ c_bbreak c_ccent type
min_style cg
minimize 1e-50 1e-50 10000 10000
undump min
unfix 1
uncompute energ
uncompute bbreak
uncompute ccent

#Annealing
reset_timestep 0
timestep 0.001
compute energ1 all pe/atom
compute bbreak1 all coord/atom 2.55
compute ccent1 all centro/atom fcc
fix 1 all npt temp 773 773 0.1 x 0.0 0.0 1 y 0.0 0.0 1 z 0.0 0.0 1
thermo 500
thermo_style custom step atoms temp pe pxx pyy pzz pxy lx ly lz
thermo_modify lost error norm no flush yes
dump relax all cfg 500 dump/anneal.*.cfg mass type xs ys zs id c_energ1 c_bbreak1 c_ccent1 type
run 100000
undump relax
unfix 1
uncompute energ1
uncompute bbreak1
uncompute ccent1

#Cooling
reset_timestep 0
compute energ2 all pe/atom
compute bbreak2 all coord/atom 2.55
compute ccent2 all centro/atom fcc
fix 1 all npt temp 773 300 0.1 x 0.0 0.0 1 y 0.0 0.0 1 z 0.0 0.0 1
thermo 500
thermo_style custom step atoms temp pe pxx pyy pzz pxy lx ly lz
thermo_modify lost error norm no flush yes
dump relax all cfg 500 dump/relax.*.cfg mass type xs ys zs id c_energ2 c_bbreak2 c_ccent2 type
run 100000
undump relax
unfix 1
uncompute energ2
uncompute bbreak2
uncompute ccent2

#Equilibrate
reset_timestep 0
compute energ3 all pe/atom
compute bbreak3 all coord/atom 2.55
compute ccent3 all centro/atom fcc
fix 1 all npt temp 300 300 0.1 x 0.0 0.0 1 y 0.0 0.0 1 z 0.0 0.0 1
thermo 500
thermo_style custom step atoms temp pe pxx pyy pzz pxy lx ly lz
thermo_modify lost error norm no flush yes
dump relax all cfg 500 dump/equil.*.cfg mass type xs ys zs id c_energ3 c_bbreak3 c_ccent3 type
run 100000
undump relax
unfix 1
uncompute energ3
uncompute bbreak3
uncompute ccent3

write_restart NiCo.restart

> I'm left with my supercell populated by a small number of rectangular
> grids where the values for these quantities become ridiculously high.

I don't understand what this means, or what the big-picture problem is.

I'm assuming that the dynamics your successive simulations show are fine
(e.g. thermodynamics, visualization of the system), and that you are
simply dumping some per-atom compute values, like coord/atom and
centro/atom.

Those computes have no "memory" of previous timesteps, so I can't imagine
that there is anything about the sequence of runs that affects their
result on timestep N. E.g. you should get the same value if you write a
restart file at timestep N, restart a new simulation, and invoke dump
output of the compute quantity.
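
A minimal sketch of that check, assuming hypothetical file names
(check.restart, dump/check.*.cfg) and reusing the potential and compute
definitions from your script. Since eam/alloy does not store its
coefficients in the restart file, the pair_style/pair_coeff lines are
repeated after read_restart:

# in the original input, at the timestep of interest:
write_restart check.restart

# in a separate input script:
read_restart check.restart
pair_style eam/alloy
pair_coeff * * NiCo2013.eam.alloy Ni Co
compute energ all pe/atom
compute bbreak all coord/atom 2.55
compute ccent all centro/atom fcc
dump chk all cfg 1 dump/check.*.cfg mass type xs ys zs id c_energ c_bbreak c_ccent
run 0

The per-atom values written by that dump should match the ones produced
at the same timestep by your original sequence of runs.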

Neither of those computes uses rectangular grids, so I don't understand
the comment above, either.

Steve