[lammps-users] Problems with compute cna/atom

Hello everyone,

I use compute cna/atom to identify FCC and surface atoms in a Nickel single crystal for a crack propagation problem. I have the following problems while running the simulation.

  • Except for the first dump, the compute cna/atom output is random/garbage on subsequent timesteps when running on multiple processors. (I saw that this problem has been reported earlier.)
  • Even on one processor, the atoms that are correctly identified as “fcc” and “unknown” in the first step keep the same tag until the end of the simulation, even though more atoms should become “unknown” as the crack advances.

There was a similar problem with the compute ackland/atom command too.

The first two pictures are snapshots of a simulation on a single processor: one shows the atoms before applying a deformation (cna_ti.jpg) and the other after the crack has advanced substantially (cna_tf.jpg). Green atoms are “unknown” atoms and white atoms are “fcc” atoms. The atoms that are green initially stay green throughout. It is as if the cna output for an atom never gets updated.
I have tried changing the cna cutoff slightly, as well as different frequencies of building the neighbor lists.




The cna/atom, ackland/atom, centro/atom commands all have no memory
of previous timesteps. I.e. the calculation they do is only for the
current timestep. So I don't understand the time-dependent part of
your message. There could be a bug in cna/atom, but I think it
should be reproducible for a single snapshot. E.g. if you read in a snapshot
or restart file and compute the cna/atom, you should get the same answer
as doing it in the middle of a long run. I believe you should also
get the same answer in parallel as in serial, for any particular
atom. If you have a simple script (small # of atoms) that violates
either of these, I will
look at it.
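A minimal standalone test along these lines might look like the sketch below. The restart file name is hypothetical, and this assumes the restart was written by the crack run; for eam, the pair_coeff line must be re-specified after read_restart since the potential file contents are not stored in the restart.

```
# Hypothetical single-snapshot test: read a saved restart, evaluate
# cna/atom once, and dump it. Running this on 1 processor and on
# several should give identical per-atom values for every atom.
units           metal
read_restart    crack.restart.5000      # assumed restart file name
pair_style      eam
pair_coeff      * * Ni_u3.eam
compute         cna all cna/atom 3.0
dump            d all custom 1 cna_check.dat id c_cna
run             0
```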


Thanks a lot Steve,

Here is the input file for a small test problem. This one gives weird output on multiple processors.
I shall try the other test and let you know about it.
However, from the attached figure in the previous mail, it is obvious that something is wrong. The atoms on and near the crack surface after the crack has propagated cannot be in FCC configuration (like the other atoms in the bulk). As I had mentioned earlier, I get a similar result with ackland/atom.


Here is the code:

# Nickel 3D crack simulation using EAM

# Initial relaxation with NVE, then NVT

units metal
dimension 3
boundary p p p

atom_style atomic
neighbor 2 bin
neigh_modify delay 1 every 1

# create geometry

lattice fcc 3.524
region box block 0 25 0 25 0 5
create_box 2 box
create_atoms 1 box

mass 1 58.71
mass 2 58.71

# EAM potentials

pair_style eam
# pair_style eam/opt    # only one pair_style can be active; the last one given wins
pair_coeff * * /nfs/06/osu5197/lammps/potentials/Ni_u3.eam

# ********** Define groups ************

# Atoms to be deleted

region delatoms block 11 14 12 12.5 INF INF
group delatoms region delatoms

# delete atoms to create a crack

delete_atoms group delatoms

# Boundary atoms

region btop block INF INF 24.5 INF INF INF
region bbot block INF INF INF 1 INF INF
group btop region btop
group bbot region bbot

group boundary union btop bbot
group unmobile union boundary delatoms
group mobile subtract all unmobile
set group boundary type 2

# initial velocities

compute new mobile temp/deform
velocity mobile create 100 887723 temp new

# Compute CNA

compute 1 mobile cna/atom 3.00


fix 1 all nve
fix 2 mobile temp/rescale 100 100 100 0.05 0.5
fix 3 boundary setforce 0.0 0.0 0.0


timestep 0.001
thermo 200
thermo_modify temp new

run 5000

#****************** 2nd run **********************#


unfix 1
unfix 2
fix 4 boundary setforce 0.0 0.0 0.0
fix 5 all nvt 100 100 200 drag 0.1

# Apply fix deform to apply a strain

fix 1000 all deform 1 y delta 0 2.5 remap x


dump 1 mobile custom 200 mobile1_cna1.dat c_1 x y z
dump 2 all xyz 500 all.xyz


timestep 0.001
run 20000