How to find the time average of the nearest-neighbour distance?

This should be simple enough, but I am confused about which command to use. I want the per-atom time-averaged distance between each atom and its nearest neighbours. For example, if atom 1 has nearest neighbours 2, 3, 4, and 5, I want the time-averaged distance of the bonds 1-2, 1-3, 1-4, and 1-5. The same thing for all the atoms in the simulation.

I tried to use:

compute 2 all pair/local dist
fix 3 all ave/time 100 1 100 c_2 file dist.txt mode vector

Now I get an error like this: Fix ave/time compute 2 does not calculate an array (src/fix_ave_time.cpp:153)

Even if I remove the mode vector part, I get the same error.
What is compute pair/local actually outputting? And is it possible to get a clean file with columns like:

1st-atomtype 2nd-atomtype dist

for every time step?

This is not at all simple.

  • Nearest neighbors are not necessarily bonds.
  • compute pair/local outputs all pairs that are within the neighbor list cutoff.
  • Those pairs of neighbors will change as atoms move around, so it is not at all easy to average over this list of pairs.

The error message is expected because compute pair/local computes local data, and fix ave/time can only process global data.

As for what compute pair/local is outputting: what its documentation says it does.

This can be done with a dump local command when you also define a compute property/local.

Outputting this kind of information for every time step is a very bad idea: it produces a huge amount of highly correlated data (and thus of low statistical relevance), and the formatted writes and reads alone create significant overhead.
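If the goal is only an average nearest-neighbor distance, one workaround is to time-average the radial distribution function instead, since compute rdf produces global data that fix ave/time can process; the position of the first peak then gives the mean nearest-neighbor distance. A minimal sketch (the IDs and the averaging intervals below are arbitrary choices, not requirements):

# RDF with 100 bins, computed up to the pair style's force cutoff
compute myrdf all rdf 100
# average 10 samples taken every 100 steps, written out every 1000 steps
fix rdfavg all ave/time 100 10 1000 c_myrdf[*] file rdf.txt mode vector

This loses the per-atom resolution, but it is cheap and statistically well behaved.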

Thanks for the reply, Alex.

The problem is that I’m using a Tersoff potential, which is a many-body potential, and that’s why I cannot use compute property/local, as you mentioned here: compute property/local - LAMMPS / LAMMPS Mailing List Mirror - Materials Science Community Discourse

I’m looking for a workaround. For context, I am simulating III-V semiconductor layers, and I want to see how the lattice constant changes due to the internal strain caused by the lattice mismatch. Is there any other way of doing this that you know from experience?
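For what it’s worth, the only idea I have had so far is to track an average lattice constant from the box dimensions, as in the sketch below (the factor of 10 assumes the box spans 10 unit cells along x, and the variable and fix IDs are just placeholders):

# instantaneous lattice constant; assumes 10 unit cells along x
variable alat equal lx/10
# average 10 samples taken every 100 steps, written out every 1000 steps
fix latavg all ave/time 100 10 1000 v_alat file lattice.txt

But that only gives the box-averaged value, not how the lattice constant varies across the strained layers, so I would prefer something based on actual neighbour distances.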

That was 12(!) years ago. Have you checked the documentation? That is where you should look first, and it doesn’t mention that restriction. In fact, I just tried, and there is no problem adding the following lines to a simulation with a many-body potential. The limitation that existed over a decade ago has thus long been lifted.

# pairwise atom IDs for every entry in the neighbor list
compute 1 all property/local patom1 patom2
# pairwise distances for the same neighbor list entries
compute 2 all pair/local dist

# write atom ID pairs and distances to a local dump every 10 steps
dump 1 all local 10 pair_local.lammpstrj c_1[*] c_2

This outputs the two atom IDs and the distance for all pairs in the neighbor list:

ITEM: TIMESTEP
100
ITEM: NUMBER OF ENTRIES
1882
ITEM: BOX BOUNDS pp pp pp
0.0000000000000000e+00 2.1724000000000000e+01
0.0000000000000000e+00 2.1724000000000000e+01
0.0000000000000000e+00 2.1724000000000000e+01
ITEM: ENTRIES c_1[1] c_1[2] c_2
1 128 2.42733 
1 5 2.40558 
1 2 3.69466 
2 28 3.771 
2 32 2.4215 
2 31 2.27161 
2 6 2.48361 
2 156 3.74514 
3 5 2.21378 
3 102 2.36405 
3 7 2.31222 
4 391 2.36882 
4 390 2.38656 
4 5 2.4759 
4 8 2.30356 
5 102 3.70767 
[...]

GaAs10.dat (358.5 KB)
2002_GaAs.tersoff.txt (1.5 KB)
GaAs.lmp (1.1 KB)

Please try running the exact input script GaAs.lmp attached above. It’s for GaAs. I’ve also attached the potential and structure files.

Why?

I get the output below. It ends with an error:

“ERROR: Pair style does not support compute property/local (src/compute_property_local.cpp:259)”

LAMMPS (29 Aug 2024 - Update 1)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
Loaded 1 plugins from C:\Users\sandi\AppData\Local\LAMMPS 64-bit 29Aug2024-MSMPI with Python\plugins
Reading data file …
orthogonal box = (0 0 0) to (56.53 56.53 56.53)
2 by 2 by 4 MPI processor grid
reading atoms …
8000 atoms
read_data CPU = 0.063 seconds

CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE

Your simulation uses code contributions which should be cited:

CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE

Neighbor list info …
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5.6
ghost atom cutoff = 5.6
binsize = 2.8, bins = 21 21 21
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair tersoff, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
Setting up Verlet run …
Unit style : metal
Current step : 0
Time step : 0.001
Per MPI rank memory allocation (min/avg/max) = 3.113 | 3.113 | 3.113 Mbytes
Step Temp Press KinEng PotEng
0 300 1730.3034 310.18557 -26834.718
1000 311.10166 5708.5326 321.66415 -26508.277
2000 294.37458 5363.7761 304.36916 -26527.065
3000 301.30809 5657.7792 311.53808 -26511.865
4000 293.13652 5428.5651 303.08906 -26522.12
5000 302.50485 5579.7527 312.77546 -26516.668
6000 294.38763 5417.7599 304.38265 -26517.666
7000 299.28068 5507.6551 309.44183 -26519.514
8000 299.14374 5550.5791 309.30024 -26516.841
9000 297.11186 5451.5559 307.19938 -26521.772
10000 297.87577 5484.1663 307.98922 -26513.97
11000 298.31914 5379.207 308.44764 -26517.944
12000 306.13196 5543.4395 316.52573 -26522.018
13000 296.38645 5467.5292 306.44933 -26516.225
14000 300.63097 5438.3627 310.83797 -26523.203
15000 303.51978 5435.1861 313.82485 -26522.668
16000 293.40111 5462.007 303.36263 -26511.859
17000 298.97007 5444.2817 309.12067 -26520.5
18000 295.74475 5406.7253 305.78584 -26519.349
19000 301.70813 5438.7772 311.9517 -26518.972
20000 304.92035 5459.7755 315.27297 -26521.381
Loop time of 70.7228 on 16 procs for 20000 steps with 8000 atoms

Performance: 24.433 ns/day, 0.982 hours/ns, 282.794 timesteps/s, 2.262 Matom-step/s
83.8% CPU use with 16 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total

Pair | 34.296 | 34.568 | 34.763 | 2.2 | 48.88
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 22.494 | 22.786 | 23.061 | 2.7 | 32.22
Output | 0.003227 | 0.0049516 | 0.00594 | 1.4 | 0.01
Modify | 7.7758 | 8.0415 | 8.357 | 6.1 | 11.37
Other | | 5.322 | | | 7.53

Nlocal: 500 ave 500 max 500 min
Histogram: 16 0 0 0 0 0 0 0 0 0
Nghost: 1058 ave 1058 max 1058 min
Histogram: 16 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0
FullNghs: 14000 ave 14000 max 14000 min
Histogram: 16 0 0 0 0 0 0 0 0 0

Total # of neighbors = 224000
Ave neighs/atom = 28
Neighbor list builds = 0
Dangerous builds = 0
Setting up Verlet run …
Unit style : metal
Current step : 0
Time step : 0.001
Per MPI rank memory allocation (min/avg/max) = 3.114 | 3.114 | 3.114 Mbytes
Step Temp Press KinEng PotEng
0 304.92035 5459.7755 315.27297 -26521.381
1000 301.00271 5508.3076 311.22232 -26517.328
2000 305.8129 5484.3715 316.19583 -26522.303
3000 306.8505 5563.7444 317.26866 -26523.375
4000 303.41682 5493.2818 313.71839 -26519.823
5000 299.59356 5439.3355 309.76533 -26515.871
6000 301.3036 5491.2526 311.53343 -26517.639
7000 301.60682 5452.1048 311.84695 -26517.953
8000 303.06253 5554.9014 313.35208 -26519.457
9000 299.44782 5439.7301 309.61464 -26515.721
10000 300.32298 5454.4791 310.51951 -26516.626
11000 302.45053 5476.0286 312.7193 -26518.826
12000 298.3732 5506.9519 308.50354 -26514.608
13000 298.66258 5465.0147 308.80274 -26514.908
14000 302.39571 5463.9 312.66262 -26518.769
15000 300.31929 5428.4615 310.5157 -26516.622
16000 301.11491 5457.2619 311.33834 -26517.444
17000 299.93964 5496.371 310.12316 -26516.228
18000 305.58276 5458.9017 315.95788 -26522.067
19000 301.36901 5542.2731 311.60106 -26517.706
20000 302.70357 5491.2864 312.98093 -26519.086
Loop time of 85.0353 on 16 procs for 20000 steps with 8000 atoms

Performance: 20.321 ns/day, 1.181 hours/ns, 235.197 timesteps/s, 1.882 Matom-step/s
78.8% CPU use with 16 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total

Pair | 36.928 | 37.78 | 38.904 | 8.9 | 44.43
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 33.457 | 34.453 | 35.63 | 10.1 | 40.52
Output | 0.067866 | 0.07428 | 0.078846 | 1.2 | 0.09
Modify | 0.16064 | 0.16886 | 0.17375 | 0.9 | 0.20
Other | | 12.56 | | | 14.77

Nlocal: 500 ave 511 max 488 min
Histogram: 2 0 1 2 1 3 4 0 1 2
Nghost: 1149.38 ave 1166 max 1135 min
Histogram: 1 2 2 2 1 3 3 1 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 16 0 0 0 0 0 0 0 0 0
FullNghs: 14847.1 ave 15235 max 14447 min
Histogram: 2 0 2 1 1 5 2 0 1 2

Total # of neighbors = 237554
Ave neighs/atom = 29.69425
Neighbor list builds = 0
Dangerous builds = 0
ERROR: Pair style does not support compute property/local (src/compute_property_local.cpp:259)
Last command: run 20000

Not for me. I am using the latest development version (of course).

According to the git history, the check causing this error was removed in October 2024.

Please note that the (online) documentation by default corresponds to the latest feature release version.

It worked!!! Thank you so much. I guess sometimes it’s as simple as updating LAMMPS😅

There is a reason why we ask people to test systems with the very latest version of LAMMPS first.

For several years now we have also provided portable static non-MPI binaries for Linux to download. These are curated and compiled by LAMMPS developers, so they are the fastest way to confirm that a problem still exists with the latest released version without having to compile LAMMPS yourself. Similar binaries are available for macOS and Windows.
