Overlap check in energy_full() in fix_gcmc

Hi Axel and Steve,

I just wanted to clarify something:

Does the overlap test in the energy_full() function in fix_gcmc account for periodic boundary conditions? From the code, it seems to calculate the distances between the atoms inside the box, but I think it also needs to check distances between atoms across the periodic boundary. Otherwise, wouldn't some overlap scenarios be missed?

For example:

Assume a 1D system of atoms that is periodic along x, with an atom (atom 1) at a fractional coordinate of 0.99. Suppose we want to calculate the distance between this atom and another atom (atom 2) at a fractional coordinate of 0.1. We first convert to Cartesian coordinates (assume the x lattice vector is 10 Ang):

0.99 -> 9.9

0.1 -> 1

Then calculate the distance sqrt((9.9-1)^2) = 8.9. This makes it seem as if atom 1 and atom 2 are quite far from each other.

But actually, there is a periodic image of atom 2 at a fractional coordinate of 1.1 (across the x boundary), i.e. at a Cartesian coordinate of 11. The distance between atom 1 and this image of atom 2 is:

sqrt((9.9-11)^2) = 1.1. So atom 1 and this image of atom 2 are actually quite close.

As you can imagine, accounting for these periodic images is important when checking whether an inserted atom overlaps with an existing atom.
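
To make the numbers concrete, here is a minimal standalone C++ sketch (not LAMMPS code; the box length and coordinates are just the values from the example above) comparing the naive in-box distance with the minimum-image distance:

#include <cmath>
#include <cstdio>

int main() {
  // toy 1D example: box length 10 Ang, atoms at fractional 0.99 and 0.1
  const double boxlen = 10.0;
  const double x1 = 0.99 * boxlen;  // atom 1 at 9.9
  const double x2 = 0.10 * boxlen;  // atom 2 at 1.0

  // naive in-box distance: |9.9 - 1.0| = 8.9
  const double d_naive = std::fabs(x1 - x2);

  // minimum-image distance: wrap the separation into [-boxlen/2, boxlen/2],
  // which is equivalent to measuring against the image of atom 2 at x = 11
  double dx = x1 - x2;
  dx -= boxlen * std::round(dx / boxlen);
  const double d_image = std::fabs(dx);

  std::printf("naive distance:         %.1f\n", d_naive);  // prints 8.9
  std::printf("minimum-image distance: %.1f\n", d_image);  // prints 1.1
  return 0;
}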

Am I missing something?

Thank you!

Regards,

Vrindaa

please note that while the outer loop in the function is over nlocal atoms, the inner loop is over nall, which is nlocal + nghost.
that way, periodic replicas are also considered.
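
in other words, the overlap test has roughly the following structure. this is a simplified sketch, not the actual fix_gcmc source; the position array and counts stand in for atom->x, atom->nlocal, and atom->nghost, and the cutoff stands in for an overlap cutoff:

#include <cstdio>

// simplified sketch of the overlap test loop structure described above
// (not the actual fix_gcmc source): outer loop over owned ("local") atoms,
// inner loop over owned plus ghost atoms, so periodic copies are included.
static bool any_overlap(const double (*x)[3], // positions: local atoms first, then ghosts
                        int nlocal,           // atoms owned by this sub-domain
                        int nghost,           // ghost copies from neighboring sub-domains
                        double cutsq)         // squared overlap cutoff
{
  const int nall = nlocal + nghost;
  for (int i = 0; i < nlocal; i++) {
    for (int j = 0; j < nall; j++) {
      if (i == j) continue;
      const double dx = x[i][0] - x[j][0];
      const double dy = x[i][1] - x[j][1];
      const double dz = x[i][2] - x[j][2];
      if (dx*dx + dy*dy + dz*dz < cutsq) return true;
    }
  }
  return false;
}

int main() {
  // two local atoms far apart inside the box, plus one ghost copy of the
  // second atom across the periodic boundary, close to the first atom
  const double x[][3] = {{9.9, 0.0, 0.0}, {1.0, 0.0, 0.0}, {11.0, 0.0, 0.0}};
  std::printf("overlap found: %s\n", any_overlap(x, 2, 1, 2.0*2.0) ? "yes" : "no");
  return 0;
}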

please familiarize yourself with the data model of how LAMMPS does domain decomposition and handles (transparently) interactions between atoms in different subdomains or periodic replicas: local atoms are “owned” by each subdomain and together they make up the system, while ghost atoms are “copies” of atoms from neighboring subdomains, which may also be periodic replicas in that direction.

axel.

Dear Axel,

Thank you for the clarification!

The problem I have is that when I run the gcmc calculations, upon inserting atoms, sometimes an overlap is detected and the loop breaks successfully. Other times, the overlap is not detected (despite there being an overlap, which I can see using visualization tools). When I investigated this, it seemed to me that if the overlap occurred with one of the atoms in the simulation box, then it was detected successfully. However, if the overlap occurred with a periodic image, then it was not detected.

Would it be possible to give me some clue as to why this could be happening?

I have actually looked a little into how LAMMPS divides the simulation box into multiple smaller boxes, assigns each of those boxes to a processor, and communicates the required information across processors. I used this paper: Plimpton, Steve. Fast Parallel Algorithms for Short-Range Molecular Dynamics. SAND-91-1144, Sandia National Laboratories, Albuquerque, NM, 1993.

But here I am using only one processor, so I thought all of the information would be housed in that one processor. Could this be the problem? Also, when I print ‘atom->nghost’ right before the overlap check is performed, it prints ‘1’. This made sense to me before: if all of the atoms’ information is stored in that one processor, then nall = nlocal + 1, so the distances between all the atoms in the box are calculated. But now I am not sure where the periodic image atoms are…

Any insight would be very helpful.

Thank you!

Regards,

Vrindaa

two things: 1) you (still) do not understand the data model, and 2) contrary to your reasoning, not having any ghost atoms is highly unusual: ghost atoms are the reason LAMMPS does not need to apply minimum image conventions and can run the same code in serial as in parallel. ghost atoms are copies of atoms from the neighboring sub-domain; in the case of just one MPI rank that means the same sub-domain (just from the opposite side). there are specific communication patterns that update the data of those atoms, and they are recreated on every re-neighboring step.
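
to connect this to the 1D example from your first mail: with one rank and a periodic box of length 10, the atom at x = 1 gets a ghost copy at x = 11 on the other side of the box, and a plain distance loop then sees it only 1.1 away from the atom at 9.9, with no minimum-image arithmetic needed. here is a toy sketch of that idea (not LAMMPS code; the box length and ghost cutoff are just illustrative values):

#include <cmath>
#include <cstdio>
#include <vector>

// toy sketch (not LAMMPS code): with a single periodic sub-domain, ghost
// atoms are copies of that sub-domain's own atoms shifted by the box length.
int main() {
  const double boxlen = 10.0;          // box length (illustrative)
  const double cutoff = 2.8;           // ghost atom cutoff (illustrative)
  std::vector<double> x = {9.9, 1.0};  // local atoms from the 1D example
  const int nlocal = (int) x.size();

  // build ghosts: any local atom within "cutoff" of a boundary gets a
  // periodic copy on the other side of the box
  for (int i = 0; i < nlocal; i++) {
    if (x[i] < cutoff) x.push_back(x[i] + boxlen);           // 1.0 -> 11.0
    if (x[i] > boxlen - cutoff) x.push_back(x[i] - boxlen);  // 9.9 -> -0.1
  }
  const int nall = (int) x.size();

  // plain local-ghost distances, no minimum image needed:
  // the ghost at 11.0 is found 1.1 away from the local atom at 9.9
  for (int i = 0; i < nlocal; i++)
    for (int j = nlocal; j < nall; j++)
      std::printf("local %4.1f  ghost %5.1f  distance %4.1f\n",
                  x[i], x[j], std::fabs(x[i] - x[j]));
  return 0;
}
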
thus not having any ghost atoms means that you must be running some very unusual simulation, and there should be some warnings about that. please provide the log of a simple input with just the system setup and a “run 0”. for your reference, here is the equivalent of doing a “run 0” with the “melt” example on 1 MPI rank with 4000 atoms. as you can see from the neighbor list stats at the bottom, there are more ghost atoms than there are local atoms:

LAMMPS (30 Jun 2020)
using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 1.6795962 1.6795962 1.6795962
Created orthogonal box = (0 0 0) to (16.795962 16.795962 16.795962)
1 by 1 by 1 MPI processor grid
Created 4000 atoms
create_atoms CPU = 0.000 seconds
Neighbor list info …
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Setting up Verlet run …
Unit style : lj
Current step : 0
Time step : 0.005
Per MPI rank memory allocation (min/avg/max) = 3.222 | 3.222 | 3.222 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733681 0 -2.2744931 -3.7033504
Loop time of 6.54e-07 on 1 procs for 0 steps with 4000 atoms

152.9% CPU use with 1 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total