Issues with GCMC and NVT in Vapor-Substrate Simulation in LAMMPS

Dear LAMMPS User,

I am attempting to run a simulation of a substrate with water vapor. My simulation setup involves dividing the simulation box into two regions:

Top of the box: a slab region where I apply GCMC to control the vapor pressure of the system.
Lower part: the substrate together with the vapor molecules.

I run NVT dynamics on the full system except for the substrate atoms; a rough sketch of the intended layout is given below.
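Schematically, the intent is something like the following (a minimal fragment with placeholder group/region names and numbers, assuming real units and type 2 for the water atoms; this is not my actual input, which is attached below):

# sketch only: placeholder names and values
region   gcmc_slab block EDGE EDGE EDGE EDGE 300.0 350.0 units box   # slab at the top of the box
group    substrate type 1                                            # immobile substrate atoms
group    mobile subtract all substrate                               # everything that is time-integrated

fix      fxnvt  mobile nvt temp 275.0 275.0 100.0
# with the pressure keyword the mu value (-8.1) is ignored by fix gcmc
fix      fxgcmc mobile gcmc 100 10 10 2 29494 275.0 -8.1 0.5 pressure 10.0 region gcmc_slab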

To achieve this, I have been using the attached input files with LAMMPS version 29 Sep 2021. However, the simulation does not produce any output beyond the following, even though it keeps running.

LAMMPS (29 Sep 2021 - Update 1)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
  using 1 OpenMP thread(s) per MPI task
Reading data file ...
  orthogonal box = (-1.9380240 -0.70999900 -15.000000) to (20.358384 20.562000 350.00000)
  1 by 1 by 16 MPI processor grid
  reading atoms ...
  181 atoms
  scanning bonds ...
  3 = max bonds/atom
  scanning angles ...
  3 = max angles/atom
  scanning dihedrals ...
  12 = max dihedrals/atom
  reading bonds ...
  270 bonds
  reading angles ...
  540 angles
  reading dihedrals ...
  1080 dihedrals
Finding 1-2 1-3 1-4 neighbors ...
  special bond factors lj:    0        0        0
  special bond factors coul:  0        0        0
     3 = max # of 1-2 neighbors
     6 = max # of 1-3 neighbors
    18 = max # of 1-4 neighbors
    18 = max # of special neighbors
  special bonds CPU = 0.001 seconds
  read_data CPU = 0.009 seconds
Reading sw potential file mW.sw with DATE: 2020-12-15
180 atoms in group surface
dynamic group liquid defined
dynamic group addatoms defined
WARNING: Using a manybody potential with bonds/angles/dihedrals and special_bond exclusions (src/pair.cpp:244)
WARNING: One or more dynamic groups may not be updated at correct point in timestep (src/fix_group.cpp:171)
WARNING: One or more dynamic groups may not be updated at correct point in timestep (src/fix_group.cpp:171)
WARNING: Fix gcmc using full_energy option (src/MC/fix_gcmc.cpp:483)

To narrow down the source of the error, I ran a test in which GCMC is applied to half of the simulation box for a simple LJ fluid. This example case worked properly.

# GCMC for LJ simple fluid, with dynamics
# T = 2.0
# rho ~ 0.5
# p ~ 1.5
# mu_ex ~ 0.0
# comparable to Frenkel and Smit GCMC Case Study, Figure 5.8

# variables modifiable using -var command line switch

variable        mu index -1.25
variable        temp index 2.0
variable        disp index 1.0
variable        lbox index 50

# global model settings

units           lj
atom_style      atomic
pair_style      lj/cut 3.0
pair_modify     tail no # turn off to avoid triggering full_energy
boundary        p p f

# box

region          box block 0 ${lbox} 0 ${lbox} 0 ${lbox}
create_box      1 box

region          slab block 0 ${lbox} 0 ${lbox} 25 ${lbox}
# lj parameters

pair_coeff      * * 1.0 1.0
mass            * 1.0

# we recommend setting up a dedicated group for gcmc

group           gcmcgroup type 1

# gcmc

fix             mygcmc gcmcgroup gcmc 1 100 1 1 29494 ${temp} ${mu} ${disp} pressure 0.005 region slab
fix             1 all nvt temp 2.0 2.0 0.1
fix             zwalls all wall/reflect zlo EDGE zhi EDGE
# atom count

variable        type1 atom "type==1"
group           type1 dynamic gcmcgroup var type1
variable        n1 equal count(type1)

# averaging

variable        rho equal density
variable        p equal press
variable        nugget equal 1.0e-8
variable        lambda equal 1.0
variable        muex equal ${mu}-${temp}*ln(density*${lambda}+${nugget})
fix             ave all ave/time 10 100 1000 v_rho v_p v_muex v_n1 ave one file rho_vs_p.dat
variable        rhoav equal f_ave[1]
variable        pav equal f_ave[2]
variable        muexav equal f_ave[3]
variable        n1av equal f_ave[4]

# output
compute_modify  thermo_temp dynamic/dof yes
#compute         thermo_press all pressure thermo_temp
variable        tacc equal f_mygcmc[2]/(f_mygcmc[1]+${nugget})
variable        iacc equal f_mygcmc[4]/(f_mygcmc[3]+${nugget})
variable        dacc equal f_mygcmc[6]/(f_mygcmc[5]+${nugget})
thermo_style    custom step temp press pe ke density atoms v_iacc v_dacc v_tacc v_rhoav v_pav v_muexav v_n1av
thermo          1000

dump            1 all custom 10 test.gcmc.lj.lammpstrj id type x y z vx vy vz
dump_modify     1 sort id

# run

run             1000000

The output of the above script is as follows:

Step Temp Press PotEng KinEng Density Atoms v_iacc v_dacc v_tacc v_rhoav v_pav v_muexav v_n1av
       0            0            0            0           -0            0        0            0            0            0            0            0            0            0
    1000    2.0739961 0.0030743144 -0.022606211    3.0944463     0.001504      188   0.95755403   0.96179802   0.99106256   0.00131592 0.0026341975    12.021816       123.34
    2000    2.3342505 0.0033075063 -0.006491215     3.481815     0.001432      179   0.95757449   0.96145439   0.98620337   0.00143392 0.0030825456    11.849581       138.86
    3000    2.2802809  0.003478746 -0.0090035573    3.4026989     0.001544      193   0.95767396   0.96297486   0.98659966   0.00159952 0.0034801434    11.630191       159.28
    4000    2.1393647  0.003633831  -0.01273557    3.1941212      0.00172      215   0.95859382   0.96179193   0.98561508   0.00169712 0.0036851333    11.510703       171.48
    5000    2.2091034 0.0042540784 -0.0061327643    3.3000186     0.001944      243   0.95965029   0.96074081   0.98543786   0.00183944  0.004097954    11.349656       187.51
    6000    2.2170991 0.0043717764 -0.014395972    3.3122927     0.001992      249   0.95832141   0.96123798    0.9856127    0.0020192 0.0045293704    11.162913       211.88
    7000     2.160243 0.0040297167 -0.012031101    3.2266342     0.001888      236   0.95817141   0.96167988   0.98551754   0.00207568 0.0045988313    11.106981       217.82
    8000    2.2457954 0.0047532472 -0.017596743     3.355981      0.00212      265   0.95749699     0.962152   0.98602446   0.00210512 0.0046866174    11.079109       222.15
    9000    2.3657757 0.0047493316 -0.0051657931    3.5346372     0.002024      253   0.95775656   0.96181846   0.98669033   0.00208464  0.004637359    11.098776       220.31
   10000    2.2723047 0.0049489149 -0.010956748    3.3960626       0.0022      275   0.95828042   0.96099142   0.98707625   0.00213952 0.0048525838    11.046637       225.93
   11000    2.2086118 0.0046713666 -0.012487406     3.300556     0.002144      268   0.95869052   0.96079813   0.98623812   0.00218368 0.0048409925    11.005383       232.68
   12000    2.2143607 0.0051171688 -0.012820724    3.3097625     0.002256      282   0.95878175   0.96043096   0.98598789   0.00222464 0.0047906312    10.968668       237.35
   13000    2.1015738 0.0051070322 -0.004834693    3.1418528       0.0024      300   0.95858722   0.96033104   0.98598775   0.00226688 0.0048525867    10.931029        241.8
   14000    2.1314644 0.0047510957  -0.01314817    3.1855704       0.0022      275   0.95892711   0.96004203   0.

Could you please help me understand what might be going wrong and how I can resolve this issue?

I have also included the input files for your reference.
data.gra_vapor (42.2 KB)
input.test (1.8 KB)
mW.sw (783 Bytes)
Thank you for your assistance.

It is not likely that somebody will take a closer look at any issues with such an old LAMMPS version. We released a new stable version just a few days ago.

It doesn’t make any sense to run with so many MPI tasks and so few atoms.

As mentioned above, I am not looking at any details until you can show that the issue persists with the latest stable version of LAMMPS. You may use a debugger to determine where exactly LAMMPS gets stuck. I suggest you also get rid of the warnings about the dynamic groups.

This works with the latest version and does not get stuck with a single MPI process. Consider updating.

This file is bogus and results in an error. The GCMC command is wrong.

I’d like to mention once again that the comments in this file are wrong. The parameters have nothing to do with the cited reference, and Aidan is not the author of this file.

Dear Axel,

Thank you very much for your response and for pointing out the small details that need attention.

However, after running the simulation on the latest LAMMPS version (29 Aug 2024), I am still facing the same issue that I mentioned earlier. Below is the output I am receiving:

LAMMPS (29 Aug 2024)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
  using 1 OpenMP thread(s) per MPI task
Reading data file ...
  orthogonal box = (-1.938024 -0.709999 -15) to (20.358384 20.562 350)
  1 by 1 by 2 MPI processor grid
  reading atoms ...
  181 atoms
  scanning bonds ...
  3 = max bonds/atom
  scanning angles ...
  3 = max angles/atom
  scanning dihedrals ...
  12 = max dihedrals/atom
  orthogonal box = (-1.938024 -0.709999 -15) to (20.358384 20.562 350)
  1 by 1 by 2 MPI processor grid
  reading bonds ...
  270 bonds
  reading angles ...
  540 angles
  reading dihedrals ...
  1080 dihedrals
Finding 1-2 1-3 1-4 neighbors ...
  special bond factors lj:    0        0        0
  special bond factors coul:  0        0        0
     3 = max # of 1-2 neighbors
     6 = max # of 1-3 neighbors
    18 = max # of 1-4 neighbors
    18 = max # of special neighbors
  special bonds CPU = 0.000 seconds
  read_data CPU = 0.011 seconds
Reading sw potential file mW.sw with DATE: 2020-12-15
180 atoms in group surface
dynamic group liquid defined
dynamic group addatoms defined

CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE

Your simulation uses code contributions which should be cited:
- Type Label Framework: https://doi.org/10.1021/acs.jpcb.3c08419
The log file lists these citations in BibTeX format.

CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE

WARNING: Using a manybody potential with bonds/angles/dihedrals and special_bond exclusions (src/pair.cpp:243)
WARNING: Fix gcmc using full_energy option (src/MC/fix_gcmc.cpp:544)

Could you please advise what might be causing this issue?

I have not (yet) mastered the art of reading the mind of a remote computer and thus I am unable to debug processes on remote machines. You will have to do that yourself, as I already suggested in a previous post. Once you have determined in which function LAMMPS gets stuck, it will be easier to say more. Please also note that @Germain has been able to run on a single MPI rank. Given your tiny system, you should not attempt to run in parallel at all.

That is a test case I set up to identify the error, and it works for me as well, even with the older version (29 Sep 2021), as shown in the output after the line you quoted.

Yes, I am also encountering an error with this file, which is why I opened this thread. When I am not using GCMC, it runs fine.

Thank you for your suggestion. I will make sure to update the comments accordingly.

Yes, I am working with the debugger as you suggested.
I just wanted to point out that @Germain probably ran the LJ system (which was only an attempt to narrow down the issue), and that one works for me too. The issue seems to lie with the “input.test” file, which is the substrate-with-vapor system.

When I run your posted input.test script, I get an error message:

ERROR: Fix gcmc cannot exchange individual atoms belonging to a molecule (src/MC/fix_gcmc.cpp:568)

When running with multiple MPI processes, the calculation stalls, which suggests that fix gcmc uses a call to Error::all() to report an error where it should be using Error::one(): Error::all() is a collective call that every MPI rank has to reach, so if only one rank detects the error condition, the remaining ranks never make the matching call and the run hangs.

[Update:]

This is a bug in LAMMPS that is triggered when running with multiple MPI ranks and hitting one of a few error conditions in your input. It is fixed in commit akohlmey/lammps@3e2f929 (“must call Error::all() from all MPI ranks”).


Nice that you got that one.

But does it crash in the same way when you change the gcmc command to deposit type 2 atoms (water) instead of type 1 (bonded carbon)?

As the doc says:

When not using the mol keyword, you should ensure you do not delete atoms that are bonded to other atoms, or LAMMPS will soon generate an error when it tries to find bonded neighbors. LAMMPS will warn you if any of the atoms eligible for deletion have a non-zero molecule ID, but does not check for this at the time of deletion.

This seemed like documented behavior to me.
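In other words, something along these lines (a rough sketch; the group name, atom types, and numbers are guesses at your setup, not taken from your files) would make only unbonded water atoms eligible for exchange:

group    watergroup type 2                                       # mW water only, no bonded atoms
fix      mygcmc watergroup gcmc 100 10 10 2 29494 275.0 -8.1 0.5 region slab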

Yes, it crashes in the same manner regardless of whether I use atom type 1 (bonded carbon) or type 2 (water). From my limited understanding of LAMMPS, the issue might be related to atom_style full, where both the carbon and the water atoms carry molecule IDs. When fix gcmc attempts an insertion or deletion, LAMMPS may be trying to manage the molecule IDs of these atoms, which could be causing the crash.

I tried switching to atom_style atomic, since my substrate atoms are immobile during the simulation, which makes the connectivity information irrelevant. This simulation works well.
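Concretely, the change amounts to something like the following (a simplified sketch with placeholder type numbers, not my exact input):

# before: atom_style full -> substrate atoms carry molecule IDs, bonds, angles, and dihedrals,
#         which fix gcmc cannot exchange atom-by-atom
atom_style   atomic            # no molecule IDs or topology; the frozen substrate does not need them
group        vapor type 2      # only unbonded mW water is handled by fix gcmc and fix nvt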

I appreciate your and @akohlmey’s efforts to resolve the issue and look forward to hearing your thoughts on this.