Using fix GCMC to plot adsorption isotherms

Hello Everyone,
I have been trying to run a GCMC simulation of N2 adsorption in MOF-5, but no matter how long I run the simulation, the GCMC insertion success count is always zero (which is obviously wrong). I have tried changing the N, X, and M parameters of the fix gcmc command, but to no avail. Can someone please help me with this? The exact same simulation works well in RASPA, but RASPA is very slow because it permits neither multithreading nor more than one MPI process. I wanted to attach the relevant simulation files to this post, but for some reason I am not able to attach files, so I am copy-pasting the input file here. Sorry about this.

##################
# INITIALIZATION #
##################
units           real
atom_style      full
dimension 		3
newton 			on
boundary        p p p
pair_style      lj/cut/coul/long 12.5
bond_style      harmonic
angle_style     hybrid fourier cosine/periodic harmonic
dihedral_style  harmonic
improper_style  fourier
kspace_style    ewald 1e-06
pair_modify     shift yes tail no mix arithmetic # shifted, no tail correction, Lorentz-Berthelot mixing rules
special_bonds   lj/coul 0.0 0.0 1.0
dielectric      1.0
box tilt        large
# N2 is not in the data file, so the extra/... keywords below reserve types for it
read_data		data.IRMOF-1.lammps &
				extra/atom/types 2 &
				extra/bond/types 1 &
				extra/angle/types 1 
molecule		n2mol n2.mol.lammps # N2 molecule file, TraPPE model
#replicate 		2 2 2 # create a 2x2x2 supercell
#create_atoms	0 random ${N_n2} 2345243 NULL mol n2mol 534556 

mass 			6	1e-20
mass 			7	14.05

pair_coeff      6   6  0 	0 #pair_coeff type type epsilon(energy units, here kcal/mol) sigma (distance units, here Angstrom)
pair_coeff      7   7  36.00	3.31
bond_coeff      7   0  0.55
angle_coeff     10   harmonic 0 	180

# GROUPS
group			n2 type 6 7 #Group definition of n2 molecule
group           fram  type  1 2 3 4 5  #Group definition of Framework

############
# SETTINGS #
############
reset_timestep  0 
neighbor 		2.0 bin
neigh_modify 	every 1 delay 10 check yes

#min_style 		cg
#minimize		0.0 1e-08 100000 10000000
#run 			0

variable 		nAds equal count(n2)/3
variable 		dt equal dt
variable  		tdamp equal ${dt}*100
variable		pdamp equal ${dt}*1000
compute 		mdtemp n2 temp
compute_modify 	mdtemp dynamic/dof yes
#fix 			1 all npt temp 313.15 313.15 ${tdamp} iso 45 45 ${pdamp}
velocity 		all create 77 2435728
#thermo 			500
#timestep 		1
#run 			10000
#unfix 			1
#write_data 		dataAfterNPT.data

fix 			2 n2 gcmc 10 100 100 0 342134 77 0 0.5 pressure 0.1 mol n2mol group n2
#fix 			3 all nvt temp 313.15 313.15 ${tdamp} 
thermo 			100
thermo_style 	custom step v_nAds press temp f_2[3] f_2[4] f_2[5] f_2[6] 

timestep 		1
run 			200000

Adding the n2mol file as well for reference.

# N2 molecule file. TraPPE model.

3 atoms
2 bonds
1 angles

Coords

1        0.0 0.0 0.0
2        -0.55 0.0 0.0
3        0.55 0.0 0.0

Types

1        6
2        7
3        7

Charges

1        0.964
2       -0.482
3       -0.482

Bonds

1   1      1      2
2   1      1      3

Angles

1   1      2      1      3

Don’t expect LAMMPS to be much faster, as it doesn’t have any smart biasing options, so insertions into a dense system will be very unlikely to succeed. Also, fix gcmc effectively processes in serial and requires additional communication when run in parallel, not to mention that in combination with long-range electrostatics, the full interaction of the entire system needs to be evaluated twice for every MC attempt, including recomputing the structure factors. You are using “ewald” instead of “pppm” as the kspace style, so it will be even slower, except for tiny systems.
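For illustration, the switch would be a one-line change in the script above; the accuracy value here is only a typical choice, not a tested recommendation:

kspace_style    pppm 1.0e-5    # replaces "ewald 1e-06"; pppm scales much better with system size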

Please always keep in mind that fix gcmc was conceived to study (noble) gas diffusion in zeolites and works well for such cases, but if you want serious Monte Carlo simulation functionality, you are better off using a proper MC code.

Thanks for the clarification. Can you suggest any other open-source MC code with parallel capabilities? Otherwise it takes more than 15 days to run a single simulation.

No, I cannot recommend any software. My expertise is in MD (classical and with DFT).
But please note that your problem is inherently parallel: you can run the independent simulations for different data points concurrently, and you can also create equivalent but decorrelated starting conditions for the same data point, then run multiple independent chunks for that data point concurrently and average over them.
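As a minimal sketch of what that could look like with LAMMPS partitions, assuming four partitions launched with something like "mpirun -np 4 lmp -partition 4x1 -in in.gcmc" and the rest of the original script unchanged; the pressures and seeds are placeholders:

variable        p    world 0.01 0.05 0.1 0.5      # one pressure point per partition
variable        seed world 12345 23456 34567 45678 # independent RNG seed per partition
fix             2 n2 gcmc 10 100 100 0 ${seed} 77 0 0.5 pressure ${p} mol n2mol group n2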

I have done something similar for a different task. I shall post my solution separately, as it is a bit ugly… but it works!

The only open source MC software I know of is Towhee http://towhee.sourceforge.net/

Hey! Can you please share your solution, along with a short description of your problem? That would be a great help. Thank you!

Thanks a lot! I did a quick survey and found a couple of open-source codes in addition to MCCCS Towhee; posting them here in case anyone finds them useful: DL_POLY_2 and DL_MONTE (developed during an EPSRC project, and continuously updated). And RASPA is really good for smaller problems with fewer molecules.

Apart from running different data points concurrently, you can create independent starting conditions for each data point by taking the same snapshot, applying small random displacements to all mobile atoms, and then running with different random seeds, discarding the initial portion of each run while the systems decorrelate.
In MC there is no time axis, so it does not matter that you combine independent chunks.
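A minimal sketch of that recipe in LAMMPS input syntax; the snapshot file name is hypothetical, and SEED would be supplied per run, e.g. via "-var SEED 12345" on the command line:

read_data       dataAfterEquil.data              # same equilibrated snapshot for every chunk (hypothetical name)
displace_atoms  n2 random 0.1 0.1 0.1 ${SEED} units box   # small random kicks to the mobile atoms
velocity        all create 77 ${SEED}            # fresh velocities from the per-run seed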

That’s a good point! I will try this method, although now I have to figure out how to assign a random seed in RASPA (a proper MC code) after equilibration. But this might actually work. Thank you 🙂

The same strategy also works as an enhanced sampling method for molecular dynamics; please see the groundbreaking work of Art Voter.

Well, while there has been a fruitful discussion so far, I should say I still find the LAMMPS GCMC module useful for my systems (gas-polymer), since I need to do MD in between, and continuously (automatically) converting formats between different software packages (like Cassandra?) has been a problem for me. I must say I always use the GPU for LAMMPS GCMC, though.
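For a hybrid GCMC/MD setup of this kind, something along the following lines should be close, adapted here to the N2 example from this thread rather than my own system; the fix IDs are arbitrary and the values are illustrative:

fix             rig n2 rigid/nvt/small molecule temp 77 77 100.0   # integrates the rigid N2 molecules between exchanges (assumes some N2 is present)
fix             gc  n2 gcmc 10 100 100 0 342134 77 0 0.5 pressure 0.1 mol n2mol group n2 rigid rig

The rigid keyword hands molecules inserted by fix gcmc over to the rigid-body integrator, so the zero-force-constant N2 bonds are never integrated flexibly.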

Actually, I have to do the same (converting file formats (.cif to .lmpdata and many more) every single time is a huge headache). Can you please check my input script and suggest any changes so that my simulation might work? Or better yet, could you provide some kind of working sample input script for reference? (Unfortunately my system does not support the GPU or KOKKOS package as of now.)
Thanks a lot for your reply 😄

Yes, I will check it out. Thanks for your help 😄

Without a GPU, GCMC with the full_energy option in LAMMPS takes an eternity to run at the moment. The obvious suggestion is to increase the number of MC insertion moves attempted at each invocation, but the issue I mentioned before makes this suggestion hard to give.
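Concretely, that would mean raising the X (exchanges) and M (MC moves) arguments relative to the original 10/100/100 settings; the numbers below only indicate the direction, they are not tuned values:

fix             2 n2 gcmc 10 1000 1000 0 342134 77 0 0.5 pressure 0.1 mol n2mol group n2   # 10x more exchange and MC moves per invocation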

By the way, I’m not familiar with your MOF-5 model, but are you certain about your special_bonds settings? Also, I’m not sure whether it has an effect in MC, but I’m pretty sure TraPPE N2 is a tail-corrected model, not a shifted one.
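If the model is indeed tail-corrected, the change would presumably be a single pair_modify line; note that shift yes and tail yes are mutually exclusive in LAMMPS, so shifting has to be turned off at the same time:

pair_modify     shift no tail yes mix arithmetic   # tail correction instead of shifting, as TraPPE prescribes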

Ohh… I might have missed the tail correction; the original simulation I ran was for CO2, and while rewriting the input script I forgot to change it. The special_bonds settings were generated by the lammps-interface utility, and I am not entirely certain whether they should be there; that’s why I did not change them. I’ll delete them and check the simulation again. Thanks!

That is not correct. Pair style lj/cut/coul/long is supported by both the GPU and the KOKKOS packages, and I already mentioned that you should likely be using pppm instead of ewald. However, with the GPU package it is often faster to run kspace on the CPU.
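As an illustration of that split, assuming a LAMMPS binary built with the GPU package; the pppm accuracy value is a placeholder:

package         gpu 1                        # use one GPU per node
pair_style      lj/cut/coul/long/gpu 12.5    # pair interactions run on the GPU
kspace_style    pppm 1.0e-5                  # no /gpu suffix, so kspace stays on the CPU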

Another often overlooked fact is that, as of the February 2021 update, the LAMMPS GPU package works very well with the Intel HD Graphics GPUs integrated into many Intel CPUs, and those GPUs provide substantial additional performance (HD/UHD Graphics also support double and mixed precision; the most recent Xe Graphics GPUs support only single precision). I’ve done benchmarks with an 8th-gen dual-core Intel i3 CPU in an Intel NUC (i.e. a mobile CPU) and the speedup was very impressive.
Please also see: https://github.com/intel/compute-runtime (Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver)

Please don’t remove it just because I told you so. You need to check every model/force field for these kinds of settings (tail corrections, special bonds, etc.) and make sure your implementation is correct. I’m fairly sure most CO2 models also require tail corrections; shifted ones are quite rare.