How easy would it be to perform a minimisation inside fix gcmc?

Dear LAMMPS community

I have seen some papers that use LAMMPS as a library called from their in-house code. Mostly, what they do is call LAMMPS to perform a minimisation before the acceptance criterion of their GCMC code; it seems that doing so greatly improves the rate of convergence (I guess you still make as many energy calls, since you have to call the minimiser…)

I would be curious to try it, and I was wondering whether it would be better to add this option directly within the fix gcmc of my LAMMPS installation. I have some programming experience, but unfortunately I have never touched C++. My hope is that it wouldn't take too much work: just calling the minimisation subroutine from fix_gcmc.cpp?

Unfortunately, this is not the case. In LAMMPS you have three “simulation modes”: minimization, run, and none. You can only start a minimization if you are in mode “none”, same for an MD run. This is not a problem when calling LAMMPS as a library from an external driver.
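As a rough sketch of what such an external driver can look like: the loop below applies the Metropolis criterion to energies obtained *after* minimization. Everything here is illustrative, not a real LAMMPS driver: in practice `minimized_energy` would wrap calls such as `lmp.command("minimize ...")` on a LAMMPS instance created through the Python library interface, while here it is replaced by a toy quadratic energy, and the function names are made up.

```python
import math
import random

def minimized_energy(config):
    # Stand-in for the expensive part: in a real driver this would run a
    # LAMMPS minimization on the trial configuration and return the final
    # potential energy. Here: a simple quadratic with its minimum at x = 1.
    return sum((x - 1.0) ** 2 for x in config)

def metropolis_accept(e_old, e_new, beta, rng):
    """Standard Metropolis criterion, applied to the *minimized* energies."""
    if e_new <= e_old:
        return True
    return rng.random() < math.exp(-beta * (e_new - e_old))

def mc_with_minimization(start, n_steps=200, beta=2.0, seed=42):
    rng = random.Random(seed)
    config = list(start)
    e_old = minimized_energy(config)
    accepted = 0
    for _ in range(n_steps):
        trial = [x + rng.uniform(-0.5, 0.5) for x in config]
        e_new = minimized_energy(trial)  # minimize *before* the acceptance test
        if metropolis_accept(e_old, e_new, beta, rng):
            config, e_old = trial, e_new
            accepted += 1
    return config, e_old, accepted / n_steps
```

The key point is only the control flow: because the driver, not LAMMPS, owns the loop, it is free to run a minimization between proposing a move and testing it, which is exactly what is awkward to do from inside fix gcmc.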

In addition, the fix gcmc code is quite convoluted and complex. So it is not a good place to start modifying LAMMPS.

Thank you for the fast answer,

I coded my own GCMC script in Python, which calls LAMMPS as an external routine (I followed the docs and compiled LAMMPS in shared mode, see 2.2. Installation — LAMMPS documentation). I have two problems:

1. Rarely, the very first minimisation shows -nan energies at step 0 and gets stuck there; I have to kill the script manually and restart it.

2. When I use the reaxff style with OpenMP, the minimisations seem somewhat random, in the sense that they sometimes perform more or fewer steps (on the same geometries), giving energies that are often lower than ReaxFF without OpenMP.

These are surely not bugs in LAMMPS; have you heard of anything like this before?

You could (initially) use soft-core potentials from the FEP package to avoid the divergences at high potential energy (just for a small number of minimization steps, until the close contact causing the divergence has been reduced).

Also, you could disallow insertions/moves that create close contacts in the first place.
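A minimal sketch of such a pre-screening check, assuming the driver keeps atom positions as plain coordinate tuples (the helper name and cutoff are made up, and periodic boundaries are ignored for brevity):

```python
def has_close_contact(new_pos, positions, r_min=1.5):
    """Return True if new_pos is closer than r_min to any existing atom."""
    r_min_sq = r_min * r_min
    for p in positions:
        d_sq = sum((a - b) ** 2 for a, b in zip(new_pos, p))
        if d_sq < r_min_sq:
            return True
    return False
```

In the GCMC driver, a trial insertion that fails this check should be counted as a rejected move (rather than silently redrawn), so the acceptance statistics stay unbiased.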

There is a slightly different algorithm in use with OpenMP. Also, if you have high potential energy, then the minimizer may drop into a different local minimum. ReaxFF is particularly sensitive to such divergences since a) the use of charge equilibration amplifies small differences, e.g. from summing numbers in a different order, and b) the potential hypersurface of the ReaxFF potentials is quite rugged, which makes it easy to get “stuck” in different local minima.
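A tiny, self-contained illustration of point a), unrelated to LAMMPS itself: in floating-point arithmetic, summing the same numbers in a different order can give a different result, and charge equilibration can amplify such differences.

```python
# Summing identical values in two different orders.
vals = [1e16, 1.0, -1e16, 1.0]

s_forward = 0.0
for v in vals:          # 1e16 + 1.0 loses the 1.0 to rounding
    s_forward += v

s_sorted = 0.0
for v in sorted(vals):  # a different order loses a different term
    s_sorted += v

print(s_forward, s_sorted)  # the two orders disagree
```

This is why two runs that are “mathematically identical” (e.g. serial vs. OpenMP, which changes the summation order) need not produce bit-identical energies.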

Sorry, I should have been more clear:

What I meant is that this -nan problem occurs before any Monte Carlo moves (when optimising the initial structure). The structure is a platinum nanoparticle; there are no duplicate atoms or atoms that are weirdly close. It never happened before when using LAMMPS normally, so I guess this is a problem related to the way I compiled LAMMPS on this machine.

“if you have high potential energy, then the minimizer may drop into a different local minimum.”

You are right: I tried with a simpler structure and the previous differences disappeared.

You may still have a bad geometry, unsuitable force field parameters, or something similar.
It is very difficult to tell remotely with so little information.

Not necessarily. Divergent forces can happen with perfectly well compiled executables and it is rarely caused by the compiler or compilation configuration settings.