fix gcmc not supporting full_energy with molecules and MPI. Where does the problem lie?

Dear LAMMPS users,

After upgrading to the latest version of LAMMPS, I noticed that fix gcmc no longer supports the full_energy option with molecules and MPI parallelization, whereas it used to in older versions. This error is not documented, but I have read in a previous thread that the developers added a check to the source code to prevent using those options together because they cause "incorrect trajectories".

I can imagine many problems arising from running hybrid MC/MD simulations, and I would say that I don't expect trajectories to be "correct" in the regions of the simulation box where the hybrid MC/MD operates. But I cannot figure out the specific problem caused by long-range interactions, molecules, and parallelization used simultaneously.

I have already used these options with older versions of the code to run Dual Control Volume Grand Canonical Molecular Dynamics (DCV-GCMD) simulations, where fix gcmc was used in well-defined regions of the simulation box to control the local density of the fluid (typically small molecules such as CO2, CH4, N2, etc., in the gas state). Trajectories were analyzed in other regions, away from the GCMC control regions.

Now, because of this new error message, I am worried about the validity of my results. Before considering redoing it all on a single processor, I have several questions:

1 - What is the exact nature of the problem in the current fix gcmc code when these options (molecules + kspace/full_energy + MPI) are enabled?

2 - What would be the consequences of removing the error check (i.e. reverting to the behaviour of older versions of LAMMPS) and running simulations with these options? To my knowledge, GCMD already yields "incorrect trajectories", so what would be even worse in this specific case?

Best regards,

Hi,

When LAMMPS rotates molecules with the full_energy option on, it goes more or less like this:

  1. Calculate the energy E0 before the rotation.
  2. Perform the rotation (potentially moving atoms across processors).
  3. Calculate the energy E1 after the rotation to get ∆E = E1 - E0.
  4. Use ∆E in the Metropolis criterion to accept or reject the rotation.

The problem is that if the rotation involves atoms changing processors, the rejection step does not move them back to their original positions. You can see the discussion on GitHub here: https://github.com/lammps/lammps/issues/467.
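
To make the failure mode concrete, here is a minimal serial sketch of such a rotation move (illustration only, not the actual fix_gcmc code; the toy Lennard-Jones energy and all the names are placeholders). In a domain-decomposed MPI run it is the final "restore on rejection" step that breaks, because some of the rotated atoms may already have migrated to another processor:

  // Minimal serial sketch of a Metropolis rotation move (not LAMMPS code).
  #include <array>
  #include <cmath>
  #include <cstddef>
  #include <cstdio>
  #include <random>
  #include <vector>

  using Vec3 = std::array<double, 3>;

  // Toy pairwise 12-6 Lennard-Jones energy (reduced units) so the sketch runs.
  double total_energy(const std::vector<Vec3> &x) {
    double e = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
      for (std::size_t j = i + 1; j < x.size(); ++j) {
        double r2 = 0.0;
        for (int k = 0; k < 3; ++k) {
          double d = x[i][k] - x[j][k];
          r2 += d * d;
        }
        double ir6 = 1.0 / (r2 * r2 * r2);
        e += 4.0 * (ir6 * ir6 - ir6);
      }
    return e;
  }

  // Rotate the atoms listed in `mol` about their geometric center by `angle`
  // around the z axis (a real code would pick a random axis).
  void rotate(std::vector<Vec3> &x, const std::vector<int> &mol, double angle) {
    Vec3 c{0.0, 0.0, 0.0};
    for (int i : mol)
      for (int k = 0; k < 3; ++k) c[k] += x[i][k] / mol.size();
    double ca = std::cos(angle), sa = std::sin(angle);
    for (int i : mol) {
      double dx = x[i][0] - c[0], dy = x[i][1] - c[1];
      x[i][0] = c[0] + ca * dx - sa * dy;
      x[i][1] = c[1] + sa * dx + ca * dy;
    }
  }

  // One Metropolis rotation attempt; returns true if accepted.
  bool attempt_rotation(std::vector<Vec3> &x, const std::vector<int> &mol,
                        double beta, std::mt19937 &rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double e0 = total_energy(x);             // 1. energy E0 before the move
    std::vector<Vec3> saved;                       //    remember old coordinates
    for (int i : mol) saved.push_back(x[i]);
    rotate(x, mol, (2.0 * uni(rng) - 1.0) * 0.5);  // 2. trial rotation
    const double de = total_energy(x) - e0;        // 3. dE = E1 - E0
    if (uni(rng) < std::exp(-beta * de)) return true;  // 4. Metropolis accept
    // Rejected: restore the old coordinates.  In parallel, some of these atoms
    // may by now belong to a different rank, which is what issue #467 is about.
    for (std::size_t n = 0; n < mol.size(); ++n) x[mol[n]] = saved[n];
    return false;
  }

  int main() {
    // Tiny demo: one diatomic "molecule" (atoms 0 and 1) plus two free atoms.
    std::vector<Vec3> x = {{0, 0, 0}, {1.1, 0, 0}, {3, 0, 0}, {0, 3, 0}};
    std::vector<int> mol = {0, 1};
    std::mt19937 rng(12345);
    bool ok = attempt_rotation(x, mol, /*beta=*/1.0, rng);
    std::printf("rotation %s\n", ok ? "accepted" : "rejected");
    return 0;
  }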

The error message we now give was added in this pull request: https://github.com/lammps/lammps/pull/822.

A consequence of removing that error check is that you would no longer sample phase space with the correct Boltzmann probability, proportional to exp(-beta * E), i.e. you would get bad results. The performance hit from being forced to run on a single MPI process can be partly offset by running on powerful GPUs.

CC to Aidan who might have further comments.

Anders

Hello everybody!

Anders, if I understood what you are saying correctly, this error should only appear when someone performs Monte Carlo moves of molecules. If someone uses MC only to insert or delete molecules and lets LAMMPS update the positions through MD, this error should not be a problem.

Is what I am saying right?

cheers

Hi Anders,

Thanks for your reply. I have read the discussion on GitHub and now I get the point.

I’ve had a look at the source code, and it seems like the problem has been corrected for translation moves (by using atom->nlocal instead of a local variable to count local atoms). However, your suggestion to rotate molecules back to their original positions when rotation moves are rejected was not implemented.

Correct me if I am wrong, but after inspecting the source code for the insertion and deletion attempts, I think these moves do not suffer from the same problem. Maybe the error check could then be refined so that only fix gcmc simulations with molecules, full_energy, and nmcmoves > 0 would generate an error.
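
To make that concrete, something along these lines is what I have in mind, written here as a standalone toy with placeholder flag names (they are not the actual FixGCMC members):

  #include <cstdio>
  #include <stdexcept>

  // Hypothetical narrowed check: only refuse the combination when molecule
  // MC moves (translations/rotations) can actually be attempted.
  void check_gcmc_options(bool full_energy, bool molecule_mode, int nprocs,
                          int nmcmoves) {
    if (full_energy && molecule_mode && nprocs > 1 && nmcmoves > 0)
      throw std::runtime_error(
          "molecule MC moves with full_energy are not supported on "
          "more than one MPI process");
  }

  int main() {
    // Exchanges only (nmcmoves == 0): accepted even on many processors.
    check_gcmc_options(true, true, 8, 0);
    std::printf("exchange-only case accepted\n");
    // Translations/rotations requested: triggers the error.
    try {
      check_gcmc_options(true, true, 8, 100);
    } catch (const std::exception &e) {
      std::printf("refused: %s\n", e.what());
    }
    return 0;
  }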

Best,

Romain

Yep, I agree with you. Regular insertion/deletion, and also translation, are not affected by this and should also be allowed with MPI.

Maybe you want to submit a pull request to fix this?

Anders

Yes, I’ll do this asap.

Thanks again !

All the best,
Romain

I do not believe this is just a matter of a small fix. It might work "somehow", but it appears (to me) very difficult to implement correctly.

Monte Carlo is basically sequential: accepting or rejecting a single insertion/move depends on the whole system configuration at that moment. Parallelizing it requires you to at least subdivide the simulation box into sub-boxes larger than the minimal interaction range, which is impossible with long-range electrostatics. Furthermore, you would have to tailor a specific parallelization scheme compatible with such a volume subdivision and perform simultaneous moves only in "independent" (non-neighboring) sub-volumes. I did this years ago with a fairly simple parallel Monte Carlo code for coarse-grained systems with only short-ranged potentials, and I am still not sure it really worked 100% under all circumstances (but I found no discrepancies ;)
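
For what it is worth, the coloring I mean looks roughly like this toy checkerboard (assuming short-ranged interactions and cells at least as large as the cutoff; this is an illustration, not LAMMPS code):

  #include <cstdio>

  // Color of cell (ix,iy,iz) in a 3D grid of sub-boxes: 0..7 from the parity
  // of each index.  Two distinct cells of the same color never touch, so MC
  // moves confined to same-colored cells cannot interact (for a cutoff no
  // larger than the cell size) and can be attempted simultaneously.
  int cell_color(int ix, int iy, int iz) {
    return (ix % 2) + 2 * (iy % 2) + 4 * (iz % 2);
  }

  int main() {
    const int n = 4;  // 4 x 4 x 4 cells; use even counts for periodic boxes
    for (int color = 0; color < 8; ++color) {
      std::printf("color %d cells:", color);
      for (int ix = 0; ix < n; ++ix)
        for (int iy = 0; iy < n; ++iy)
          for (int iz = 0; iz < n; ++iz)
            if (cell_color(ix, iy, iz) == color)
              std::printf(" (%d,%d,%d)", ix, iy, iz);
      std::printf("\n");
    }
    return 0;
  }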

Bottom line: when I started using the very versatile Monte Carlo module, I did not think about parallelization, because I have felt that pain once myself and I do not mind waiting as long as the results are reproducible.

Regards

M.

The question here is: can one get physically correct results with only insertions/removals and translations? If so, using MPI or not should not change the statistics of the results at all, right?

I didn’t quite understand which part you said was problematic here. Could you elaborate?

Also, Aidan, could you comment?

Anders