Periodic GFN-FF in LAMMPS

Dear All,

I am interested in performing MD simulations with LAMMPS, using the periodic version of Grimme’s force field GFN-FF (original paper: S. Spicher et al.: Angew. Chem. Int. Ed. 59, 15665 (2020); periodic version: J. D. Gale et al.: J. Chem. Theory Comput. 17, 7827 (2021)).

Is there any chance that such a method will be implemented in LAMMPS some day?

Best regards

That depends on what you are willing to do to get it implemented. Usually that means you need to do the C++ programming yourself. Of course, you could also see if you can find a collaborator who would like to spend their time on such a challenge.

There are multiple ways to do such a thing:

  • create a client-server script in python where LAMMPS is coupled to an external code via the LAMMPS MDI package using the MolSSI MDI library
  • create a “wrapper” pair style that then forwards the information to the C-API of the xTB code for computing forces, similar to how it is done for several machine learning pair style packages in LAMMPS (ML-QUIP, ML-HDNNP, ML-PACE)
  • write a new implementation in C++ from scratch (similar to REAXFF or AMOEBA)
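The first route can be sketched schematically. Below is a toy Python driver/engine loop that mimics the data flow of the client-server pattern; plain function calls stand in for the actual MDI socket/MPI communication, and `ToyEngine` with its harmonic forces is an invented placeholder, not a real xTB or MDI call:

```python
# Schematic driver/engine exchange, mimicking the client-server (MDI) pattern:
# the driver (the LAMMPS side) sends coordinates each step and receives
# forces from the engine (the external force-field code).
# ToyEngine and its harmonic potential are stand-ins for illustration only.

class ToyEngine:
    """Stand-in for the external code on the other end of the connection."""
    def compute_forces(self, coords):
        # toy potential: harmonic tether to the origin, F = -k x
        k = 0.5
        return [[-k * c for c in xyz] for xyz in coords]

def run_driver(engine, coords, nsteps, dt=0.01):
    """Driver loop: send coordinates, receive forces, update positions.
    A steepest-descent-like update (no masses/velocities) is used here
    just to show the per-step data flow between driver and engine."""
    for _ in range(nsteps):
        forces = engine.compute_forces(coords)   # "request forces" step
        coords = [[c + dt * f for c, f in zip(xyz, fxyz)]
                  for xyz, fxyz in zip(coords, forces)]
    return coords

coords = run_driver(ToyEngine(), [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]], nsteps=100)
```

In a real MDI setup, the calls inside the loop become protocol commands exchanged over a socket or MPI, but the per-timestep structure stays the same.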

The third option would be the cleanest, fastest, and most flexible, but also the most demanding and time consuming one.

If you want, you can submit a “feature request” issue to the LAMMPS GitHub repository, but don’t get your hopes up. There are plenty of feature requests still looking for volunteers.

Most of LAMMPS was implemented because people needed the feature for themselves (or had developed the method) and then were generous enough to contribute the resulting package to LAMMPS (or could convince LAMMPS developers to help adapting/correcting/improving it).


I’d be interested to pitch in, especially since I help maintain a charge equilibration package (ELECTRODE) whose architecture could be very easily reused for this purpose. (Also, I’m not sure how the other usual “code coupling” methods would handle dynamic charges.)

But the only way you’d get this to work is to email Julian Gale and Paolo Raiteri and ask them if they’d be willing to write a C++ LAMMPS implementation, or help you write one. There’s all kinds of wonderful magic that’s been thrown into this particular cauldron, and without regular feedback and cooperation from them you’re likely to have all kinds of subtle bugs in your eventual implementation.

Having said that, they (and I) are Australian researchers, and we are usually not paid purely to write software. So there’d have to be very nice papers in the offing, and a very substantial answer to the question “Why not just use xTB?”, especially since this force field sounds like it will be prohibitively expensive for the kinds of systems that LAMMPS often scales to.


As you know, to simulate periodic systems, we need the pGFN-FF, which is implemented in GULP (thanks Dr. Gale once again for the paper and implementation), so xTB is not an option.
I have been testing systems with a few thousand atoms in GULP (periodic solids, solvent and guest molecules), and it is feasible. However, it would be amazing if we could run simulations with the pGFN-FF model on GPUs; that’s why I thought about LAMMPS.
Another reason is the replica exchange method implemented in LAMMPS, which could be combined with PLUMED for free energy methods (never tried, but I suppose it works). As far as I know, GULP only runs replicas when performing NEB calculations.

I am still thinking about how I can help the community on this subject, considering my programming skills, the time I have available, and my lack of knowledge of how the LAMMPS code works. But I’ll let you know 🙂

In my opinion, the most straightforward and helpful thing you could do is to use GFN in GULP and demonstrate its performance – show that it is (let’s say) 10x better on XYZ system for only 2x the cost. Using it consistently as a tool in the package that it is available in then creates an argument that it should be written into LAMMPS and provides incentive for starving postdocs like me to give it a crack.

It’s also worth trying to understand its various subfeatures. For example, GFN appears to have dynamic hydrogen bonding angle and dihedral terms. What would happen if you ran an SPC/E simulation and added just that (with some reparametrisation to balance out the cohesive energy)? Would you get, again, 10x better accuracy for 2x the cost?

Remember that ultimately “all models are wrong – but some are useful”. If you can effectively use the already-coded implementations of GFN to benchmark its accuracy and efficiency, that is already very helpful to the community (and yourself).

As always, I am just an Internet stranger and you need to form your own scientific hypotheses and get final advice from your supervisor or mentor.

I disagree on that one. A periodic derivation is required for GULP because of the way it computes forces, but that does not automatically apply to LAMMPS. Look at ReaxFF, for example, which is very similar in how it is constructed: it doesn’t use a periodic representation and handles electrostatics with a balanced damping function (aka Wolf summation).
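To illustrate the Wolf-style "balanced damping" idea: the pair term is a damped, shifted Coulomb interaction that goes to zero at the cutoff, so no Ewald/k-space sum over periodic images is needed. A minimal sketch follows; the alpha and cutoff values are arbitrary illustration choices, and the per-atom self-energy term of the full Wolf sum is omitted here:

```python
import math

def wolf_pair_energy(qi, qj, r, alpha, rc):
    """Damped, shifted Coulomb pair energy (Wolf-style):
    erfc() damps the long 1/r tail, and subtracting the value
    at the cutoff rc shifts the energy to zero at rc."""
    if r >= rc:
        return 0.0
    return qi * qj * (math.erfc(alpha * r) / r - math.erfc(alpha * rc) / rc)

# at the cutoff, the shifted energy vanishes by construction
e_cut = wolf_pair_energy(1.0, -1.0, 3.0, alpha=0.2, rc=3.0)

# well inside the cutoff, it approaches the bare Coulomb 1/r for small alpha*r
e_near = wolf_pair_energy(1.0, -1.0, 0.5, alpha=0.2, rc=3.0)
```

The design point is that all electrostatics become short-ranged pairwise terms, which is what makes a ReaxFF-like (non-periodic) formulation workable in a domain-decomposed code.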

So using a wrapper around xTB could be a rather simpleminded first implementation. You would just have to feed the “local” system of each domain including its ghost atoms to xTB, make sure that there is a sufficiently large communication cutoff and thus overlap between subdomains, and then let xTB do its job and retrieve the forces and charges for the local(!) atoms only and do a forward communication of charges during each timestep.
Then you can run tests on a few different systems using different numbers of processes, and thus different domain decompositions, to see how much the trajectories diverge. The downside of this would be the wasted CPU time spent on the ghost atoms, but it would get you something usable for MD much more quickly.
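The ghost-atom bookkeeping described above can be sketched with a toy 1D decomposition: each domain hands its local plus ghost atoms to the (stand-in) external force call, then keeps only the forces on its local atoms; forces computed on ghosts are discarded, since the neighboring domain computes those as its own locals. The spring-chain potential below is an invented placeholder for the real xTB call:

```python
def toy_forces(positions):
    """Stand-in for the external (xTB-like) force call: a nearest-neighbor
    spring chain with rest length 1.0. F_i depends on atom i's neighbors,
    so ghost atoms at the subdomain edges are needed for correct edge forces."""
    n = len(positions)
    f = [0.0] * n
    for i in range(n - 1):
        d = positions[i + 1] - positions[i] - 1.0
        f[i] += d
        f[i + 1] -= d
    return f

def domain_forces(all_pos, lo, hi, nghost=1):
    """Compute forces for local atoms [lo, hi): feed the external code the
    local atoms plus nghost ghosts on each side (the communication cutoff),
    then retrieve forces for the local(!) atoms only."""
    glo, ghi = max(0, lo - nghost), min(len(all_pos), hi + nghost)
    sub_f = toy_forces(all_pos[glo:ghi])
    return sub_f[lo - glo:hi - glo]

# positions slightly perturbed from a uniform chain
pos = [0.0, 1.1, 2.0, 3.05, 4.0, 5.0]

# two subdomains: atoms 0-2 and atoms 3-5
f_left = domain_forces(pos, 0, 3)
f_right = domain_forces(pos, 3, 6)

# serial reference computed on the whole chain
f_ref = toy_forces(pos)
```

Because the toy interaction only reaches one neighbor, `nghost=1` is a sufficient communication cutoff here and the concatenated per-domain forces reproduce the serial result exactly; with too few ghosts, the edge forces would be wrong, which is the divergence one would probe with the different-decomposition tests above.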

You could also compare the results to a similar calculation with GULP.

That won’t make a difference. GPU support requires additional programming. The fact that LAMMPS has support for GPU acceleration does not mean that all of LAMMPS is GPU accelerated. On the contrary, all GPU accelerated parts had to be explicitly programmed as GPU accelerated versions of the CPU base code. Bottom line: this more than doubles the programming effort, and it requires quite a few additional programming skills for writing GPU accelerated code (I have been there myself; I have even taught it at workshops and tutorials).

Multi-replica calculations with Plumed2 (or COLVARS, for that matter) do not require any special features in the hosting MD code. You just run the same input multiple times and indicate in the collective-variable input file that you want to do a multi-replica run and where to find the data from the other replicas. Also, it would be far easier to add support for Plumed to GULP: Plumed is designed to be fairly independent and has been interfaced with many MD codes. Just look at fix_plumed.cpp and fix_plumed.h in LAMMPS; most of what they do is process the command line, feed info to Plumed, call Plumed, and retrieve the resulting data, converting it so that LAMMPS can use it.
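To make the multi-replica point concrete, here is a minimal sketch of how the pieces fit together in LAMMPS (file names are invented for illustration; check the fix plumed and -partition documentation for the exact syntax):

```
# launch 4 replicas of the same input, one per partition:
#   mpirun -np 4 lmp -partition 4x1 -in in.system

# inside in.system, hand control of biases/CVs over to Plumed:
fix fplmd all plumed plumedfile plumed.dat outfile plumed.log
```

The multi-replica logic itself (e.g. bias exchange, where to find the other replicas' data) then lives entirely in plumed.dat, not in the MD code's input.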
