Element “X” defined in the ReaxFF force field file causes a problem when running LAMMPS?

I’m currently running LAMMPS with a new ReaxFF forcefield that was recently published by Adri’s team. I incorporated Nickel values from a previous forcefield at the end of the elements section in the new forcefield file. However, I’m encountering an issue with an element named “X” in the forcefield file.

This element “X” is defined at the top of the forcefield file, but it doesn’t appear to have corresponding values assigned throughout the rest of the file, judging by its element number. As a result, when I run LAMMPS, the simulation skips the “X” element and instead processes Nickel. This leads to an error during the pair_coeff command, with the following message:

ERROR on proc 0: Not a valid integer number: ‘0.0100’ (…/reaxff_ffield.cpp:584)
Last command: pair_coeff * * ${FF}.ff C H O N B Al Si Cl X Ni

Has anyone else experienced similar issues when using a forcefield that includes an “X” element? If so, how did you resolve this, and is there a recommended way to handle such undefined elements in the forcefield file?

The presence of atom parameters for element X is not a problem (some parametrization files bundled with LAMMPS also have it). Your modified parametrization file must have incorrect syntax.
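
If you want to locate where your edits broke the format, a plain line-count check of the atom section is usually enough. Below is a minimal sketch in Python, assuming the conventional layout of these files (one comment line, a general-parameter section consisting of a count line plus that many lines, then an atom section whose count line is followed by three descriptor lines and four lines per element); the filename ffield_modified.reax is only a placeholder for your edited file.

```python
# Minimal sketch: list the elements declared in the atom section of a
# ReaxFF parameter file and peek at the token that should be the bond count.
# Layout assumptions as stated above; "ffield_modified.reax" is a placeholder.

def check_atom_section(path="ffield_modified.reax"):
    with open(path) as f:
        lines = f.readlines()

    n_general = int(lines[1].split()[0])       # general-parameter count line
    atom_hdr = 2 + n_general                   # index of the atom count line
    n_atoms = int(lines[atom_hdr].split()[0])  # declared number of elements

    first_atom = atom_hdr + 4                  # skip the 3 descriptor lines
    symbols = [lines[first_atom + 4 * i].split()[0] for i in range(n_atoms)]

    print(f"declared elements: {n_atoms}")
    print("symbols found    :", " ".join(symbols))

    # if the atom section is consistent, the next token is an integer
    # (the bond count); a value like 0.0100 means the blocks are misaligned
    print("token where the bond count should be:",
          lines[first_atom + 4 * n_atoms].split()[0])

if __name__ == "__main__":
    check_atom_section()
```

If the last line prints something like 0.0100 instead of an integer, the atom section does not contain as many four-line blocks as its count line declares, which is exactly the kind of mismatch that produces the error you quoted.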

Also, just adding another element from another parametrization is almost always a horrible idea; this forum is full of threads on this topic.


This is a very, very, VERY bad idea. You are going from a vetted potential from a credible source to an untested parameterization that is very likely bogus.

This statement makes no sense to me and is an indication that you may have no understanding of the exact format of the ReaxFF force field files. This, again, is a strong indication that you should not be doing what you are doing. It is a very, very, VERY bad idea.

That is a consequence of a violation of the ReaxFF parameter file format: the reader expects an integer count at the start of each section, so when the number of lines in a section no longer matches what its header declares, a floating-point parameter value ends up where the next count is expected. Did I mention that changing those files the way you did is a very, very, VERY bad idea?

Just ignore them. As you should ignore all elements in the parameter file that are not documented in the publication describing the specific parameterization. People that develop ReaxFF potentials have a habit of modifying existing parameter files, possibly replacing some elements, but not removing the ones that are not part of the parameterization.

Thank you for your feedback, and I understand your concerns regarding the modification of ReaxFF parameter files. However, I would like to clarify my approach and the rationale behind it.

I’m fully aware that directly altering forcefield files carries risks, particularly when integrating parameters from different sources. My decision to incorporate Nickel values from a previously validated forcefield into the new one published by Adri’s team was not made lightly. This approach was taken specifically to test how the new parameterization for elements like Oxygen and Nitrogen interacts with Nickel, since nickel parameters were not included in the recent publication.

The rationale behind this experiment stems from ongoing research comparing different parameter sets to observe any significant deviations in simulation outcomes. The old Nickel parameters have been previously validated in simulations involving Nickel, Oxygen, and Nitrogen, and our goal was to assess whether the updated forcefield with new O-O, N-O-N, and N-N-N interactions would produce consistent results or reveal any novel interactions when paired with the existing Nickel parameters.

I would really like to see the two original ReaxFF files and your “zombie” file. The error message you report suggests that you were not following the correct format when writing the combined file, and I am very curious to see what edits exactly you did.

As a person with some experience in force field parameterization (just not for ReaxFF), I cannot agree with your reasoning. However well you did your edits, your combined parameter file must be inconsistent, and that means it is basically useless. You may get some favorable results, but that would be by chance and not by design. I’ve seen people doing similar mix-and-matching, ignoring common sense and best practices for force field parameterization, and arguing away concerns the same way you do. In the end, you can do whatever you want, but from my personal perspective you are wasting your time, and I seriously doubt that any results from such a zombie potential can be published unless you get lucky with your reviewers.

If you want an improved parameterization that includes missing elements, my suggestion is to collaborate with an expert in ReaxFF parameterization to create the kind of file you need. That way you don’t rely on chance, but on experience.

With the same reasoning you could just employ an evolutionary algorithm to make random changes to any parameters in your force field file and then employ some kind of “fitness function” based on your simulation results for the next set of “mutations”. I am certain that some people have tried something along those lines, but the fact that you don’t hear about it and people invest their energy into machine learning instead speaks rather loudly to me.
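
For what it is worth, the loop described above is nothing more than mutate, evaluate, select, repeat. A toy sketch in Python (the fitness function is a made-up stand-in for whatever observable you would extract from a simulation; this is not tied to any real ReaxFF fitting code):

```python
# Toy illustration of a mutate/evaluate/select loop. Not a recommendation
# and not connected to any ReaxFF tooling; the objective is a placeholder.
import random

def fitness(params):
    # placeholder objective: pretend the "right" value of every parameter is 1.0
    return -sum((p - 1.0) ** 2 for p in params)

def evolve(n_params=5, population=20, generations=50, sigma=0.1):
    pop = [[random.uniform(0.0, 2.0) for _ in range(n_params)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 4]          # selection
        pop = [[p + random.gauss(0.0, sigma)        # mutation
                for p in random.choice(survivors)]
               for _ in range(population)]
        pop[0] = survivors[0]                       # keep the best unchanged
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(evolve())
```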

Thank you for your detailed response. I appreciate your candid feedback and your concerns about the approach I’ve taken. I understand the risks associated with combining forcefields and the potential for inconsistency in the resulting parameter file.

I want to clarify that I’ve already consulted with Dr. Jeffrey Comer about this approach. Following his advice, I made the necessary adjustments to the ReaxFF file, particularly ensuring that the element index numbers from the previous forcefield were correctly matched to the new one where applicable, and that the totals declared for the bond, angle, and other sections match the number of entries actually present. These changes resolved the initial issue, and I am currently running simulations with this modified forcefield.
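
To illustrate, a crude way to double-check that the declared section totals match the entries actually present could look like the following sketch. It is purely a line-count walk under the usual layout assumptions (atom section: three descriptor lines plus four lines per entry; bond section: one descriptor line plus two lines per entry; one line per entry for off-diagonal terms, angles, torsions, and hydrogen bonds) and says nothing about whether the mixed parameters are physically sensible; ffield_modified.reax is a placeholder name.

```python
# Rough consistency check of the section counts in an edited ReaxFF
# parameter file (layout assumptions as stated above; purely a line count,
# no statement about the physics). "ffield_modified.reax" is a placeholder.

def walk_sections(path="ffield_modified.reax"):
    with open(path) as f:
        lines = f.readlines()

    pos = 1  # skip the leading comment line

    def read_count(label):
        nonlocal pos
        if pos >= len(lines):
            raise SystemExit(f"{label}: file ended before its count line")
        token = lines[pos].split()[0]
        if not token.isdigit():
            raise SystemExit(f"{label}: expected an integer count on line "
                             f"{pos + 1}, found '{token}'")
        pos += 1
        print(f"{label:<20s} {token}")
        return int(token)

    # (label, descriptor lines after the count line, lines per entry)
    sections = [
        ("general parameters", 0, 1),
        ("atoms",              3, 4),
        ("bonds",              1, 2),
        ("off-diagonal terms", 0, 1),
        ("angles",             0, 1),
        ("torsions",           0, 1),
        ("hydrogen bonds",     0, 1),
    ]
    for label, n_descriptor, lines_per_entry in sections:
        n_entries = read_count(label)
        pos += n_descriptor + n_entries * lines_per_entry

    if pos <= len(lines):
        print(f"walked {pos} of {len(lines)} lines; counts look self-consistent")
    else:
        print("file ends before the declared number of entries is reached")

if __name__ == "__main__":
    walk_sections()
```

When the walk stops on a non-integer token, that is the first place where a section count and its entries disagree.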

It’s important to note that we do not plan to publish the results from these simulations. The primary objective here is to use these simulations as a learning tool, helping us understand the deeper layers of the parameters involved. While many of these simulations may not be successful in terms of generating publishable results, they are invaluable in feeding our knowledge and improving our understanding of the system.

Regarding the use of machine learning for forcefield development, while it is a promising and rapidly evolving area, it is also not without its challenges. Creating a reliable machine learning potential is an intensive process that demands a significant amount of high-quality data. The accuracy of these models is heavily dependent on the quality and representativeness of the training data. When applied to new datasets that differ from the training data, these models may not maintain the same level of accuracy and can sometimes exhibit weak performance. This underscores the importance of careful data selection and validation when developing machine learning potentials, as well as the need for continued development and refinement of these methods.

As for the evolutionary algorithm approach, I agree with your skepticism. While such methods might offer intriguing possibilities, they lack the reliability and theoretical grounding that more traditional and well-understood methods offer. It’s clear that a more structured approach, potentially involving machine learning or collaboration with ReaxFF experts, would yield more scientifically robust results.

“All models are wrong, some are useful.” – George Box

The group of Hartke in Germany is among those who have successfully employed evolutionary algorithms to derive ReaxFF parameterizations (see https://onlinelibrary.wiley.com/doi/abs/10.1002/jcc.23382 for instance). People have tried it, and it works 🙂