LAMMPS ontology

As far as I know there are no “magical consensual recipes”, and without a grasp of what you are trying to achieve, some learning about the system, and general knowledge about MD, you might end up applying inappropriate heuristics without realising it. A paper I like that illustrates how tricky it is to get consensual heuristics on convergence is this one. Even if the method tested is unreliable, it is interesting to see how the proportion of opinions changes with the experience of the people surveyed. This makes me think that devising general convergence methods that would gain broad approval is trickier than it looks. I really have a hard time figuring out what a “molecular dynamics ontology” would or would not cover.
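To make the disagreement concrete: even something as simple as block averaging, a common first-pass convergence check, involves judgment calls (how many blocks, what spread counts as “converged”). Here is a minimal sketch on synthetic data; the observables and thresholds are illustrative assumptions, not a recommendation from any of the papers above.

```python
import numpy as np

def block_averages(x, n_blocks=5):
    """Split a time series into equal blocks and return each block's mean.

    If the block means agree within their scatter, the running average is
    plausibly converged; a systematic drift across blocks suggests it is not.
    """
    x = np.asarray(x, dtype=float)
    usable = (len(x) // n_blocks) * n_blocks  # drop the trailing remainder
    blocks = x[:usable].reshape(n_blocks, -1)
    return blocks.mean(axis=1)

rng = np.random.default_rng(0)
# Hypothetical "equilibrated" observable: stationary noise around 1.0
stationary = 1.0 + 0.05 * rng.standard_normal(10_000)
# Hypothetical slowly relaxing observable: decaying drift plus the same noise
drifting = 1.0 + np.linspace(0.5, 0.0, 10_000) + 0.05 * rng.standard_normal(10_000)

# Peak-to-peak spread of the block means: small for the stationary series,
# large for the drifting one.
print(np.ptp(block_averages(stationary)))
print(np.ptp(block_averages(drifting)))
```

Even this toy check shows the ambiguity: the answer depends on the block count and on how much spread you are willing to call noise, which is exactly where opinions diverge.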

From another perspective, Wong-ekkabut and Karttunen’s provocative review reminds us that it is not only about reproducible results but also about having a good grasp of the physics and of what to expect from simulations. So “meaningful results” is tricky in the sense that playing around with parameters might lead to results that look meaningful while being physically meaningless. To quote their conclusion:

No matter how simple the simulation, the user must always check and validate all the parameter (as well as protocol) choices even if they have been used extensively before. Breaking the second law of thermodynamics like in Case Studies 1 and 2 demonstrates that anything is indeed possible and that such unphysical results may look very exciting. As with many other things in life, if something looks too good, it most likely is. This also explains why so many bad, uninformed, or sometimes old, choices still remain in simulation protocols: validation is time consuming and thankless. […] In the worst case, something appears to be wrong or suspicious and finding the origin may be extremely time consuming and tedious as anyone who has tried it knows very well.

As they show (and this goes further in @srtee’s direction), the literature is already very dense with tests of parameters and methods, but there appear to be more incentives to publish non-physical results than to develop a good grasp of MD. I think there might still be something missing at the community level that would sit in between normalised practices (tests, reproducible workflows, general discussion of methods, etc.) and (new) users. But I know a lot of people in the community do their best to tackle this issue.