Hi OpenKIM folks!
I would like to make a suggestion, at the risk of suggesting something obvious or already planned.
As I understand it, tests in OpenKIM assess the quality of models by computing quantities that can be compared against experiments or quantum calculations. Just as there are model drivers and models, with most models probably implemented through model drivers, I understood that there will be a similar two-level structure for tests (simulators and tests?), so that all the nitty-gritty of building neighbor lists etc. can be taken care of once and for all, making the tests themselves relatively simple to write.
I would like to suggest extending this to a three-level structure: simulators, tests, and code-tests. The purpose of a code-test is to check the *implementation* of the models, tests, and simulators, and in particular to catch cases where something unexpectedly stops working due to apparently unrelated changes.
A code-test could be a simple way of specifying a model, a test, an expected result, and a tolerance. All code-tests should then be run automatically at regular intervals, and a red flag raised whenever a result falls outside the tolerance. Preferably, a contributor should also be able to run all code-tests relating to a given model, model driver, test, or simulator before submitting a new version.
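Purely as an illustration, such a code-test could be a small key-value file; every name and number below is hypothetical, not an actual KIM item:

    # Regression check: FCC lattice constant from a hypothetical EAM model
    model     = EAM_Cu_example           # model to exercise
    test      = FCCLatticeConstant_Cu    # test to run it through
    expected  = 3.615                    # expected result (angstrom)
    tolerance = 1.0e-4                   # allowed absolute deviation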
This could probably be implemented with the code-tests as simple text files and a job that runs all of them. It would, however, require that tests have a standard way of getting their input (the model) and reporting their result. I guess such a standard will be needed anyway for the OpenKIM web infrastructure.
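To make the idea concrete, here is a minimal sketch in Python of what such a job could look like. It assumes the hypothetical file format above, plus a convention (invented for this sketch) that a test is an executable taking the model name as an argument and printing a single number on stdout; none of this reflects actual OpenKIM interfaces.

    import glob
    import subprocess

    def parse_codetest(path):
        """Read a 'key = value' code-test file, ignoring comments."""
        spec = {}
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()
                if line:
                    key, _, value = line.partition("=")
                    spec[key.strip()] = value.strip()
        return spec

    def run_codetest(spec):
        """Run one test against one model; True if the result is in tolerance."""
        # Hypothetical convention: the test prints its result on stdout.
        proc = subprocess.run([spec["test"], spec["model"]],
                              capture_output=True, text=True, check=True)
        result = float(proc.stdout)
        return abs(result - float(spec["expected"])) <= float(spec["tolerance"])

    if __name__ == "__main__":
        failures = 0
        for path in sorted(glob.glob("codetests/*.ct")):
            if run_codetest(parse_codetest(path)):
                print("PASS", path)
            else:
                print("FAIL", path)  # the red flag
                failures += 1
        raise SystemExit(1 if failures else 0)

The same script could also accept a model, model driver, test, or simulator name as a filter, so a contributor can run just the relevant code-tests before submitting a new version.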