Load a pre-trained model locally in atomate2?

Hi,

I am trying to run a PhononMaker calculation in atomate2 using M3GNet. Since our HPC cluster has DNS/firewall restrictions, the pretrained model cannot be downloaded directly from the matgl GitHub repository.

Additionally, I want to use the most recent version of M3GNet (MatPES-PBE-v2025.1-PES) for force and energy predictions. I tried several approaches, and after checking the source code, I overrode the static maker as follows:

from atomate2.forcefields.jobs import ForceFieldStaticMaker

static_maker = ForceFieldStaticMaker(
    force_field_name="M3GNet",
    # "model_name" is a placeholder for the model name or path
    calculator_kwargs={"path": "model_name"},
)

This runs fine on my local PC, but I have a few questions:

  1. The code works, but it still fetches the model from the GitHub repository. Is the model name actually overridden, or is the default model (MLFF.M3GNet) being used?
  2. When I replace it with a local path to the model, it does not work. How can I properly point it to a local model file?

Any guidance or example snippets would be very helpful.

Thanks!

Hey @Suhas_Adiga, you should be able to specify an absolute path to a model file using path as a kwarg. Can you try something like:

from pathlib import Path

calculator_kwargs = {"path": Path("/some/path/to/model.pt").parent.absolute()}

path needs to point to the directory containing model.pt, not to model.pt itself.
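Putting it together, something like the following should work (a minimal sketch; the /scratch/models/... directory is a hypothetical location where you've copied model.pt and its companion files after downloading them on a machine with internet access):

from pathlib import Path
from atomate2.forcefields.jobs import ForceFieldStaticMaker

# hypothetical directory containing model.pt plus the other files
# matgl saves alongside it; adjust to wherever you copied the model
model_dir = Path("/scratch/models/MatPES-PBE-v2025.1-PES").absolute()

static_maker = ForceFieldStaticMaker(
    force_field_name="M3GNet",
    calculator_kwargs={"path": model_dir},
)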

The force_field_name kwarg is only used internally in atomate2 to figure out which external ML forcefield library is needed. Passing M3GNet tells atomate2 to use matgl; within matgl you can then use any of the architectures (M3GNet, CHGNet, TensorNet, etc.) plus a set of model parameters to define a forcefield.
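As a quick sanity check outside atomate2, you can try loading the local copy with matgl directly (a sketch, using the same hypothetical directory as above):

import matgl

# load_model accepts a known pretrained model name or a path to a
# directory containing a serialized model (model.pt, state.pt, etc.)
potential = matgl.load_model("/scratch/models/MatPES-PBE-v2025.1-PES")
print(type(potential))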

Hi @Aaron_Kaplan,

This works smoothly. I had previously tried overriding matgl.load_model(path) in utils.py, but that didn't work. Thanks a lot for providing such a simple solution!
