Update the VASP_CMD setting dynamically

According to the documentation here, VASP_CMD should be set like the following:

VASP_CMD: mpirun -n 16 vasp_std

However, when I submit jobs through a queue system such as SLURM, I want the command to pick up the resources requested in the SLURM script (e.g. the number of tasks). I don't know how to set this parameter flexibly so that it follows each job's settings.

Regards,
Zhao

Got it. According to the description here:

The default way to modify these is to modify ~/.atomate2.yaml. Alternatively, the environment variable ATOMATE2_CONFIG_FILE can be set to point to a yaml file with atomate2 settings.

Lastly, the variables can be modified directly through environment variables by using the “ATOMATE2” prefix. E.g. ATOMATE2_SCRATCH_DIR = path/to/scratch.
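To illustrate the first option, a static setting can live in the config file. This is a sketch (the command and path are placeholders, not recommendations):

```yaml
# ~/.atomate2.yaml -- static settings, suitable when the MPI
# command does not need to change from job to job
VASP_CMD: mpirun -n 16 vasp_std
SCRATCH_DIR: /path/to/scratch
```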

So, in this case, the setting can be overridden directly through an environment variable in the SLURM script:

export ATOMATE2_VASP_CMD="mpirun -np $SLURM_NTASKS vasp_std"
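As a quick sanity check (a sketch; SLURM_NTASKS is set by hand here, whereas SLURM exports it automatically inside a running job), double quotes make the shell expand the variable at export time, so the stored command always matches the allocation:

```shell
#!/bin/sh
# Simulate the variable SLURM exports inside a running job
SLURM_NTASKS=16

# Double quotes expand $SLURM_NTASKS now, so the stored command
# reflects the tasks actually allocated to this job
export ATOMATE2_VASP_CMD="mpirun -np $SLURM_NTASKS vasp_std"

echo "$ATOMATE2_VASP_CMD"
# -> mpirun -np 16 vasp_std
```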

See here for a related example:

_fw_name: CommonAdapter
_fw_q_type: SLURM
rocket_launch: rlaunch -w /path/to/fw_config/my_fworker.yaml singleshot
nodes: 2
walltime: 00:30:00
account: <account>
job_name: my_firework
qos: regular
pre_rocket: |
  module load vasp
  export ATOMATE2_VASP_CMD="srun -N 2 --ntasks-per-node=24"

But the export line in the above example is incomplete (it omits the VASP executable), and it should be written as follows:

export ATOMATE2_VASP_CMD="srun -N 2 --ntasks-per-node=24 vasp_std"

Regards,
Zhao

See Mismatched dumped taskdoc and database state · Issue #231 · Matgenix/jobflow-remote · GitHub for the related discussion.