Add finished VASP calculation to database

Hi, it has happened once or twice that my VASP calculation finishes fine but the workflow fails at the read-in stage. Since I don’t always want to rerun the entire calculation, I wanted to write a function that just reads the finished VASP job and adds it to the MongoDB. Does anyone already have a function like that, or suggestions on how best to do it?

I discussed with my colleague that we could have a maker that only reads in the calculation by taking the directory of the successful calculation or its fw_id as an argument.

Thanks for your help!

Hi there!

Do you mind sharing some more details, such as which workflow you’re using and which version of atomate (1 or 2)?

At a general level, I’d check which function is responsible for adding the VASP output to the db in the workflow you’re using. In atomate1 there is something like VaspToDbTaskDrone that’s responsible for that. One idea would be to run that function manually in the folders where the output was not uploaded. Alternatively, again for atomate1, you could try to rerun at the task level. This could work if the last task is the one responsible for uploading the output.
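For the first idea, something along these lines might work with atomate1’s VaspDrone (the drone in atomate.vasp.drones; untested, and the paths are placeholders you’d need to adapt):

```python
from atomate.vasp.drones import VaspDrone
from atomate.vasp.database import VaspCalcDb

# Parse the finished calculation folder into a task document
drone = VaspDrone()
task_doc = drone.assimilate("/path/to/finished/calc")

# Insert the task document into the tasks collection configured in db.json
db = VaspCalcDb.from_db_file("db.json", admin=True)
db.insert_task(task_doc, use_gridfs=True)
```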

For atomate2, I’m not sure how you could do that. Maybe others can chime in.

Hope this helps.
FR

Sure. I’m using atomate2, and the workflows are mostly StaticMaker() or DoubleRelaxMaker().

I gathered some information to answer your question.

In atomate2 this should be the function used to parse the output, and you could potentially reuse it manually for all the directories where the job stopped,

which internally uses this in emmet.
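The exact location has moved around between versions, but assuming a recent emmet-core where the document class is TaskDoc (in older atomate2 versions it was TaskDocument in atomate2.vasp.schemas.task), the manual parsing step would look roughly like this:

```python
from emmet.core.tasks import TaskDoc

# Build the task document from a finished VASP directory,
# just as the atomate2 job would do after a successful run
task_doc = TaskDoc.from_directory("/path/to/finished/calc")
```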

The upload is done directly by jobflow into the db specified in your settings.
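So if you parse manually, you would also need to put the document into jobflow’s store yourself. A rough sketch (untested; the uuid/index wrapping is my assumption about how jobflow lays out its documents, so double-check against a normally completed job):

```python
from jobflow import SETTINGS

store = SETTINGS.JOB_STORE
store.connect()

# jobflow keys its documents by (uuid, index); here we make one up
# for the manually parsed calculation
doc = {
    "uuid": "manual-readd-0001",  # placeholder, not a real job uuid
    "index": 1,
    "output": task_doc.dict(),
}
store.update(doc, key=["uuid", "index"])
```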

Alternatively, you could simply have a maker identical to the one you’re using but without the lines that write the input and run VASP, so that it just does the parsing and uploading. I think you could either implement your own maker and run it in the right directories (see the sketch below), or, in a hackier way, comment out the lines that run VASP and do a rerun in the same folder (which should be possible with FireWorks with something like --task-level and --prev-dir; check these flags).
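For the first option, a minimal sketch of a parse-only job (untested; parse_finished_vasp is just a name I made up, and I’m assuming the TaskDoc import from above):

```python
from jobflow import job, run_locally
from emmet.core.tasks import TaskDoc

@job
def parse_finished_vasp(dir_name: str):
    # Only parse; jobflow takes care of uploading the returned
    # document to the configured store, as for a normal VASP job
    return TaskDoc.from_directory(dir_name)

readd = parse_finished_vasp("/path/to/finished/calc")
run_locally(readd)
```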

Note that if you run another maker or manually parse and upload the output into the db, your flow’s status won’t be updated. With the second option of rerunning the hacked maker, the state of the job/flow would also be updated.

Please don’t take this for granted :wink: I think it could work, but I haven’t tested it myself.

Hope this helps!
FR