Slab relaxation and MongoDB output missing

Hello!

I have started learning how to use atomate, especially the surface/adsorption calculation workflows for now.
I ran into a problem with the get_wf_slab function from the tutorial “High-throughput workflows for determining adsorption energies on solid surfaces” (in the notebook it is called get_wf_surface, but I suppose that comes from an older version of atomate).
While the workflow is created successfully and the calculations also finish, I cannot find the output in the tasks collection of my MongoDB database. The task.json file is still sitting in my calculation directory.
For the MgO band structures I had no such problem and the outputs landed in tasks. I tried to compare both workflows but could not find what differs in the get_wf_slab function. It seems to just call OptimizeFW like the MgO workflow (or the Si tutorial, which also works fine).

I am just optimizing a Ni(111) slab without an adsorbate. Did I miss something obvious? ><

from pymatgen import Structure, Lattice, MPRester, Molecule
from pymatgen.analysis.adsorption import *
from pymatgen.core.surface import generate_all_slabs
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
from matplotlib import pyplot as plt

# Note that you must provide your own API Key, which can
# be accessed via the Dashboard at materialsproject.org
mpr = MPRester(APIKey)

from fireworks import LaunchPad
lpad = LaunchPad.auto_load()
#lpad.reset('', require_password=False)

from atomate.vasp.workflows.base.adsorption import get_wf_slab

struct = mpr.get_structure_by_material_id("mp-23") # fcc Ni
struct = SpacegroupAnalyzer(struct).get_conventional_standard_structure()
slabs = generate_all_slabs(struct, 1, 5.0, 2.0, center_slab=True)
slab_dict = {slab.miller_index:slab for slab in slabs}

ni_slab_111 = slab_dict[(1, 1, 1)]
wf = get_wf_slab(ni_slab_111) 
lpad.add_wf(wf)

Regarding the slab relaxation: if I want to optimize only the top layers, for example, I think I need to create the corresponding POSCAR with pymatgen/ASE and use it with atomate.
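To sketch that idea (plain Python only; the helper name and the 0.5 fractional-height cutoff are arbitrary choices of mine, not a pymatgen or atomate API): compute per-site relaxation flags from the fractional z coordinate, then attach them as a selective dynamics site property before writing the POSCAR.

```python
def selective_dynamics_flags(frac_z_coords, z_cutoff=0.5):
    """Return per-site [x, y, z] relaxation flags for a slab.

    Sites with fractional z above z_cutoff are free to relax (True);
    sites below it are frozen (False), mimicking a fixed bottom region.
    """
    return [[z > z_cutoff] * 3 for z in frac_z_coords]


# Example: a 4-layer slab where the top two layers relax and the
# bottom two stay fixed.
flags = selective_dynamics_flags([0.30, 0.40, 0.55, 0.65], z_cutoff=0.5)

# With pymatgen, one would then do something like (not run here):
#   slab.add_site_property("selective_dynamics", flags)
#   Poscar(slab).write_file("POSCAR")
# ASE offers the equivalent via a FixAtoms constraint.
```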

Hi Florian, you might have to explicitly pass the db_file. I think you can pass the auto-configured db file using:

from atomate.vasp.config import DB_FILE

and then

wf = get_wf_slab(ni_slab_111, db_file=DB_FILE)

Thank you very much for your fast answer! It was indeed the solution.
I had seen this parameter, but since OptimizeFW was called just as in the band structure workflow, I didn’t think it would be the solution… Next time I will try it just in case!

Hi!

I ran into a new problem when I tried to calculate the surface energy of Ni(111) (or any Miller index) with the procedure described in one of the matgenb notebooks.

The problem occurred when I tried to include the bulk optimization with the corresponding option (include_bulk_opt). The first firework (for the bulk) finished successfully but the second one fizzled. In fact, looking at the files, it never started and the input files correspond to the bulk ones. In the log file, after creating the launcher directory and copying the files, nothing happens apart from checking for FWs every 60 s.

2020-09-10 11:03:43,343 INFO Hostname/IP lookup (this will take a few seconds)
2020-09-10 11:03:45,236 INFO Created new dir /home/f-gimbert/SLAB111/launcher_2020-09-10-02-03-45-236264
2020-09-10 11:03:45,236 INFO Launching Rocket
2020-09-10 11:03:45,406 INFO RUNNING fw_id: 5 in directory: /home/f-gimbert/SLAB111/launcher_2020-09-10-02-03-45-236264
2020-09-10 11:03:45,515 INFO Task started: {{atomate.vasp.firetasks.write_inputs.WriteVaspFromIOSet}}.
2020-09-10 11:03:45,598 INFO Task completed: {{atomate.vasp.firetasks.write_inputs.WriteVaspFromIOSet}}
2020-09-10 11:03:45,614 INFO Task started: {{atomate.vasp.firetasks.run_calc.RunVaspCustodian}}.
2020-09-10 12:50:11,935 INFO Task completed: {{atomate.vasp.firetasks.run_calc.RunVaspCustodian}}
2020-09-10 12:50:11,956 INFO Task started: {{atomate.common.firetasks.glue_tasks.PassCalcLocs}}.
2020-09-10 12:50:11,957 INFO Task completed: {{atomate.common.firetasks.glue_tasks.PassCalcLocs}}
2020-09-10 12:50:11,978 INFO Task started: {{atomate.vasp.firetasks.parse_outputs.VaspToDb}}.
2020-09-10 12:50:11,979 INFO atomate.vasp.firetasks.parse_outputs PARSING DIRECTORY: /home/f-gimbert/SLAB111/launcher_2020-09-10-02-03-45-236264
2020-09-10 12:50:11,979 INFO atomate.vasp.drones Getting task doc for base dir :/home/f-gimbert/SLAB111/launcher_2020-09-10-02-03-45-236264
2020-09-10 12:50:18,632 INFO atomate.vasp.drones Post-processing dir:/home/f-gimbert/SLAB111/launcher_2020-09-10-02-03-45-236264
2020-09-10 12:50:18,632 WARNING atomate.vasp.drones Transformations file does not exist.
2020-09-10 12:50:18,739 INFO atomate.vasp.drones Post-processed /home/f-gimbert/SLAB111/launcher_2020-09-10-02-03-45-236264
2020-09-10 12:50:19,096 INFO atomate.utils.database Inserting atlas06:/home/f-gimbert/SLAB111/launcher_2020-09-10-02-03-45-236264 with taskid = 11
2020-09-10 12:50:19,197 INFO atomate.vasp.firetasks.parse_outputs Finished parsing with task_id: 11
2020-09-10 12:50:19,201 INFO Task completed: {{atomate.vasp.firetasks.parse_outputs.VaspToDb}}
2020-09-10 12:50:19,399 INFO Rocket finished
2020-09-10 12:50:19,444 INFO Created new dir /home/f-gimbert/SLAB111/launcher_2020-09-10-03-50-19-443795
2020-09-10 12:50:19,444 INFO Launching Rocket
2020-09-10 12:50:19,645 INFO RUNNING fw_id: 4 in directory: /home/f-gimbert/SLAB111/launcher_2020-09-10-03-50-19-443795
2020-09-10 12:50:19,677 INFO Task started: {{atomate.vasp.firetasks.glue_tasks.CopyVaspOutputs}}.
2020-09-10 12:50:19,892 INFO Task completed: {{atomate.vasp.firetasks.glue_tasks.CopyVaspOutputs}}
2020-09-10 12:50:19,910 INFO Task started: {{atomate.vasp.firetasks.write_inputs.WriteTransmutedStructureIOSet}}.
2020-09-10 12:50:20,100 INFO Rocket finished
2020-09-10 12:50:20,357 INFO Sleeping for 60 secs
2020-09-10 12:51:20,418 INFO Checking for FWs to run...
2020-09-10 12:51:20,483 INFO Sleeping for 60 secs

Another very weird thing: when I ran the same workflow without the bulk option (for the (2, 1, 0) surface), the workflow finished successfully (COMPLETED), but the same lines appear in the output log (Checking for FWs to run / Sleeping for 60 secs) and the job is still sitting in the queue (although finished!).

That’s the first time something like this has happened for a slab (I never tried with the bulk before). I recently modified my MongoDB database, so it could be a configuration problem.

When I run lpad get_wflows, I get this:

{
    "state": "COMPLETED",
    "name": "Ni1_(2, 1, 0) slab workflow--1",
    "created_on": "2020-09-10T00:41:05.420000",
    "states_list": "C"
},
{
    "state": "FIZZLED",
    "name": "Ni1_(1, 1, 1) slab workflow--4",
    "created_on": "2020-09-10T02:03:31.639000",
    "states_list": "F-C"
},
] 

The script I am using:

> from atomate.vasp.powerups import add_additional_fields_to_taskdocs
> struct = mpr.get_structure_by_material_id("mp-23") # fcc Ni
> struct = SpacegroupAnalyzer(struct).get_conventional_standard_structure()
> slabs = generate_all_slabs(struct, 1, 10.0, 10.0, center_slab=True)
> slab_dict = {slab.miller_index:slab for slab in slabs}
> 
> ni_slab_111 = slab_dict[(1, 1, 1)]
> wf = get_wf_slab(ni_slab_111, include_bulk_opt=True, db_file='/home/f-gimbert/atomate/config/db.json')
> new_wf = add_additional_fields_to_taskdocs(wf, {"system": "surface"})
> lpad.add_wf(new_wf)

and then qlaunch singleshot.

What did I miss this time?

I kept looking for the origin of the problem and found this error in one of the error files. Is it a version mismatch (atomate or fireworks?):

/home/f-gimbert/miniconda3/lib/python3.7/site-packages/atomate/utils/database.py:51: DeprecationWarning: count is deprecated. Use Collection.count_documents instead.
if self.db.counter.find({"_id": "taskid"}).count() == 0:
/home/f-gimbert/miniconda3/lib/python3.7/site-packages/fireworks/core/rocket.py:52: PendingDeprecationWarning: isAlive() is deprecated, use is_alive() instead
while not stop_event.is_set() and master_thread.isAlive():

Traceback (most recent call last):
File "/home/f-gimbert/miniconda3/lib/python3.7/site-packages/fireworks/core/rocket.py", line 262, in run
m_action = t.run_task(my_spec)
File "/home/f-gimbert/miniconda3/lib/python3.7/site-packages/atomate/vasp/firetasks/write_inputs.py", line 419, in run_task
t_obj = t_cls(**transformation_params.pop(0))
TypeError: __init__() got an unexpected keyword argument 'species'
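For what it’s worth, that TypeError is the typical signature of a version skew: the transformation parameters were serialized with one set of keyword names, but the transformation class that finally receives them expects different ones. A tiny self-contained reproduction of the failure mode (both class names below are invented for illustration, not real pymatgen classes):

```python
# An "old" transformation whose constructor accepted `species`.
class OldTransformation:
    def __init__(self, species, coords):
        self.species, self.coords = species, coords


# A "new" transformation that renamed the argument.
class NewTransformation:
    def __init__(self, site_species, coords):
        self.site_species, self.coords = site_species, coords


# Parameters serialized against the old keyword names.
params = {"species": ["Ni"], "coords": [[0, 0, 0]]}

OldTransformation(**params)  # works fine
try:
    NewTransformation(**params)  # reproduces the reported failure mode
except TypeError as exc:
    message = str(exc)

# `message` reads like:
# "__init__() got an unexpected keyword argument 'species'"
```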

I am still trying to understand why the slab workflow fizzles, but I couldn’t find any clue. I even found a new problem during my tests on a Ni slab with (211) orientation :sweat_smile:

Problem 1/ get_wf_slab with include_bulk_opt=True

I reinstalled all the packages, but some workflows still fizzle. (Even weirder, Ni(110) fizzled when started within a batch of workflows but completed successfully when run alone. Maybe a problem with my launch command?)
After another test with the Ni(211) orientation, I realized that the bulk calculation finished successfully but the slab relaxation never started and fizzled. I have some input files in the launch directory (corresponding to the bulk) but nothing else. I found this error message in the error file:

/home/f-gimbert/miniconda3/lib/python3.7/site-packages/fireworks/core/rocket.py:52: PendingDeprecationWarning: isAlive() is deprecated, use is_alive() instead
while not stop_event.is_set() and master_thread.isAlive():
/home/f-gimbert/miniconda3/lib/python3.7/site-packages/atomate/utils/database.py:51: DeprecationWarning: count is deprecated. Use Collection.count_documents instead.
if self.db.counter.find({"_id": "taskid"}).count() == 0:
Traceback (most recent call last):
File "/home/f-gimbert/miniconda3/lib/python3.7/site-packages/fireworks/core/rocket.py", line 262, in run
m_action = t.run_task(my_spec)
File "/home/f-gimbert/miniconda3/lib/python3.7/site-packages/atomate/vasp/firetasks/write_inputs.py", line 653, in run_task
transmuter = StandardTransmuter([ts], transformations)
File "/home/f-gimbert/miniconda3/lib/python3.7/site-packages/pymatgen/alchemy/transmuters.py", line 66, in __init__
extend_collection=extend_collection)
File "/home/f-gimbert/miniconda3/lib/python3.7/site-packages/pymatgen/alchemy/transmuters.py", line 134, in append_transformation
clear_redo=clear_redo)
File "/home/f-gimbert/miniconda3/lib/python3.7/site-packages/pymatgen/alchemy/materials.py", line 146, in append_transformation
s = transformation.apply_transformation(self.final_structure)
File "/home/f-gimbert/miniconda3/lib/python3.7/site-packages/pymatgen/transformations/site_transformations.py", line 585, in apply_transformation
new_structure.add_site_property(prop, self.site_properties[prop])
File "/home/f-gimbert/miniconda3/lib/python3.7/site-packages/pymatgen/core/structure.py", line 424, in add_site_property
raise ValueError("Values must be same length as sites.")
ValueError: Values must be same length as sites.
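The last frame of that traceback is a simple length guard: pymatgen rejects a site property whose list of values does not match the number of sites in the structure. A stripped-down sketch of that check (the real method lives on pymatgen's Structure; here the structure is reduced to just a site count for illustration):

```python
def add_site_property(num_sites, values):
    """Mimic the length guard in pymatgen's Structure.add_site_property."""
    if len(values) != num_sites:
        raise ValueError("Values must be same length as sites.")
    return values


# A transformation that carries site-property values sized for a
# different structure (e.g. 4 flags applied to a 12-site slab)
# triggers exactly this error:
try:
    add_site_property(12, [True] * 4)
except ValueError as exc:
    error = str(exc)
```

So the fizzle suggests that the site properties carried along in the workflow’s transformation parameters no longer line up with the slab’s sites, which would be consistent with a pymatgen/atomate version mismatch rather than anything in the VASP run itself.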

At first, as in the Ni(110) case, I thought it was due to a conflict between different workflows running at the same time, but the workflow fizzled even when run alone. So it must be a problem before the slab calculation begins, during the preparation of the input files.

(new) Problem 2/ Ni(211) with get_wf_slab and include_bulk_opt=False

Seeing the message above, I decided to check the plain slab relaxation for Ni(211) just in case, and completely different errors appeared…

WARNING in EDDRMM: call to ZHEGV failed
ERROR:custodian.custodian:VaspErrorHandler
vasp: no process killed

or this error :

ERROR:VaspErrorHandler:WARNING: Sub-Space-Matrix is not hermitian in DAV
ERROR:VaspErrorHandler:BRMIX: very serious problems
ERROR:custodian.custodian:VaspErrorHandler
vasp: no process killed

I have no idea whether problem 1 and problem 2 are related. Problem 2 is maybe related to the k-points (or the VASP compilation?); I had this error in another case and it disappeared after decreasing the number of k-points.
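To make the k-point hunch concrete, here is a rough sketch of how a grid scales with a density setting. This only mimics, approximately, the spirit of pymatgen’s Kpoints.automatic_density; the exact formula and defaults in pymatgen may differ, so treat the numbers as illustrative only.

```python
import math


def kpoint_grid(lattice_abc, natoms, kppa):
    """Approximate a Gamma-centred k-grid from a target density.

    kppa ~ k-points per reciprocal atom: larger kppa -> denser grid.
    Illustrates the scaling only, not pymatgen's exact algorithm.
    """
    a, b, c = lattice_abc
    mult = (kppa / natoms * a * b * c) ** (1 / 3)
    return [max(1, math.floor(mult / x)) for x in (a, b, c)]


# A tall slab cell (long c axis) at two densities: lowering kppa
# coarsens the in-plane grid while the vacuum direction stays at 1.
dense = kpoint_grid((2.5, 2.5, 30.0), natoms=8, kppa=1000)
coarse = kpoint_grid((2.5, 2.5, 30.0), natoms=8, kppa=200)
```

Lowering the density coarsens the in-plane divisions, which is the kind of change that made the error disappear in my other case.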

Update:

I was able to complete a Ni(211) slab relaxation (no bulk option) with the MPRelaxSet VASP input set!