Error running MgO tutorial for atomate

Hello,

I recently installed atomate and have been running several of the suggested tutorials. I was able to run the basic Si tutorial properly: the results are appended to the MongoDB database correctly and I can recover the output as shown on the installation page.

However, when I run the MgO tutorial using option 3 (i.e. creating a Python file), the structure optimization workflow gets fizzled. When I go to the linked scratch directory to check the tarred .error and .out output files, I see that the error file is empty and the .out file contains the following:

2018-10-03 09:01:09,456 INFO Hostname/IP lookup (this will take a few seconds)
2018-10-03 09:01:09,458 INFO Launching Rocket
2018-10-03 09:01:16,567 INFO RUNNING fw_id: 4 in directory: /global/u2/d/deshpan5/atomate/test/Mgo-test/block_2018-10-03-15-50-22-070666/launcher_2018-10-03-15-52-19-468059
2018-10-03 09:01:16,611 INFO Task started: FileWriteTask.
2018-10-03 09:01:16,626 INFO Task completed: FileWriteTask
2018-10-03 09:01:16,639 INFO Task started: {{atomate.vasp.firetasks.write_inputs.WriteVaspFromIOSet}}.
2018-10-03 09:01:16,770 INFO Task completed: {{atomate.vasp.firetasks.write_inputs.WriteVaspFromIOSet}}
2018-10-03 09:01:16,772 INFO Task started: {{atomate.vasp.firetasks.run_calc.RunVaspCustodian}}.

Here are the last few lines of the FW.json file in the same directory:

      "fw_id": 4,
      "launch_dir": "/global/u2/d/deshpan5/atomate/test/Mgo-test/block_2018-10-03-15-50-22-070666/launcher_2018-10-03-15-52-19-468059",
      "host": "nid13033",
      "ip": "10.128.51.80",
      "trackers": [],
      "action": null,
      "state": "RUNNING",
      "state_history": [
        {
          "state": "RUNNING",
          "created_on": "2018-10-03T16:01:16.504409",
          "updated_on": "2018-10-03T16:01:16.504412"
        }
      ],
      "launch_id": 1
    }
  ],
  "state": "RUNNING",
  "name": "MgO-structure optimization"

Also, both OUTCAR.relax1 and OUTCAR.relax2 appear to have converged, and the job finishes well within the wall time.

I notice that nothing is left in the test directory except the scratch link and a temporary folder. Any attempt to resubmit the fizzled job gives errors.

Please let me know what might be causing this, or what I should do to identify the error.

Thank you for the tool

Siddharth

Further Updates on the problem:

I tried running the test after removing the scratch directory entry from the my_fworker.yaml file:

name: CORI
category: ''
query: '{}'
env:
  db_file: /global/homes/d/deshpan5/atomate/config/db.json
  vasp_cmd: srun -n 64 vasp_std
  scratch_dir: null

After running the workflow this way, it exited after the first optimization workflow, and I then had to make another folder to run subsequent workflows.

I used 'qlaunch singleshot' for submission, as mentioned in the tutorial.

If I submit in the same folder, I get an error of the form:

Traceback (most recent call last):
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/fireworks/core/rocket.py", line 262, in run
    m_action = t.run_task(my_spec)
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/atomate/vasp/firetasks/glue_tasks.py", line 93, in run_task
    self.copy_files()
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/atomate/vasp/firetasks/glue_tasks.py", line 126, in copy_files
    dest_path + gz_ext)
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/atomate/utils/fileio.py", line 112, in copy
    shutil.copy2(src, dest)
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/shutil.py", line 257, in copy2
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/shutil.py", line 104, in copyfile
    raise SameFileError("{!r} and {!r} are the same file".format(src, dst))
shutil.SameFileError: '/global/u2/d/deshpan5/atomate/test/Mgo-test1/POTCAR' and '/global/u2/d/deshpan5/atomate/test/Mgo-test1/POTCAR' are the same file
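As far as I can tell this matches Python's standard shutil behavior, which refuses to copy a path onto itself; a minimal standalone illustration (the file name here is just a throwaway I made up, not anything atomate writes):

import shutil

# create a dummy file, then try to copy it onto itself, mirroring what the
# copy task hits when the "previous" calc dir resolves to the current folder
with open("demo_POTCAR", "w") as f:
    f.write("dummy")

try:
    shutil.copy2("demo_POTCAR", "demo_POTCAR")  # src and dst are the same file
except shutil.SameFileError as exc:
    print(exc)  # -> 'demo_POTCAR' and 'demo_POTCAR' are the same file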

Awaiting your reply.

Thank you

Siddharth


Further Updates:

I was able to trace the problem to my my_qadapter.yaml file: I had changed the rocket launch setting there to 'single_shot' by mistake, and reverting it to 'rapidfire' mode fixed it.
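For reference, the relevant line in my_qadapter.yaml now follows the template from the atomate installation guide, i.e. something like (with <<INSTALL_DIR>> being the atomate config directory):

rocket_launch: rlaunch -c <<INSTALL_DIR>>/config rapidfire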

However, I still encounter problems when a scratch directory is provided in the my_fworker.yaml file. This is the error output:

/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/pymatgen/symmetry/bandstructure.py:63: UserWarning: The input structure does not match the expected standard primitive! The path can be incorrect. Use at your own risk.
  warnings.warn("The input structure does not match the expected standard primitive! "
/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/monty/shutil.py:40: UserWarning: Cannot copy /global/u2/d/deshpan5/atomate/test/mgo-5/launcher_2018-10-05-12-56-22-244753/tmpzudk7z1h to itself
  warnings.warn("Cannot copy %s to itself" % fpath)
Traceback (most recent call last):
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/fireworks/core/rocket.py", line 262, in run
    m_action = t.run_task(my_spec)
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/atomate/vasp/firetasks/run_calc.py", line 204, in run_task
    c.run()
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/custodian/custodian.py", line 345, in run
    Custodian._delete_checkpoints(cwd)
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/site-packages/monty/tempfile.py", line 118, in __exit__
    shutil.rmtree(fpath)
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/shutil.py", line 495, in rmtree
    onerror(os.path.islink, path, sys.exc_info())
  File "/global/homes/d/deshpan5/.conda/envs/datagen/lib/python3.7/shutil.py", line 493, in rmtree
    raise OSError("Cannot call rmtree on a symbolic link")
OSError: Cannot call rmtree on a symbolic link

A second launch (launcher_2018-10-05-13-01-48-776341, temporary folder tmpj8mx8918) produced the same warnings and an identical traceback ending in the same OSError.
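The last frame matches the documented behavior of shutil.rmtree, which refuses to operate on a path that is itself a symbolic link. A standalone reproduction, with throwaway names of my own choosing:

import os
import shutil
import tempfile

# make a real directory and a symlink pointing at it (names are arbitrary)
target = tempfile.mkdtemp()
link = target + "_link"
os.symlink(target, link)

try:
    shutil.rmtree(link)  # rmtree refuses symlinks, as in the traceback above
except OSError as exc:
    print(exc)  # -> Cannot call rmtree on a symbolic link

# clean up the throwaway paths
os.unlink(link)
shutil.rmtree(target)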

Thank you for your help

Siddharth


Hi Siddharth,

Can you attach:

  • your FW.json file
  • your my_fworker.yaml file?

Also, is your scratch_dir set to a symbolic link rather than a normal directory?
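A quick way to check from a login node (a sketch; the path below is just a placeholder for whatever is in your my_fworker.yaml):

import os

scratch = "/path/to/scratch_dir"  # placeholder: the scratch_dir value from my_fworker.yaml
print(os.path.islink(scratch))    # True means the path itself is a symlink
print(os.path.realpath(scratch))  # where it ultimately resolves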

Best,

Anubhav


Hi Anubhav,

Here is the my_fworker.yaml file:

name: CORI
category: ''
query: '{}'
env:
  db_file: /global/homes/d/deshpan5/atomate/config/db.json
  vasp_cmd: srun vasp_std
  scratch_dir: /global/cscratch1/sd/deshpan5/atomate

Here is the FW.json file in the job folder:

{
  "spec": {
    "_tasks": [
      {
        "files_to_write": [
          {
            "filename": "FW--MgO-structure_optimization",
            "contents": ""
          }
        ],
        "_fw_name": "FileWriteTask"
      },
      {
        "structure": {
          "@module": "pymatgen.core.structure",
          "@class": "Structure",
          "charge": null,
          "lattice": {
            "matrix": [
              [2.606553, 0.0, 1.504894],
              [0.868851, 2.457482, 1.504894],
              [0.0, 0.0, 3.009788]
            ],
            "a": 3.0097881143105405,
            "b": 3.0097883300592754,
            "c": 3.009788,
            "alpha": 60.000003627588235,
            "beta": 60.00000125635496,
            "gamma": 60.000003208803285,
            "volume": 19.279368831332597
          },
          "sites": [
            {
              "species": [{"element": "Mg", "occu": 1}],
              "abc": [0.0, 0.0, 0.0],
              "xyz": [0.0, 0.0, 0.0],
              "label": "Mg"
            },
            {
              "species": [{"element": "O", "occu": 1}],
              "abc": [0.5, 0.5, 0.5],
              "xyz": [1.737702, 1.228741, 3.009788],
              "label": "O"
            }
          ]
        },
        "vasp_input_set": {
          "@module": "pymatgen.io.vasp.sets",
          "@class": "MPRelaxSet",
          "structure": {
            "@module": "pymatgen.core.structure",
            "@class": "Structure",
            "charge": null,
            "lattice": {
              "matrix": [
                [2.606553, 0.0, 1.504894],
                [0.868851, 2.457482, 1.504894],
                [0.0, 0.0, 3.009788]
              ],
              "a": 3.0097881143105405,
              "b": 3.0097883300592754,
              "c": 3.009788,
              "alpha": 60.000003627588235,
              "beta": 60.00000125635496,
              "gamma": 60.000003208803285,
              "volume": 19.279368831332597
            },
            "sites": [
              {
                "species": [{"element": "Mg", "occu": 1}],
                "abc": [0.0, 0.0, 0.0],
                "xyz": [0.0, 0.0, 0.0],
                "label": "Mg"
              },
              {
                "species": [{"element": "O", "occu": 1}],
                "abc": [0.5, 0.5, 0.5],
                "xyz": [1.737702, 1.228741, 3.009788],
                "label": "O"
              }
            ]
          },
          "force_gamma": true
        },
        "_fw_name": "{{atomate.vasp.firetasks.write_inputs.WriteVaspFromIOSet}}"
      },
      {
        "vasp_cmd": ">>vasp_cmd<<",
        "job_type": "double_relaxation_run",
        "max_force_threshold": 0.25,
        "ediffg": null,
        "auto_npar": ">>auto_npar<<",
        "half_kpts_first_relax": false,
        "scratch_dir": ">>scratch_dir<<",
        "gamma_vasp_cmd": ">>gamma_vasp_cmd<<",
        "_fw_name": "{{atomate.vasp.firetasks.run_calc.RunVaspCustodian}}"
      },
      {
        "name": "structure optimization",
        "_fw_name": "{{atomate.common.firetasks.glue_tasks.PassCalcLocs}}"
      },
      {
        "db_file": ">>db_file<<",
        "additional_fields": {
          "task_label": "structure optimization"
        },
        "_fw_name": "{{atomate.vasp.firetasks.parse_outputs.VaspToDb}}"
      }
    ]
  },
  "fw_id": 13,
  "created_on": "2018-10-06T14:42:12.069950",
  "updated_on": "2018-10-06T14:49:13.941631",
  "launches": [
    {
      "fworker": {
        "name": "CORI",
        "category": "",
        "query": "{}",
        "env": {
          "db_file": "/global/homes/d/deshpan5/atomate/config/db.json",
          "vasp_cmd": "srun vasp_std",
          "scratch_dir": "/global/cscratch1/sd/deshpan5/atomate"
        }
      },
      "fw_id": 13,
      "launch_dir": "/global/u2/d/deshpan5/atomate/test/mgo-6/launcher_2018-10-06-14-49-13-899474",
      "host": "nid00733",
      "ip": "10.128.2.226",
      "trackers": [],
      "action": null,
      "state": "RUNNING",
      "state_history": [
        {
          "state": "RUNNING",
          "created_on": "2018-10-06T14:49:13.940078",
          "updated_on": "2018-10-06T14:49:13.940081"
        }
      ],
      "launch_id": 10
    }
  ],
  "state": "RUNNING",
  "name": "MgO-structure optimization"
}

This file is created in the job folder, and my directory structure looks like this:

– test_job_run_directory (contains the .py file and the POSCAR)

– test_job_run_directory/launcher_…/ (this contains two items, a 'scratch' link and a 'tmp…' folder; see the image below)

The tmpm1… folder is where the FW.json was located at that point.
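For completeness, the .py file is essentially the tutorial's option-3 script, i.e. roughly the following (a sketch, assuming the structure is read from the local POSCAR and the preset workflow from the tutorial):

from fireworks import LaunchPad
from pymatgen.core import Structure
from atomate.vasp.workflows.presets.core import wf_structure_optimization

# read the MgO structure from the POSCAR in this directory
struct = Structure.from_file("POSCAR")

# build the preset structure-optimization workflow and add it to the LaunchPad
wf = wf_structure_optimization(struct)
lpad = LaunchPad.auto_load()
lpad.add_wf(wf)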

Thank you for your help

Siddharth


Hi Siddharth,

Unfortunately I wasn't able to figure out the problem. The my_fworker.yaml file looks correctly configured as far as I can tell, and the FW.json shows that the scratch_dir is loaded correctly. As long as /global/cscratch1/sd/deshpan5/atomate exists and is accessible and writable from your compute node (and is not a directory that still needs to be created), it should work. Since /global/cscratch1 should be accessible from all Cori nodes, I think you are OK on the second point.
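If you want to double-check how the >>scratch_dir<< placeholder in the FW.json resolves, atomate's env_chk helper can be exercised directly; a quick sketch with your fworker env pasted in by hand:

from atomate.utils.utils import env_chk

# mimic the spec a Firework sees at runtime: the fworker env is injected as "_fw_env"
fake_spec = {"_fw_env": {"scratch_dir": "/global/cscratch1/sd/deshpan5/atomate"}}

# env_chk resolves ">>key<<" placeholders against _fw_env
print(env_chk(">>scratch_dir<<", fake_spec))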

To fix this, I think you'll likely either need to turn off the scratch_dir (set it to null) or debug the problem yourself. For the latter, I'd probably start by checking whether it's possible to call Python's shutil.rmtree() on a (test) directory on cscratch1. If you get the same error, I'd contact NERSC, since the problem would then be outside of our codebases and due to the way their filesystem is mounted. If that works, I'd suggest starting an interactive NERSC session and inspecting the state of the code at various points around the line of the reported error until you get a sense of what's happening.
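Roughly something like this, run on a compute or interactive node (the test directory name is arbitrary):

import os
import shutil

# create and then remove a throwaway directory on cscratch1
test_dir = "/global/cscratch1/sd/deshpan5/atomate/rmtree_test"  # arbitrary test path
os.makedirs(test_dir, exist_ok=True)
open(os.path.join(test_dir, "dummy.txt"), "w").close()

shutil.rmtree(test_dir)  # if this raises the same OSError, the issue is on the filesystem side
print("rmtree OK")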

Note - I couldn’t see the attached image.
