Update and restart a firework task killed due to walltime error

Hi,

I’m trying to understand how to handle a firework (in this case a VASP calculation) that was killed due to a walltime error. I use a simple script to create a YAML file for a Ba atom.

from fireworks import Firework
from fireworks_vasp.tasks import WriteVaspInputTask, VaspCustodianTask, VaspAnalyzeTask
from pymatgen.core.structure import Structure
from pymatgen.io.vaspio import Poscar


def create_fireworks(structure, keyVal, viset='MPVaspInputSet', params={}, handlers="all",
                     vasp_cmd=["aprun", "-n", "16", "/path/to/vasp"]):
    name = structure.formula
    wf_name = name
    t1 = WriteVaspInputTask(structure=structure, vasp_input_set=viset, input_set_params=params)
    t2 = VaspCustodianTask(vasp_cmd=vasp_cmd, handlers=handlers)
    t3 = VaspAnalyzeTask()
    workflow = Firework([t1, t2, t3], name=name)
    return workflow


if __name__ == '__main__':
    inFileName = 'POSCAR_Ba'
    crystalStruc = Structure.from_file(inFileName)
    keyVal = 'Ba-Atom'
    workflow = create_fireworks(crystalStruc, keyVal)
    workflow.to_file("VASP_Ba.yaml")
    print('Program Complete')

I added the generated VASP_Ba.yaml file to the MongoDB with `lpad add VASP_Ba.yaml`. The MongoDB in this case is on a remote host that is attached to a specific port of localhost through an ssh tunnel.

When I ran this job for 5 minutes, the calculation did not converge and the job got killed due to the walltime limit. Somehow the custodian error handler was not activated to create a soft stop. As a result, the FW.json file (attached here) still shows a state of “RUNNING”. My question is now twofold:

  1. How do I properly change the state of the firework to ‘FIZZLED’?
  2. How do I restart the calculation from task 2 (`VaspCustodianTask`), using the existing history in the launch directory (which would also require replacing POSCAR with CONTCAR)?

I realize that I can achieve the second step by using `append_wf` of the LaunchPad, but this requires creating a new firework with a new fw_id. Is there a better way to do it, say through some --task-recovery option of rerun?
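For reference, this is roughly what I have in mind with `append_wf`; it is untested, and the fw_id and the tasks in the restart firework are just placeholders:

from fireworks import Firework, Workflow
from fireworks.core.launchpad import LaunchPad
from fireworks_vasp.tasks import VaspCustodianTask, VaspAnalyzeTask

lp = LaunchPad.auto_load()  # assumes a configured my_launchpad.yaml

failed_fw_id = 1  # fw_id of the firework that hit the walltime (example value)

# a new firework that re-runs custodian + analysis; it will get a new fw_id
restart_fw = Firework(
    [VaspCustodianTask(vasp_cmd=["aprun", "-n", "16", "/path/to/vasp"], handlers="all"),
     VaspAnalyzeTask()],
    name="Ba1 restart")

# attach it as a child of the failed firework
lp.append_wf(Workflow([restart_fw]), fw_ids=[failed_fw_id])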

FW.json (3.58 KB)

  1. Use the detect_lostruns command, e.g. “lpad detect_lostruns --fizzle”. For more, see https://pythonhosted.org/FireWorks/failures_tutorial.html

  2. In general, if you want to simply rerun the calculation starting from the failed task, you can use “lpad rerun_fws -i <FW_ID> --task-level” (see here: https://pythonhosted.org/FireWorks/rerun_tutorial.html). In your specific case, however, it seems you don’t want to simply rerun starting from the second task, but want to perform additional actions, e.g., copying CONTCAR to POSCAR before running the task again. There is no way to do that without adding an additional Firework, since FireWorks is not tuned for running VASP; it is just workflow software. If you simply did the task-level rerun, it would run the task with the original POSCAR. Of course, you could also modify VaspCustodianTask so that if a CONTCAR is detected in the directory, it moves it to the POSCAR before executing VASP (see the sketch below).
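As a rough illustration of that last option, a check along these lines could run at the start of the custodian task, before VASP is launched; this is only a sketch, and the helper name and its placement inside the task are illustrative rather than part of fireworks_vasp:

import os
import shutil

def continue_from_contcar(run_dir="."):
    """If a non-empty CONTCAR from a previous (killed) run exists,
    copy it over POSCAR so the rerun starts from the last ionic step.
    Returns True if a copy was made."""
    contcar = os.path.join(run_dir, "CONTCAR")
    poscar = os.path.join(run_dir, "POSCAR")
    if os.path.isfile(contcar) and os.path.getsize(contcar) > 0:
        shutil.copy(contcar, poscar)
        return True
    return False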

Best,

Anubhav

···


  1. I think the problem is that the Firework is not marked as FIZZLED; rather, it still shows as RUNNING, since I think the job got killed before an update could be made. I’m wondering how to properly change its state under such conditions.
  2. I just wanted to make sure there wasn’t a better way before I implemented this approach. Also, in the case of multiple restarts, is there a way to obtain the full set of links between all the Fireworks in the workflow, preferably from within Python code?
···


The detect_lostruns command is for jobs that appear RUNNING but are actually killed (the docs explain this).

To get all the links, you can use the “_get_launchpad_and_fw_id” reserved keyword in the spec:

http://pythonhosted.org/FireWorks/reference.html#reserved-keywords-in-fw-spec

That isn’t the links directly, but you can use that information to pull whatever information about the workflow you want.

···


Hi Anubhav,

  • Unfortunately I don’t see a keyword called “_get_launchpad_and_fw_id”. Did you mean “_add_launchpad_and_fw_id”?
  • I see that Links is an embedded class inside the Workflow class. Is there a clean way to get this information from Python once you have a handle on the Workflow object through get_wf_by_fw_id() from the LaunchPad? I know the fw_id of the parent firework that failed.
  • The motivation is to get a sense of how many times the job has been restarted, which would be available by getting all the links in the workflow containing the Firework of interest.
···


(posting original response from Feb 12 to Google Group):

  1. Yes.
  2 + 3. It doesn’t matter that it is an embedded class: you can get the Links object by using “wf.links”, where wf is the Workflow. Note that this is a dict, so it is easy to see all parents using the keys() method and all children using the values() method. There is also a parent_links member that returns a dict that helps track down the parents of a run. A sketch follows below.
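Something like the following, untested; the fw_id is a placeholder, and I am assuming parent_links is reached through the Links object:

from fireworks.core.launchpad import LaunchPad

lp = LaunchPad.auto_load()        # assumes a configured my_launchpad.yaml
fw_id = 1                         # fw_id of the firework of interest (example value)

wf = lp.get_wf_by_fw_id(fw_id)    # Workflow containing that firework
links = wf.links                  # behaves like a dict: {fw_id: [child fw_ids]}

parents = list(links.keys())                              # fw_ids appearing as parents
children = [c for kids in links.values() for c in kids]   # all child fw_ids
child_to_parents = links.parent_links                     # {child fw_id: [parent fw_ids]}

# the number of entries in links (i.e., Fireworks in the workflow) gives a
# rough sense of how many restart fireworks have been appended over time
print(len(links), child_to_parents)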