Thanks for the answer,
I am familiar with the MPI framework; I have used it in the past.
Maybe I am not familiar enough with FireWorks, but if I understand you correctly, FireWorks clients (or compute nodes) could be set up to listen for jobs.
My only concern is the following:
I want to programmatically set up each node (compute resource) to listen for jobs.
Let's say I have four servers that all share FireWorks and Python via NFS. A fifth node runs MongoDB and has also been programmatically set up with the FireWorks framework to submit the same job as four tasks or workflows.
You are right: the jobs are the same Python app with instructions on how to slice the big data.
All I want to do is submit these tasks or workflows and schedule them for parallel execution on each node.
If you are telling me that FireWorks has nothing to synchronize this and instead relies on an external framework like MPI to do it via hooks, then I will need to go that way.
I understood that FireWorks also has a built-in queue, possibly via MongoDB or otherwise. I apologize that I have not looked at the source yet…
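For reference, the "slice big data" instructions the four identical jobs would receive might be computed like this minimal pure-Python sketch (slice_indices is a hypothetical helper, not part of FireWorks; each (start, stop) pair would be passed as parameters to one node's job):

```python
def slice_indices(n_items, n_nodes):
    """Split range(n_items) into n_nodes contiguous (start, stop) chunks.

    Chunk sizes differ by at most one item, so the load per node is even.
    """
    base, extra = divmod(n_items, n_nodes)
    chunks, start = [], 0
    for i in range(n_nodes):
        # The first `extra` chunks absorb the remainder, one item each.
        stop = start + base + (1 if i < extra else 0)
        chunks.append((start, stop))
        start = stop
    return chunks

# Four servers, e.g. 10 million records to process:
print(slice_indices(10_000_000, 4))  # four even (start, stop) ranges
```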
Let me know if I am correct, and I will use your mlaunch tutorial!
Thanks again, --sasha
On Monday, March 2, 2015 at 1:21:11 PM UTC-5, Anubhav Jain wrote:
I am a bit confused, as the queue adapters are for working with queueing systems such as PBS, SLURM, SGE, etc. That is, a queue adapter is used when you are submitting jobs to a shared computing cluster on which you cannot run jobs directly, because the cluster administrator has set up one of these systems for submitting your jobs. In that case, the queue adapter you use is simply the one that matches the queueing system set up by the cluster administrator (generally, the CommonAdapter covers most of these systems).
It looks like in your case, you are looking to multiprocess many jobs on a node with different parameters. In this case, have you looked at the following documentation on “mlaunch”?
Basically, if you enter different workflows in your database, you can use the mlaunch command to parallelize their execution over multiple cores. So you would explicitly enter all your different parameter sets as different workflows (or different Fireworks within the same workflow) and then use mlaunch to parallelize.
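As a concrete illustration of "explicitly enter all your different parameter sets", the sketch below only builds the per-parameter shell command strings in plain Python; in FireWorks, each string would become a ScriptTask wrapped in its own Firework and added to the LaunchPad, after which mlaunch pulls and runs them in parallel. The script name run_chunk.py, the paths, and the flags are all hypothetical:

```python
# One parameter set per worker node; the input path is a made-up NFS location.
param_sets = [
    {"chunk": i, "input": "/nfs/share/bigdata.dat"}
    for i in range(4)
]

# Each command would be the payload of one ScriptTask in its own Firework.
commands = [
    "python run_chunk.py --input {input} --chunk {chunk}".format(**p)
    for p in param_sets
]

for cmd in commands:
    print(cmd)  # each line is one ScriptTask's shell command
```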
I hope that helps, let me know if not.
On Fri, Feb 27, 2015 at 1:54 PM, [email protected] wrote:
Hi, sorry to intrude on your time,
I am testing FireWorks as a master-worker pattern to replace a commercial piece of software.
This is my setup:
an NFS share where the Python interpreter, libraries, and FireWorks reside
a bunch of grid computing nodes, all attached to the NFS share
I am able to run singleshot and rapidfire clients; no issues there.
I am exploring queue adapters, and I am interested in achieving the following:
MongoDB is on node1
I would like to set up a workflow that submits a ScriptTask for each bash script; each script runs a Python multiprocessing application (for full node/core utilization) and passes different input parameters
all of this works in singleshot mode
Basically, the main business-logic application does a chunk of the job…
Could you please suggest a queue adapter that I should use and modify to fit my purposes…
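The per-node application described above (a Python multiprocessing app that saturates all cores on its chunk of data) could look like the sketch below; process_record is a placeholder for the real business logic, not the actual application:

```python
from multiprocessing import Pool, cpu_count

def process_record(x):
    """Placeholder for the real per-record business logic."""
    return x * x

def run_chunk(records):
    # One worker process per core, for full-node utilization.
    with Pool(processes=cpu_count()) as pool:
        return pool.map(process_record, records)

if __name__ == "__main__":
    # This script would be invoked by the bash script in the ScriptTask,
    # with its input parameters selecting which chunk to process.
    print(run_chunk(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```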
You received this message because you are subscribed to the Google Groups “fireworkflows” group.
To view this discussion on the web visit https://groups.google.com/d/msgid/fireworkflows/ea011431-8cb4-4d4f-8601-ede9fe34f7c1%40googlegroups.com.