I intend to run FireWorks on an HPC cluster through a queuing system (SLURM). I have a few different types of jobs; to keep things simple, let's say I have two types: parallel jobs, which run on several cores, and sequential ones.
I have found in the tutorial (at https://pythonhosted.org/FireWorks/queue_tutorial.html) that it is possible to discriminate between different types of jobs using the FireWorker file:
"Perhaps the most severe limitation is that the Queue Launcher submits queue scripts with identical queue parameters (e.g., all jobs will have the same walltime, use the same number of cores, etc.)
If you have just two or three sets of queue parameters for your different job types, you can work around this limitation. First, recall that you can use the FireWorker file to restrict which jobs get run (see tutorial). If you have two types of jobs, you can run two Queue Launchers. Each of these Queue Launchers uses different queue parameters, corresponding to the two types of jobs you'd like to run. In addition, each Queue Launcher should be run with a corresponding FireWorker that restricts the jobs for that launcher to the desired job type."
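As far as I understand that workaround, it would look something like the sketch below. The file names are my own invention, and I am assuming that the FWorker `category` field is matched against a `_category` key in each Firework's spec (I could not confirm this from the tutorial):

```yaml
# my_fworker_parallel.yaml (hypothetical file name)
# This worker would only pull fireworks whose spec contains
# _category: parallel
name: parallel worker
category: parallel
query: '{}'

# my_fworker_serial.yaml (second file, shown for comparison)
name: serial worker
category: serial
query: '{}'
```

Each Queue Launcher would then be started with its own pair of files, e.g. `qlaunch -w my_fworker_parallel.yaml -q my_qadapter_parallel.yaml rapidfire` (if I read the qlaunch options correctly), and every Firework would carry the matching `_category` in its spec.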
But I don't understand how exactly I can use the FireWorker file for this purpose: the FireWorker file seems to be a kind of aliasing system that has to be integrated into custom-defined tasks … In the file my_fworker.yaml from the tutorial, there are these lines:
name: my first fireworker
category: ''
query: '{}'
The 'name' parameter is used in the tutorial, which also explains the 'env' parameter, but I found no information on 'category' and 'query'.
Could you explain in more detail how to link some fireworks with one qadapter and others with another qadapter?
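For reference, here is what I imagine on the qadapter side: two files that differ only in their queue parameters, something like the sketch below (the file name and the parameter values are my guesses, based on the common qadapter template for SLURM):

```yaml
# my_qadapter_parallel.yaml (hypothetical file name)
_fw_name: CommonAdapter
_fw_q_type: SLURM
ntasks: 16
walltime: '24:00:00'
# presumably each qadapter points at the matching FireWorker file:
rocket_launch: rlaunch -w my_fworker_parallel.yaml singleshot
```

A second file (say, my_qadapter_serial.yaml) would then request a single task and reference the serial FireWorker. But I am not sure this is the intended way to pair the two files, hence my question.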
Thanks a lot,