Currently, new xml files are created for a design point before submitting the slurm job that runs the objective calculation/overlap check for that point (the slurm wrapper). This could be better parallelized if the xml files were created within the slurm job itself.
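A minimal sketch of what the proposed change could look like, assuming the design parameters are passed to the slurm job as a small JSON file; the function names and XML layout here are illustrative, not the project's actual API:

```python
import json
import sys
import xml.etree.ElementTree as ET
from pathlib import Path


def write_bin_xml(bin_name: str, bin_params: dict, out_dir: Path) -> Path:
    """Render a placeholder simulation XML for one p-eta bin."""
    root = ET.Element("simulation", name=bin_name)
    for key, value in bin_params.items():
        ET.SubElement(root, "parameter", name=key, value=str(value))
    path = out_dir / f"{bin_name}.xml"
    ET.ElementTree(root).write(path)
    return path


def main(param_file: str, out_dir: str) -> None:
    # Design parameters arrive as a small JSON file passed to the slurm job,
    # so the XML files are created on the worker node instead of up front.
    params = json.loads(Path(param_file).read_text())
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for bin_name, bin_params in params.get("bins", {}).items():
        write_bin_xml(bin_name, bin_params, out)


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```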
@karthik18495 after looking back into this, I think creating the xml files from the SlurmQueueClient makes sense, at least for the slurm implementation; otherwise the dict of design parameters would have to somehow be passed through the master slurm job for that trial. I'm picturing the flow as:
1. From SlurmQueueClient, create the xml files and launch the master slurm job for the trial (which calls a python script that knows which xml files to load).
2. In the master slurm job, launch a job simulating each p-eta bin and wait for them to complete.
3. After all sub-jobs complete, compute the desired objectives.
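A rough sketch of the master-job logic in steps 2-3, assuming `sbatch` is available on the node and that each p-eta bin already has its own xml file; the wrapper script name, directory, and objective step are placeholders, not the project's actual code:

```python
import subprocess
from pathlib import Path

BIN_XML_DIR = Path("design_point_xml")  # hypothetical location of the per-bin xml files


def launch_bin_jobs(xml_files):
    """Submit one simulation job per p-eta bin and block until all finish."""
    procs = [
        # sbatch --wait keeps the submission call open until that job ends
        subprocess.Popen(["sbatch", "--wait", "run_bin.sh", str(xml)])
        for xml in xml_files
    ]
    return all(p.wait() == 0 for p in procs)


def main():
    xml_files = sorted(BIN_XML_DIR.glob("*.xml"))
    if launch_bin_jobs(xml_files):
        # All sub-jobs done: aggregate their outputs into the trial objectives
        # (placeholder; the real objective/overlap calculation lives elsewhere).
        print("all bins finished, computing objectives ...")
    else:
        print("one or more bin jobs failed")


if __name__ == "__main__":
    main()
```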
However, I agree that it should be moved elsewhere for the joblib implementation.
Right, sure. I agree with these steps. It would also be great to keep thinking about a common software design across all these runners (a possible shared interface is sketched after this list). We are going to have 3 runners:
SlurmRunner
JoblibRunner
PanDARunner
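One possible shape for that common design, sketched with Python's `abc` module; the class and method names below are suggestions only, not an agreed interface:

```python
from abc import ABC, abstractmethod


class BaseRunner(ABC):
    """Common contract: a runner turns a dict of design parameters
    into the objective values for that design point."""

    @abstractmethod
    def submit(self, design_params: dict) -> str:
        """Launch the evaluation of one design point and return a job/trial id."""

    @abstractmethod
    def wait(self, job_id: str) -> None:
        """Block until the given design point has finished."""

    @abstractmethod
    def collect(self, job_id: str) -> dict:
        """Return the computed objectives (and overlap-check result)."""


class SlurmRunner(BaseRunner):
    ...  # would wrap SlurmQueueClient and create the xml inside the master slurm job


class JoblibRunner(BaseRunner):
    ...  # would run the per-bin simulations locally, e.g. via joblib.Parallel


class PanDARunner(BaseRunner):
    ...  # would submit through PanDA
```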
I shall create a new pull request for the JoblibRunner by tomorrow.