Hi, I have started to try out the mag pipeline. Instead of starting with my own data, I decided to first run the test profile, which makes more sense :-) But I ran into some configuration trouble. Here is what I did.
That finished okay, but it ran on the login node of our HPC cluster, which is not okay.
So I created a custom config file, based on the base.config file provided with the pipeline, and modified it slightly.
See attached here: saga_mag.config.txt
In short, I added these three lines to the process scope:
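(The three lines themselves are not preserved in this issue. For a Slurm cluster, the addition to the process scope typically looks something like the sketch below; the queue name and account are placeholders, not values from the attached config.)

```groovy
// Sketch only: route every task to Slurm instead of running locally.
// 'normal' and '--account=nnXXXXk' are placeholder values.
process {
    executor       = 'slurm'
    queue          = 'normal'
    clusterOptions = '--account=nnXXXXk'
}
```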
I then also found out that I needed to address the missing check_max function in the config file, so I copied that entire function into my config file:
// Function to ensure that resource requirements don't go beyond
// a maximum limit
def check_max(obj, type) {
    if (type == 'memory') {
        try {
            if (obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1)
                return params.max_memory as nextflow.util.MemoryUnit
            else
                return obj
        } catch (all) {
            println " ### ERROR ### Max memory '${params.max_memory}' is not valid! Using default value: $obj"
            return obj
        }
    } else if (type == 'time') {
        try {
            if (obj.compareTo(params.max_time as nextflow.util.Duration) == 1)
                return params.max_time as nextflow.util.Duration
            else
                return obj
        } catch (all) {
            println " ### ERROR ### Max time '${params.max_time}' is not valid! Using default value: $obj"
            return obj
        }
    } else if (type == 'cpus') {
        try {
            return Math.min( obj, params.max_cpus as int )
        } catch (all) {
            println " ### ERROR ### Max cpus '${params.max_cpus}' is not valid! Using default value: $obj"
            return obj
        }
    }
}
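(For context: check_max on its own does nothing; it only has an effect when it wraps the resource requests inside process selectors, as the pipeline's base.config does. A sketch of that usage follows; the process name and numbers are illustrative, not taken from the attached config.)

```groovy
// Illustrative only: cap each request at the params.max_* limits
// via check_max, scaling with the retry attempt.
process {
    withName: MEGAHIT {
        cpus   = { check_max( 8 * task.attempt, 'cpus' ) }
        memory = { check_max( 64.GB * task.attempt, 'memory' ) }
        time   = { check_max( 16.h * task.attempt, 'time' ) }
    }
}
```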
Next I ran the pipeline with this config. It fails, however, because the Slurm jobs do not get memory allocated. I can see that when I check the .command.run files in the work directories of different tasks: a Slurm job does get requested, but the .command.run file does not get memory, time, and CPUs assigned. I do not understand why it is failing. I checked the Nextflow Slack group and found a thread where a similar set-up is described: https://nextflow.slack.com/archives/CEQBS091V/p1614245714076900
Any ideas what I am missing here?
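(The actual header isn't preserved in this issue. For comparison, when resources are applied correctly, the Slurm header that Nextflow writes at the top of .command.run contains #SBATCH directives roughly like the following; the job name, path, and values here are made up.)

```bash
#!/bin/bash
#SBATCH -J nf-MEGAHIT                       # job name (made up)
#SBATCH -o /cluster/work/.../.command.log   # log path (made up)
#SBATCH --no-requeue
#SBATCH -c 8                                # cpus from the process config
#SBATCH -t 16:00:00                         # time limit
#SBATCH --mem 65536M                        # memory; missing in the failing runs
```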
Hmm, I'm not sure. Unfortunately it's hard to check from my phone...
However, if you've copied from the base config, that is maybe overkill and makes it harder to debug. You don't need everything from in there just to get the pipeline working with your cluster.
I would suggest starting from scratch and making the config following this tutorial:
I'm going to close this issue, as this is a Nextflow configuration issue rather than a pipeline issue, but feel free to keep commenting here and I'll try to keep checking it.
Or even better, ask in the #configs channel on the nf-core Slack (see https://nf-co.re/join for instructions if you're not already there) if you're still having issues.
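(For what it's worth, the kind of minimal institutional config such a tutorial arrives at only needs the executor settings and the resource caps; the per-process requests then come from the pipeline's own base.config. A sketch, where the queue, account, and limits are placeholders for the cluster in question:)

```groovy
// Minimal cluster config sketch; all values are placeholders.
params {
    config_profile_description = 'Example Slurm cluster profile'
    max_memory = 180.GB   // largest memory a single node offers
    max_cpus   = 40
    max_time   = 168.h
}

process {
    executor       = 'slurm'
    queue          = 'normal'
    clusterOptions = '--account=nnXXXXk'
}

executor {
    queueSize = 50   // limit concurrent Slurm submissions
}
```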