Fix mem sort for HISAT2 output #158
Conversation
Wow, that's like 30% over... Are you sure that the final script is actually using the …? We should probably at least leave some overhead from that calculation.
Hm, what I thought when looking at the command is that the … I'd also be happy to compute the …
I think the …
It is indeed - I changed to using …
Nice! Is it possible for you to test this before merging to see if it fixes the problem? Or should we just merge?
I can let Silvia test tomorrow 👍
We're testing at the moment - will keep you posted on updates
Local testing revealed that this seems to work fine, even for weird samples that previously failed to run 👍
main.nf (Outdated)
```diff
@@ -671,11 +671,11 @@ if(params.aligner == 'hisat2'){
     file "where_are_my_files.txt"

     script:
-    def avail_mem = task.memory ? "-m ${task.memory.toBytes() / task.cpus}" : ''
+    def avail_mem = task.memory ? "-m ${(task.memory.toBytes() - 6000000000) / task.cpus}" : ''
```
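For orientation, here is a minimal sketch of how a string like this typically ends up in the sort command: samtools sort's `-m` option is memory per sorting thread, which is why the value is divided by `task.cpus`. The variable and file names below are illustrative assumptions, not the pipeline's exact script block.

```groovy
// Illustrative sketch only; hisat2_bam and the output name are hypothetical.
script:
def avail_mem = task.memory ? "-m ${(task.memory.toBytes() - 6000000000) / task.cpus}" : ''
"""
samtools sort \\
    -@ ${task.cpus} ${avail_mem} \\
    -o ${hisat2_bam.baseName}.sorted.bam \\
    ${hisat2_bam}
"""
```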
Nice! Could you just add some extra code that checks that this isn't a negative number / stupidly small? If so, it can just be a blank string and left unspecified again. I'm just thinking that on some small (e.g. testing) environments, there may be under 6GB of memory given to this process.
Done :-)
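For reference, a guard along the lines requested above could look roughly like the following. This is a sketch only, reusing the 6 GB overhead from the diff and a made-up ~1 GB lower bound; the merged code may differ.

```groovy
// Sketch: leave -m unset if the per-thread budget would be negative or tiny.
script:
def avail_mem = ''
if (task.memory) {
    def per_cpu = ((task.memory.toBytes() - 6000000000) / task.cpus) as long
    if (per_cpu > 1000000000L) {   // only pass -m if more than ~1 GB per thread remains
        avail_mem = "-m ${per_cpu}"
    }
}
```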
Copying in the conversation from the nf-core Slack:
So we keep seeing some issues with the RNAseq workflow when sorting the HISAT2 output - the memory requested on the server seems to be higher than what is provided, so the jobs get killed by the scheduler...
e.g. something like
Excceed job memory limit 63348736 > 82914560
which happens here
rnaseq/main.nf, line 676 (commit e837637)
I'd suggest dropping this memory request, as I never really saw issues without it in other pipelines sorting BAM files...
Sarek e.g. does something similar, setting it permanently to 2G:
https://github.com/SciLifeLab/Sarek/blob/88921e6944604d077afd8a5bc0a89b8e93937313/main.nf#L174
Suggestions?
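For comparison, the fixed-limit approach described above amounts to hard-coding the per-thread sort memory instead of deriving it from task.memory. A rough sketch of that idea (not Sarek's literal code; `bam` is a placeholder name):

```groovy
// Sketch of the fixed-limit alternative: always cap samtools sort at 2G per thread.
script:
"""
samtools sort -m 2G -@ ${task.cpus} -o ${bam.baseName}.sorted.bam ${bam}
"""
```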