How much memory is required? #4
Hi, different tasks and input sizes need different amounts of GPU memory. I don't remember the exact GPU memory usage of each task, but all the experiments can be done with a single GPU (12 GB memory). You can first use a smaller batch size (like 4) or patch size (like 24) to make sure the code runs through and everything else is OK, then increase the batch size or patch size to match the paper. |
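A reduced-footprint sanity-check run along those lines might look like the following. This is only a sketch: the flag names (`--patch_size`, `--batch_size`, etc.) are taken from the EDSR-style `main.py` command shown later in this thread, and the model/save names here are illustrative, not the paper's settings.

```shell
# Hypothetical sanity-check invocation: small batch size (4) and patch
# size (24) to confirm the code runs before scaling up to paper settings.
python main.py --model RNAN --scale 2 --patch_size 24 --batch_size 4 --save RNAN_sanity_check
```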
I used a V100 GPU, which has 32 GB of memory, and still ran out of memory while validating during training of the DN-RGB model. Any suggestion to fix the problem? A smaller batch size is not helping. Training takes no more than 10 GB of GPU memory, but validation runs out of memory. I even changed the validation set to smaller images; it is still not helping. |
I solved this by adding '--chop', which crops the test image into patches of size <=100*100, avoiding the memory problem. |
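The idea behind `--chop` can be sketched as follows. This is a hypothetical minimal illustration, not the repository's actual implementation: the real `forward_chop` in the EDSR/RNAN code also overlaps tiles and scales coordinates by the upscaling factor. `tiled_apply` is an assumed name; it processes an image tile by tile so peak memory stays bounded by the tile size rather than the full image.

```python
import numpy as np

def tiled_apply(img, fn, tile=100):
    """Apply fn to an H x W x C image in tiles of at most tile x tile.

    Each tile is processed independently, so only one tile needs to
    fit in memory at a time (the core idea behind --chop).
    """
    h, w = img.shape[:2]
    out = np.empty_like(img)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = fn(patch)
    return out

# Example: an identity "model" on a 250x180 image, processed in 100x100 tiles.
img = np.random.rand(250, 180, 3).astype(np.float32)
restored = tiled_apply(img, lambda p: p)
```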
|
Yeah, I found the '--chop' option and it solved the problem at the time. Thanks.
I use --chop and 4 2080 Ti GPUs but still run out of memory during validation. Can you tell me your GPU setting? Thanks.
|
I already use the --chop option and still get OOM. |
CUDA_VISIBLE_DEVICES=4,5,6,7 python main.py --model RNAN --scale 2 --save RNAN_SR_F64G10P48BIX2 --save_results --chop --patch_size 96 --n_GPUs 4 --batch_size 32 |
This issue was a long time ago, and I used one Tesla V100 to run the code. If it still runs out of memory, maybe you can crop the image into patches before passing them through the model. But I am not sure how they did it on a Titan Xp GPU, as mentioned in the paper.
|
Thank you for your work. But when I try to run the code, I get an "out of memory" error. I would like to know how much memory is required for this.
Thank you!