CUDA out of memory #42
Comments
Same problem encountered.
I can only set the batch size to 14 using a single 3090 graphics card, and the network training is very unstable.
Same problem encountered!
Can I see the results of your reproduction? I used a 3090 graphics card with a batch size of 14 and got an AP_40 of 17.
Car AP@0.70, 0.70, 0.70: I only get an AP_40 of 19.81 at the Moderate level.
Hello, may I ask which GPU model you are using?
A single 3090 GPU with batch_size=14.
The original version targets the 3090, while the stable version targets the A100. With the Group DETR technique, CUDA memory usage can reach about 40 GB.
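For intuition on why grouped queries push memory that high, here is a minimal PyTorch sketch (not the repository's code; the query count, group count, and model width are illustrative assumptions). Training with K query groups carries K times as many decoder queries, with self-attention restricted to each group, so activation memory grows roughly with the number of groups.

```python
import torch
import torch.nn as nn

# Illustrative numbers only (assumptions, not MonoDETR's actual settings).
num_queries, group_detr, d_model = 50, 11, 256

# Group-DETR-style training keeps one query set per group,
# i.e. group_detr * num_queries decoder queries instead of num_queries.
query_embed = nn.Embedding(num_queries * group_detr, d_model)
q = query_embed.weight.unsqueeze(0)              # (1, K * num_queries, d_model)

# Self-attention is blocked across groups: each group attends only to itself.
n = num_queries * group_detr
attn_mask = torch.ones(n, n, dtype=torch.bool)   # True = not allowed to attend
for g in range(group_detr):
    s = g * num_queries
    attn_mask[s:s + num_queries, s:s + num_queries] = False

self_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
out, _ = self_attn(q, q, q, attn_mask=attn_mask)
print(out.shape)  # (1, group_detr * num_queries, d_model)

# With group_detr = 1 the query tensor (and the attention activations behind it)
# shrinks back to the original single-group size, which is the adaptation suggested below.
```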
If you want to adapt the model to a 3090, you can set the group_detr param in the cfg to 1 and comment out lines 467-473 (the conditional part) of https://github.com/ZrrSkywalker/MonoDETR/blob/main/lib/models/monodetr/depthaware_transformer.py; the model then turns back into the original version.
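As a concrete illustration of that suggestion, here is a small sketch of the config-side change (hypothetical: the exact key nesting inside monodetr.yaml is an assumption, so locate the real group_detr entry in your own config):

```python
import yaml

# Sketch only: load the training config and fall back to the single-group variant.
cfg_path = "configs/monodetr.yaml"   # path assumed from the repo layout
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["model"]["group_detr"] = 1       # key path is an assumption; adjust to your config

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)

# The second half of the advice is a manual edit: comment out lines 467-473
# (the conditional/group block) in lib/models/monodetr/depthaware_transformer.py.
```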
Hello, very good work. I trained MonoDETR on a single 3090 and got the "CUDA out of memory" error. All my configurations use the default monodetr.yaml settings, and my environment is set up according to the requirements in README.md. What could be the reason for this problem during the training phase? Very much looking forward to your reply, thank you!