Issues: a-r-r-o-w/finetrainers
- #246: Possible issue when resuming from checkpoint with fp8 (opened Jan 25, 2025 by neph1)
- #227: Error when using batch_size > 1 with multi-GPU training in bf16 precision (opened Jan 16, 2025 by Yavuzhan-Baykara)
- #220: Question about finetuning with different resolution and frame nums (opened Jan 15, 2025 by zqh0253)
- #212: enable_model_cpu_offload causes NCCL timeout during multi-gpu training (opened Jan 13, 2025 by Harahan)
- #193: Add option to specify config.json instead of individual training parameters in script (opened Jan 7, 2025 by a-r-r-o-w)
- #173: Batch size of 2 will break the training loop on LTX-loRA finetuning (opened Jan 3, 2025 by ArEnSc)