
Error when using batch_size > 1 with multi-GPU training in bf16 precision #227

Closed
Yavuzhan-Baykara opened this issue Jan 16, 2025 · 2 comments

@Yavuzhan-Baykara
System Info

Python version: 3.11.11
CUDA version: 12.4
Diffusers version: 0.32.1
2 x H100 SXM

Information

  • The official example scripts
  • My own modified scripts

Reproduction

%%bash
export WANDB_MODE="offline"
export NCCL_P2P_DISABLE=1
export TORCH_NCCL_ENABLE_MONITORING=0
export FINETRAINERS_LOG_LEVEL=DEBUG

GPU_IDS="0,1"

DATA_ROOT="/workspace/finetrainers/video-dataset-disney"
CAPTION_COLUMN="prompt.txt"
VIDEO_COLUMN="videos.txt"
OUTPUT_DIR="/workspace/finetrainers/output/"

ID_TOKEN="afkx"

# Model arguments

model_cmd="--model_name hunyuan_video
--pretrained_model_name_or_path /root/.cache/huggingface/hub/models--hunyuanvideo-community--HunyuanVideo/snapshots/e8c2aaa66fe3742a32c11a6766aecbf07c56e773"

# Dataset arguments

dataset_cmd="--data_root $DATA_ROOT
--video_column $VIDEO_COLUMN
--caption_column $CAPTION_COLUMN
--id_token $ID_TOKEN
--video_resolution_buckets 17x512x768
--caption_dropout_p 0.05"

# Dataloader arguments

dataloader_cmd="--dataloader_num_workers 0"

# Diffusion arguments

diffusion_cmd=""

# Training arguments

training_cmd="--training_type lora
--seed 42
--mixed_precision bf16
--batch_size 2
--train_steps 2
--rank 16
--lora_alpha 128
--target_modules to_q to_k to_v to_out.0
--gradient_accumulation_steps 1
--gradient_checkpointing
--checkpointing_steps 500
--checkpointing_limit 2
--enable_slicing
--enable_tiling"

# Optimizer arguments

optimizer_cmd="--optimizer adamw
--lr 2e-5
--lr_scheduler constant_with_warmup
--lr_warmup_steps 100
--lr_num_cycles 1
--beta1 0.9
--beta2 0.95
--weight_decay 1e-4
--epsilon 1e-8
--max_grad_norm 1.0"

# Miscellaneous arguments

miscellaneous_cmd="--tracker_name finetrainers-hunyuan-video
--output_dir $OUTPUT_DIR
--nccl_timeout 1800
--report_to wandb"

cmd="accelerate launch --config_file yavzan_compiled_1.yaml --gpu_ids $GPU_IDS train.py
$model_cmd
$dataset_cmd
$dataloader_cmd
$diffusion_cmd
$training_cmd
$optimizer_cmd
$miscellaneous_cmd"

echo "Running command: $cmd"
eval $cmd
echo -ne "-------------------- Finished executing script --------------------\n\n"

Contents of the accelerate config yavzan_compiled_1.yaml:

compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'yes'
enable_cpu_affinity: false
gpu_ids: 0,1
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false

Expected behavior

Running command: accelerate launch --config_file yavzan_compiled_1.yaml --gpu_ids 0,1 train.py --model_name hunyuan_video --pretrained_model_name_or_path /root/.cache/huggingface/hub/models--hunyuanvideo-community--HunyuanVideo/snapshots/e8c2aaa66fe3742a32c11a6766aecbf07c56e773 --data_root /workspace/finetrainers/video-dataset-disney --video_column videos.txt --caption_column prompt.txt --id_token afkx --video_resolution_buckets 17x512x768 --caption_dropout_p 0.05 --dataloader_num_workers 0 --training_type lora --seed 42 --mixed_precision bf16 --batch_size 2 --train_steps 2 --rank 16 --lora_alpha 128 --target_modules to_q to_k to_v to_out.0 --gradient_accumulation_steps 1 --gradient_checkpointing --checkpointing_steps 500 --checkpointing_limit 2 --enable_slicing --enable_tiling --optimizer adamw --lr 2e-5 --lr_scheduler constant_with_warmup --lr_warmup_steps 100 --lr_num_cycles 1 --beta1 0.9 --beta2 0.95 --weight_decay 1e-4 --epsilon 1e-8 --max_grad_norm 1.0 --tracker_name finetrainers-hunyuan-video --output_dir /workspace/finetrainers/output/ --nccl_timeout 1800 --report_to wandb
01/16/2025 08:24:08 - INFO - finetrainers - Initialized FineTrainers
01/16/2025 08:24:08 - INFO - finetrainers - Distributed environment: DistributedType.MULTI_GPU Backend: nccl
Num processes: 2
Process index: 0
Local process index: 0
Device: cuda:0

Mixed precision type: bf16

01/16/2025 08:24:08 - INFO - finetrainers - Initializing dataset and dataloader
01/16/2025 08:24:08 - INFO - finetrainers - Initializing models
01/16/2025 08:24:08 - INFO - finetrainers - Distributed environment: DistributedType.MULTI_GPU Backend: nccl
Num processes: 2
Process index: 1
Local process index: 1
Device: cuda:1

Mixed precision type: bf16

Loading checkpoint shards: 100%|██████████| 4/4 [00:15<00:00, 3.87s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:15<00:00, 3.94s/it]
01/16/2025 08:24:26 - INFO - finetrainers - Initializing trainable parameters
01/16/2025 08:24:33 - INFO - finetrainers - Initializing optimizer and lr scheduler
01/16/2025 08:24:35 - INFO - finetrainers - Initializing trackers
01/16/2025 08:24:35 - DEBUG - git.cmd - Popen(['git', 'rev-parse', '--show-toplevel'], cwd=/workspace/finetrainers, stdin=None, shell=False, universal_newlines=False)
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
01/16/2025 08:24:35 - DEBUG - git.cmd - Popen(['git', 'cat-file', '--batch-check'], cwd=/workspace/finetrainers, stdin=, shell=False, universal_newlines=False)
wandb: Tracking run with wandb version 0.19.2
wandb: W&B syncing is set to offline in this directory.
wandb: Run wandb online or set WANDB_MODE=online to enable cloud syncing.
01/16/2025 08:24:35 - DEBUG - accelerate.tracking - Initialized WandB project finetrainers-hunyuan-video
01/16/2025 08:24:35 - DEBUG - accelerate.tracking - Make sure to log any initial configurations with self.store_init_configuration before training!
01/16/2025 08:24:35 - DEBUG - accelerate.tracking - Stored initial configuration hyperparameters to WandB
01/16/2025 08:24:35 - INFO - finetrainers - Starting training
01/16/2025 08:24:35 - INFO - finetrainers - Memory before training start: {
"memory_allocated": 38.661,
"memory_reserved": 60.031,
"max_memory_allocated": 39.211,
"max_memory_reserved": 60.031
}
01/16/2025 08:24:35 - INFO - finetrainers - Training configuration: {
"trainable parameters": 20447232,
"total samples": 69,
"train epochs": 1,
"train steps": 2,
"batches per device": 2,
"total batches observed per epoch": 18,
"train batch size": 4,
"gradient accumulation steps": 1
}
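For reference, the reported "train batch size" of 4 is the effective global batch size, computed as per-device batch size × number of processes × gradient accumulation steps. A quick check of the numbers from the configuration above:

```python
# Effective (global) train batch size as Accelerate-style trainers report it:
# per-device batch size * number of processes * gradient accumulation steps.
per_device_batch_size = 2        # --batch_size 2
num_processes = 2                # two H100 GPUs
gradient_accumulation_steps = 1  # --gradient_accumulation_steps 1

train_batch_size = per_device_batch_size * num_processes * gradient_accumulation_steps
print(train_batch_size)  # → 4, matching "train batch size": 4 in the log
```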
Training steps:   0%|          | 0/2 [00:00<?, ?it/s]
01/16/2025 08:24:35 - DEBUG - finetrainers - Starting epoch (1/1)
01/16/2025 08:24:37 - DEBUG - finetrainers - Starting step 1
01/16/2025 08:24:39 - ERROR - finetrainers - An error occurred during training: The expanded size of the tensor (24) must match the existing size (2) at non-singleton dimension 1. Target sizes: [2, 24, 7936, 7936]. Tensor sizes: [2, 7936, 7936]
01/16/2025 08:24:39 - ERROR - finetrainers - Traceback (most recent call last):
File "/workspace/finetrainers/train.py", line 35, in main
trainer.train()
File "/workspace/finetrainers/finetrainers/trainer.py", line 782, in train
pred = self.model_config["forward_pass"](
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/finetrainers/finetrainers/hunyuan_video/hunyuan_video_lora.py", line 229, in forward_pass
denoised_latents = transformer(
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1643, in forward
else self._run_ddp_forward(*inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1459, in _run_ddp_forward
return self.module(*inputs, **kwargs) # type: ignore[index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/accelerate/utils/operations.py", line 823, in forward
return model_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/accelerate/utils/operations.py", line 811, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/diffusers/models/transformers/transformer_hunyuan_video.py", line 740, in forward
hidden_states, encoder_hidden_states = torch.utils.checkpoint.checkpoint(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 496, in checkpoint
ret = function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/diffusers/models/transformers/transformer_hunyuan_video.py", line 733, in custom_forward
return module(*inputs)
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/diffusers/models/transformers/transformer_hunyuan_video.py", line 478, in forward
attn_output, context_attn_output = self.attn(
^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 588, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/diffusers/models/transformers/transformer_hunyuan_video.py", line 117, in __call__
hidden_states = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: The expanded size of the tensor (24) must match the existing size (2) at non-singleton dimension 1. Target sizes: [2, 24, 7936, 7936]. Tensor sizes: [2, 7936, 7936]

01/16/2025 08:24:40 - ERROR - finetrainers - An error occurred during training: The expanded size of the tensor (24) must match the existing size (2) at non-singleton dimension 1. Target sizes: [2, 24, 7936, 7936]. Tensor sizes: [2, 7936, 7936]
[identical traceback repeated on the second training process]
Training steps: 0%| | 0/2 [00:04<?, ?it/s]
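The failure in F.scaled_dot_product_attention is a mask-broadcasting problem: with batch_size 2, the attention mask reaches the attention call as a 3-D tensor of shape (batch, seq, seq), which cannot broadcast against the 4-D attention scores of shape (batch, heads, seq, seq). Aligned from the right, the batch dimension (2) lines up with the heads dimension (24), which is exactly the mismatch in the log. A minimal sketch with reduced shapes (illustrative only, not the finetrainers code; the usual fix is to give the mask a singleton heads dimension):

```python
import torch
import torch.nn.functional as F

# Reduced from the log's shapes (batch=2, heads=24, seq=7936).
batch, heads, seq, dim = 2, 4, 8, 16
q = torch.randn(batch, heads, seq, dim)
k = torch.randn(batch, heads, seq, dim)
v = torch.randn(batch, heads, seq, dim)

# A 3-D mask (batch, seq, seq) does not broadcast against the 4-D
# attention scores (batch, heads, seq, seq): aligned from the right,
# dim 1 pairs batch with heads, reproducing the RuntimeError above.
mask_3d = torch.ones(batch, seq, seq, dtype=torch.bool)
try:
    F.scaled_dot_product_attention(q, k, v, attn_mask=mask_3d)
except RuntimeError as e:
    print("broadcast error:", e)

# Inserting a singleton heads dimension makes the mask broadcastable.
mask_4d = mask_3d.unsqueeze(1)  # (batch, 1, seq, seq)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask_4d)
print(out.shape)  # torch.Size([2, 4, 8, 16])
```

If upgrading Diffusers does not resolve it, checking that the mask passed into scaled_dot_product_attention is 4-D is a reasonable first debugging step.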

@a-r-r-o-w
Owner

Could you try upgrading your Diffusers version, or installing it from the main branch? I believe this issue has already been fixed.

@Yavuzhan-Baykara
Author

I updated the Diffusers version, but that didn't work. After reinstalling CUDA, it was fixed. Thank you for your response.
