update readme.md and performance.md #258

Merged (7 commits, Apr 24, 2024)
Changes from 2 commits
10 changes: 5 additions & 5 deletions README.md
@@ -14,8 +14,8 @@
[![Welcome to TorchTrain!](assets/images/titan_play_video.jpg)](https://youtu.be/ee5DOEqD35I?si=_B94PbVv0V5ZnNKE "Welcome to TorchTrain!")

## Pre-Release Updates:
#### (4/18/2024): `torchtitan` is now public but in a pre-release state and under development.
Currently we showcase pre-training Llama2 models (LLMs) of various sizes from scratch. `torchtitan` is tested and verified with the PyTorch nightly version `torch-2.4.0.dev20240412`. (We recommend latest PyTorch nightly).
#### (4/23/2024): `torchtitan` is now public but in a pre-release state and under development.
Currently we showcase pre-training Llama 3 and Llama 2 models (LLMs) of various sizes from scratch. `torchtitan` is tested and verified with the PyTorch nightly version `torch-2.4.0.dev20240412`. (We recommend the latest PyTorch nightly).
Collaborator comment:

nit: maybe let's emphasize Llama 3 and Llama 2


Key features available:</br>
1 - [FSDP2 (per param sharding)](docs/fsdp.md) </br>
@@ -49,9 +49,9 @@ pip install -r requirements.txt

### Downloading a tokenizer.model

`torchtitan` currently supports training Llama3 (8B, 70B), and Llama2 (13B, 70B) out of the box. To get started training these models, we need to download a tokenizer.model. Follow the instructions on the official [meta-llama](https://huggingface.co/meta-llama/Meta-Llama-3-8B) repository to ensure you have access to the Llama model weights.
`torchtitan` currently supports training Llama 3 (8B, 70B), and Llama 2 (13B, 70B) out of the box. To get started training these models, we need to download a tokenizer.model. Follow the instructions on the official [meta-llama](https://huggingface.co/meta-llama/Meta-Llama-3-8B) repository to ensure you have the access to Llama model weights.
Contributor comment:

nit: I think the original placement of "the" in "you have access to the Llama model weights" sounds more natural than the new placement.


Once you have confirmed access, you can run the following command to download the Llama2/3 tokenizer to your local machine.
Once you have confirmed access, you can run the following command to download the Llama 3 / Llama 2 tokenizer to your local machine.

```
# pass your hf_token in order to download tokenizer.model
@@ -63,7 +63,7 @@ python torchtitan/datasets/download_tokenizer.py --repo_id meta-llama/Meta-Llama
python torchtitan/datasets/download_tokenizer.py --repo_id meta-llama/Llama-2-13b-hf --hf_token=...
```

Run the llama3 8B model locally on 8 GPUs:
Run the Llama 3 8B model locally on 8 GPUs:

```
CONFIG_FILE="./train_configs/llama3_8b.toml" ./run_llama_train.sh
Binary file added assets/images/llama2_loss_curves.png
Binary file added assets/images/llama3_loss_curves.png
Binary file removed assets/images/loss_curves.png
40 changes: 30 additions & 10 deletions docs/performance.md
@@ -1,23 +1,43 @@
To demonstrate the effectiveness of techniques used in the torchtitan, we report both the infra metrics and loss curves of the LLaMa 13B and the LLaMa 70B training on 64 A100 (80GB memory) GPUs. We report infra metrics achieved by FSDP2 (1D parallelism) under various configurations, and loss curves for both 1D parallelism (FSDP2) and 2D parallelism (FSDP2 + Tensor Parallel) training.
To demonstrate the effectiveness of techniques used in torchtitan, we report both the infra metrics and loss curves of LLaMa 2 (13B and 70B) and LLaMa 3 (8B and 70B) training on 64 A100 (80GB memory) GPUs. We report infra metrics achieved by FSDP2 (1D parallelism) under various configurations, and loss curves for both 1D parallelism (FSDP2) and 2D parallelism (FSDP2 + Tensor Parallel) training.

Collaborator comment:

For FSDP2, I think it's worth linking to the FSDP2.readme

Contributor comment (@awgu, Apr 23, 2024):

Should we change these "LLaMa" to "Llama"? (It could be worth a find and replace.)

Below is the WPS (word per second, or more accurately, token per second) and MFU (model FLOPS utilization) results which torchtitan achieves with FSDP2 on 64 A100 (80GB) GPUs. The way we compute WPS and MFU can be found in `train.py`.

## LLaMa 3 performance numbers

Below are the WPS (words per second, or more accurately, tokens per second) and MFU (model FLOPS utilization) results that torchtitan achieves on LLaMa 3 models with FSDP2 on 64 A100 (80GB) GPUs. The way we compute WPS and MFU can be found in `train.py`.

| Model size | Batch size | Activation checkpointing | WPS | MFU |
| ----- | ----- | ----- | ----- | ----- |
| 8B | 1 | selective layer | 2876 | 56.3% |
| 8B | 1 | selective op | 2973 | 58.2% |
| 70B | 1 | full | 323 | 50.5%[^1] |
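
For readers who want to reproduce the arithmetic behind these numbers, here is a minimal sketch of how WPS and MFU are typically derived; the authoritative formulas are in torchtitan's `train.py`, and the constants below (the A100 BF16 peak FLOPS and the 6N FLOPs-per-token approximation) are assumptions made for illustration only.

```
# Hedged sketch, not torchtitan's exact code (see train.py for the real computation).
def wps(tokens: int, seconds: float) -> float:
    """Tokens processed per second on one GPU over a measurement window."""
    return tokens / seconds

def mfu(flops_per_token: float, tokens_per_second: float,
        peak_flops: float = 312e12) -> float:
    """Model FLOPS utilization: achieved FLOPS / peak hardware FLOPS.
    peak_flops defaults to the A100 BF16 dense peak (312 TFLOPS)."""
    return flops_per_token * tokens_per_second / peak_flops

# Example with the 8B row above: the common 6 * num_params approximation
# omits attention FLOPs, so it lands below the ~58% reported in the table.
print(f"{mfu(6 * 8e9, 2973):.1%}")  # ~45.7%
```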

We use local batch size 1 (global batch size = local batch size 1 * number of FSDP ranks 64 = 64), because it mimics the small local batch size in large-scale training, and moreover allows us to compare 1D (FSDP) and 2D (FSDP + TP) training under the same global batch size on both 8B and 70B LLaMa 3 models, without the out-of-memory (OOM) issue.

Next we show the loss curves for LLaMa 3 8B and LLaMa 3 70B training with both 1D parallelism (FSDP2) and 2D parallelism (FSDP2 + Tensor Parallel). All four models are trained for 3000 steps on the [C4 dataset](https://huggingface.co/datasets/allenai/c4) with global batch size 64. In terms of activation checkpointing (AC) configs, the LLaMa 3 8B training jobs use selective op AC, whereas the LLaMa 3 70B training jobs use full AC. The results are shown in the picture (a TensorBoard screenshot) below.

![image](../assets/images/llama3_loss_curves.png)
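
As background on the activation checkpointing (AC) settings named above, the sketch below shows the general PyTorch pattern for layer-wise ("selective layer") AC. It is an illustration under assumed module names, not torchtitan's implementation; "selective op" AC additionally restricts which operators are recomputed.

```
# Hedged illustration of layer-wise activation checkpointing in PyTorch;
# torchtitan's actual AC policies live in its parallelization code.
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedBlock(nn.Module):
    """Recompute the wrapped block's activations during backward instead of storing them."""
    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block

    def forward(self, x):
        return checkpoint(self.block, x, use_reentrant=False)

def apply_layer_ac(layers: nn.ModuleList, every_n: int = 2) -> nn.ModuleList:
    # every_n=1 approximates "full" AC; every_n=2 checkpoints every other layer.
    return nn.ModuleList(
        CheckpointedBlock(layer) if i % every_n == 0 else layer
        for i, layer in enumerate(layers)
    )
```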


## LLaMa 2 performance numbers

Below are the WPS and MFU results that torchtitan achieves on LLaMa 2 models with FSDP2 on 64 A100 (80GB) GPUs.

| Model size | Batch size | Activation checkpointing | WPS | MFU |
| ----- | ----- | ----- | ----- | ----- |
| 13B | 2 | no | 2162 | 61.1% |
| 13B | 2 | selective layer | 1914 | 54.1% |
| 13B | 2 | selective op | 1904 | 53.8% |
| 70B | 2 | selective layer | OOM | OOM |
| 70B | 2 | selective op | OOM | OOM |
| 70B | 1[^1] | selective op | 355 | 50.8% |
| 70B | 1[^2] | selective op | 355 | 50.8% |
| 70B | 2 | full | 353 | 50.5% |

We mostly use local batch size 2 (global batch size = local batch size 2 * number of FSDP ranks 64 = 128) in the experiments, because it mimics the small local batch size in large scaled training, and moreoever allows us to compare 1D (FSDP) and 2D (FSDP + TP) training under the same global batch size on both 13B and 70B LLaMa models, without the out-of-memory (OOM) issue. In fact, for the 70B model with full activation checkpointing, the MFU can go up to 54% when local batch size is higher (but before OOM happens).
We mostly use local batch size 2 (global batch size 128) in the experiments, to keep the number of tokens per training iteration the same between LLaMa 2 and LLaMa 3 (since the default sequence length in LLaMa 2 is 4096, half of LLaMa 3's 8192). In fact, for the LLaMa 2 70B model with full activation checkpointing, the MFU can go up to 54% when the local batch size is higher (but before OOM happens).
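
A quick sanity check of the token math implied here (the sequence lengths are assumptions taken from the sentence above: 4096 for LLaMa 2 and 8192 for LLaMa 3):

```
# tokens per training iteration = local batch size * FSDP ranks * sequence length
llama2_tokens = 2 * 64 * 4096  # local batch 2, 64 ranks, seq len 4096
llama3_tokens = 1 * 64 * 8192  # local batch 1, 64 ranks, seq len 8192
assert llama2_tokens == llama3_tokens == 524_288
```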

Collaborator comment:

"LLaMa 2/3" -> "Llama 2/3"

Next we show the loss curves for LLaMa 2 13B and LLaMa 2 70B training with both 1D parallelism (FSDP2) and 2D parallelism (FSDP2 + Tensor Parallel). All four models are trained for 3000 steps with global batch size 128. In terms of activation checkpointing (AC) configs, the LLaMa 2 13B training jobs use selective op AC, whereas the LLaMa 2 70B training jobs use full AC. The results are shown in the picture (a TensorBoard screenshot) below[^3].

Collaborator comment:

ditto, "Llama"

Next we show the loss curves for LLaMa 13B and LLaMa 70B training with both 1D parallelism (FSDP2) and 2D parallelism (FSDP2 + Tensor Parallel). All the four models are trained 3000 steps with global batch size 128. In terms of activation checkpointing (AC) configs, the LLaMa 13B training jobs use selective op AC, whereas the LLaMa 70B training jobs use full AC. The results are shown in the picture (a TensorBoard screenshot) below[^2].
![image](../assets/images/llama2_loss_curves.png)

![image](../assets/images/loss_curves.png)
[^1]: We note that on 128 A100 GPUs, the MFU of LLaMa 3 70B training can go up to 50.9%.

Collaborator comment:

Llama2?

Contributor (author) comment:

This in fact is for Llama 3

[^1]: Since the 70B training with local batch size 2 will cause OOM error when selective activation checkpointing is used, we report the local batch size 1 case instead.
[^2]: Since the 70B training with local batch size 2 will cause OOM error when selective activation checkpointing is used, we report the local batch size 1 case instead.

[^2]: One may have noticed that for both 13B and 70B training, 1D parallelism has slightly better convergence than 2D parallelism in the first half of training. We believe this is caused by the stronger shuffling effect introduced by having more FSDP ranks in the 1D parallelism, and the difference in convergence speed should go away after switching to a randomized data loading solution.
[^3]: One may have noticed that for both 13B and 70B training, 1D parallelism has slightly better convergence than 2D parallelism in the first half of training. We believe this is caused by the stronger shuffling effect introduced by having more FSDP ranks in the 1D parallelism, and the difference in convergence speed should go away after switching to a randomized data loading solution.