
update readme.md and performance.md #258

Merged 7 commits into gh/tianyu-l/9/base on Apr 24, 2024

Conversation

@tianyu-l (Contributor) commented on Apr 22, 2024

Stack from ghstack (oldest at bottom):

Include llama3 performance metrics.

tianyu-l added a commit that referenced this pull request Apr 22, 2024
ghstack-source-id: 1a616ac8fa3b3060ca568d04dff20626259dcd8e
Pull Request resolved: #258
@facebook-github-bot added the CLA Signed label on Apr 22, 2024
Include llama3 performance metrics.

[ghstack-poisoned]
tianyu-l added a commit that referenced this pull request Apr 22, 2024
ghstack-source-id: 7c6c6898f662142c29a535766e6efa8e0e13d61f
Pull Request resolved: #258
@@ -1,23 +1,43 @@
To demonstrate the effectiveness of techniques used in the torchtitan, we report both the infra metrics and loss curves of the LLaMa 13B and the LLaMa 70B training on 64 A100 (80GB memory) GPUs. We report infra metrics achieved by FSDP2 (1D parallelism) under various configurations, and loss curves for both 1D parallelism (FSDP2) and 2D parallelism (FSDP2 + Tensor Parallel) training.
To demonstrate the effectiveness of techniques used in torchtitan, we report both the infra metrics and loss curves of LLaMa 2 (13B and 70B) and LLaMa 3 (8B and 70B) training on 64 A100 (80GB memory) GPUs. We report infra metrics achieved by FSDP2 (1D parallelism) under various configurations, and loss curves for both 1D parallelism (FSDP2) and 2D parallelism (FSDP2 + Tensor Parallel) training.
Collaborator:
For FSDP2, I think it's worth linking to the FSDP2 readme.
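
Since the paragraph in this hunk contrasts 1D (FSDP2) and 2D (FSDP2 + Tensor Parallel) training, here is a minimal sketch of how such a 2D setup can be composed with PyTorch's DeviceMesh, `fully_shard`, and DTensor tensor-parallel APIs. The toy module, the 8 x 8 mesh shape (for 64 GPUs), and the parallelize plan are illustrative assumptions, not torchtitan's actual `parallelize_llama` code.

```python
# Illustrative sketch only -- assumes a torchrun launch on 64 GPUs and a recent
# PyTorch nightly with FSDP2 (`fully_shard`) and DTensor tensor parallelism.
import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed._composable.fsdp import fully_shard
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)


class ToyFFN(nn.Module):
    """Stand-in for a transformer feed-forward block (not torchtitan's model)."""

    def __init__(self, dim: int = 4096, hidden: int = 11008):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden, bias=False)
        self.w2 = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.w2(torch.relu(self.w1(x)))


# 2D mesh: outer dim for data parallel (FSDP), inner dim for tensor parallel.
mesh_2d = init_device_mesh("cuda", (8, 8), mesh_dim_names=("dp", "tp"))
model = ToyFFN().cuda()

# Apply tensor parallelism first (w1 sharded column-wise, w2 row-wise) ...
parallelize_module(
    model,
    mesh_2d["tp"],
    {"w1": ColwiseParallel(), "w2": RowwiseParallel()},
)
# ... then shard the remaining parameters with FSDP2 over the dp mesh dimension.
fully_shard(model, mesh=mesh_2d["dp"])
```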

| 70B | 2 | full | 353 | 50.5% |

We mostly use local batch size 2 (global batch size = local batch size 2 * number of FSDP ranks 64 = 128) in the experiments, because it mimics the small local batch size in large scaled training, and moreoever allows us to compare 1D (FSDP) and 2D (FSDP + TP) training under the same global batch size on both 13B and 70B LLaMa models, without the out-of-memory (OOM) issue. In fact, for the 70B model with full activation checkpointing, the MFU can go up to 54% when local batch size is higher (but before OOM happens).
We mostly use local batch size 2 (global batch size 128) in the experiments, to keep the same number of tokens per training iteration between LLaMa 2 and LLaMa 3 (since the default sequence length in LLaMa 2 is 4096 which is halved compared with LLaMa 3). In fact, for LLaMa 2 70B model with full activation checkpointing, the MFU can go up to 54% when local batch size is higher (but before OOM happens).
Collaborator:
"LLaMa 2/3" -> "Llama 2/3"


Next we show the loss curves for LLaMa 2 13B and LLaMa 2 70B training with both 1D parallelism (FSDP2) and 2D parallelism (FSDP2 + Tensor Parallel). All the four models are trained 3000 steps with global batch size 128. In terms of activation checkpointing (AC) configs, the LLaMa 2 13B training jobs use selective op AC, whereas the LLaMa 70B training jobs use full AC. The results are shown in the picture (a TensorBoard screenshot) below[^3].
Collaborator:
ditto, "Llama"
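
The loss-curve paragraph above distinguishes selective op AC from full AC. As a rough illustration of what "full AC" means mechanically, below is a minimal sketch that wraps each transformer block in PyTorch's checkpoint wrapper so activations are recomputed during the backward pass. The `model.layers` container and block structure are placeholder assumptions; torchtitan's own AC application (including the selective-op policy) lives in its parallelization code and is not reproduced here.

```python
# Minimal full-activation-checkpointing sketch (placeholder module layout, not torchtitan's code).
import torch.nn as nn
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    checkpoint_wrapper,
)


def apply_full_ac(model: nn.Module) -> None:
    """Wrap every child block of `model.layers` with activation checkpointing.

    Full AC recomputes each block's activations in the backward pass, trading
    extra compute for a much smaller activation-memory footprint (which is why
    the 70B runs above use it while the 13B runs use selective op AC).
    """
    for name, block in list(model.layers.named_children()):
        model.layers.register_module(name, checkpoint_wrapper(block))
```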


![image](../assets/images/loss_curves.png)
[^1]: We note that on 128 A100 GPUs, the MFU of LLaMa 3 70B training can go up to 50.9%.
Collaborator:
Llama2?

Contributor (PR author):
This in fact is for Llama 3

@awgu (Contributor) commented on Apr 23, 2024:

Should we change these "LLaMa" to "Llama"? (It could be worth a find and replace.)

README.md Outdated
@@ -49,9 +49,9 @@ pip install -r requirements.txt

### Downloading a tokenizer.model

`torchtitan` currently supports training Llama3 (8B, 70B), and Llama2 (13B, 70B) out of the box. To get started training these models, we need to download a tokenizer.model. Follow the instructions on the official [meta-llama](https://huggingface.co/meta-llama/Meta-Llama-3-8B) repository to ensure you have access to the Llama model weights.
`torchtitan` currently supports training Llama 3 (8B, 70B), and Llama 2 (13B, 70B) out of the box. To get started training these models, we need to download a tokenizer.model. Follow the instructions on the official [meta-llama](https://huggingface.co/meta-llama/Meta-Llama-3-8B) repository to ensure you have the access to Llama model weights.
Contributor:
nit: I think the original placement of "the" in "you have access to the Llama model weights" sounds more natural than the new placement.
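
For the tokenizer step discussed in this hunk, one hedged way to fetch `tokenizer.model` programmatically is via `huggingface_hub`, after the license has been accepted on the gated meta-llama repository. The in-repo path `original/tokenizer.model` and the destination directory below are assumptions for illustration; the README's official instructions take precedence.

```python
# Sketch: fetch Llama 3 8B's tokenizer.model with huggingface_hub (gated repo --
# requires a Hugging Face token with access granted). Paths are assumptions,
# not the README's canonical workflow.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="meta-llama/Meta-Llama-3-8B",
    filename="original/tokenizer.model",        # assumed location inside the repo
    local_dir="torchtitan/datasets/tokenizer",  # assumed destination directory
    token="hf_...",                             # your Hugging Face access token
)
print(local_path)
```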

Include llama3 performance metrics.

[ghstack-poisoned]
tianyu-l added a commit that referenced this pull request Apr 23, 2024
ghstack-source-id: c6485a3119b85b9a81c04a68fa6070a3e62cd839
Pull Request resolved: #258
Include llama3 performance metrics.

[ghstack-poisoned]
tianyu-l added a commit that referenced this pull request Apr 23, 2024
ghstack-source-id: dbf44a759c9544f5b46a2c71789132d64385ed77
Pull Request resolved: #258
README.md Outdated
#### (4/18/2024): `torchtitan` is now public but in a pre-release state and under development.
Currently we showcase pre-training Llama2 models (LLMs) of various sizes from scratch. `torchtitan` is tested and verified with the PyTorch nightly version `torch-2.4.0.dev20240412`. (We recommend latest PyTorch nightly).
#### (4/23/2024): `torchtitan` is now public but in a pre-release state and under development.
Currently we showcase pre-training Llama 3 and Llama 2 models (LLMs) of various sizes from scratch. `torchtitan` is tested and verified with the PyTorch nightly version `torch-2.4.0.dev20240412`. (We recommend latest PyTorch nightly).
Collaborator:
nit: maybe let's emphasize Llama 3 and Llama 2

Include llama3 performance metrics.

[ghstack-poisoned]
tianyu-l added a commit that referenced this pull request Apr 24, 2024
ghstack-source-id: 17aed01a620f1dabb65b4ff2944ad30a01dbb1e3
Pull Request resolved: #258
@lessw2020 (Contributor) left a comment:

Thanks for adding this!
Added a couple grammatical corrections but overall looks good!

Include llama3 performance metrics.

[ghstack-poisoned]
tianyu-l added a commit that referenced this pull request Apr 24, 2024
ghstack-source-id: dbf54574eae4c0d1447a40e1b4f65eb8ee46bff7
Pull Request resolved: #258
Include llama3 performance metrics.

[ghstack-poisoned]
tianyu-l added a commit that referenced this pull request Apr 24, 2024
ghstack-source-id: a9bd1d33bf7bc9f5055a645c9639bcbe628afbfb
Pull Request resolved: #258
@tianyu-l merged commit c9488ca into gh/tianyu-l/9/base on Apr 24, 2024
4 checks passed
tianyu-l added a commit that referenced this pull request Apr 24, 2024
ghstack-source-id: a9bd1d33bf7bc9f5055a645c9639bcbe628afbfb
Pull Request resolved: #258
@tianyu-l deleted the gh/tianyu-l/9/head branch on April 24, 2024 at 17:58
tianyu-l added a commit to tianyu-l/torchtitan_intern24 that referenced this pull request Aug 16, 2024
ghstack-source-id: a9bd1d33bf7bc9f5055a645c9639bcbe628afbfb
Pull Request resolved: pytorch#258
philippguevorguian pushed a commit to YerevaNN/YNNtitan that referenced this pull request Aug 17, 2024
ghstack-source-id: a9bd1d33bf7bc9f5055a645c9639bcbe628afbfb
Pull Request resolved: pytorch#258
Labels
CLA Signed: managed by the Meta Open Source bot.
5 participants