[iluvatar] swin_transformer-pytorch 1x1 2x8 #340

Merged: 8 commits into FlagOpen:main on Dec 6, 2023

Conversation

cloud9wj (Contributor)

No description provided.

@yuzhou03 (Contributor) commented Nov 28, 2023

According to the training logs, the 1x8 vs. 1x1 speedup is 7.8. 【OK】
The 2x8 vs. 1x8 speedup is 1.97. 【OK】
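
For reference, these ratios are simply the training throughput of the larger configuration divided by that of the smaller one. A minimal sketch of the computation follows; the throughput numbers are hypothetical placeholders, not values from the archived logs:

```python
# Hypothetical per-configuration training throughput (samples/s).
# Placeholder values for illustration only, not taken from the archived logs.
throughput = {
    "1x1": 100.0,   # 1 node, 1 GPU
    "1x8": 780.0,   # 1 node, 8 GPUs
    "2x8": 1536.6,  # 2 nodes, 8 GPUs each
}

# Speedup = throughput of the larger configuration / throughput of the baseline.
speedup_1x8_vs_1x1 = throughput["1x8"] / throughput["1x1"]  # ~7.8 reported above
speedup_2x8_vs_1x8 = throughput["2x8"] / throughput["1x8"]  # ~1.97 reported above

print(f"1x8 vs 1x1 speedup: {speedup_1x8_vs_1x1:.2f}")
print(f"2x8 vs 1x8 speedup: {speedup_2x8_vs_1x8:.2f}")
```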

@yuzhou03 (Contributor)

2x8 【OK】 (screenshot of the 2x8 training log)

@yuzhou03 (Contributor)

1x1 【OK】 (screenshot of the 1x1 training log)


Review comment from a Contributor on the README performance-metrics table:

* Performance metrics (性能指标)

| 配置 (config) | precision | fix_hp | e2e_time | p_whole | p_train | p_core | val_loss | mem |

Please change val_loss to final_acc1.
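
A sketch of how the corrected header would presumably read after that change, assuming the other columns stay unchanged:

| 配置 (config) | precision | fix_hp | e2e_time | p_whole | p_train | p_core | final_acc1 | mem |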

@yuzhou03 (Contributor) commented Nov 30, 2023

The 1x1 and 2x8 training logs have been archived.

The 1x8 training log has been archived. - 2023-12-05

@clveryang (Contributor)

Uploaded the 1x8 log and parameter settings.

@yuzhou03 (Contributor) commented Dec 5, 2023

1x8 【OK】 (screenshot of the 1x8 training log)

@yuzhou03 yuzhou03 merged commit c090e35 into FlagOpen:main Dec 6, 2023
1 check passed
shh2000 added a commit that referenced this pull request Dec 21, 2023
* [kunlunxin] fix tacotron2 running error and add 1x1 & 2x8 config (#346)

* [kunlunxin] fix tacotron2 running error and add 1x1 & 2x8 config

* [kunlunxin] modify tacotron2 test_config

* [kunlunxin] update tacotron2 readme

* [kunlunxin] modify tacotron2 torch.load()

* [iluvatar] swin_transformer-pytorch 1x1 2x8 (#340)

* update iluvatar/swin_transformer-pytorch

* update

* update

* update

* fix batch size mistake in readme

* correct val_loss to final acc1

* add finnal_acc1 and mem in readme

* correct readme mem

---------

Co-authored-by: 魏杰 <[email protected]>
Co-authored-by: 杨智超 <[email protected]>
Co-authored-by: clveryang <[email protected]>

* fix get_system_info for iluvatar_monitor (#351)

Co-authored-by: zhouyu <[email protected]>

* update iluvatar mobilenetv2 config (#356)

Co-authored-by: sen.li <[email protected]>

* Update README.md (#357)

* Update README.md

* Update README.md

* [iluvatar] bertlarge inference case (#353)

* iluvatar bertlarge MLM inference case

* update ixrt readme

---------

Co-authored-by: 杨智超 <[email protected]>

* [mthreads] bert_hf 1x8 (#350)

* support bert_hf fp32/amp/bf16 training for mthreads

* update readme

* prevent overrun

* 1x1/2x8 not support

* 【mthreads】【block】resnet50 training (#246)

* support resnet50 training on mthreads

* fix typo

* support rn50 amp training on mthreads

* add test config (should revert this commit)

* update config & readme

* add get_system_info fn

* update

* 1x1/2x8 not support

---------

Co-authored-by: Zhou Yu <[email protected]>

* fix llama, add TFLOPS log (#358)

* fixllama

* add t/tflops

* [mthreads] deepspeed llama2

* update readme for sdpa

---------

Co-authored-by: jamesruio <[email protected]>
Co-authored-by: swish swish <[email protected]>
Co-authored-by: 魏杰 <[email protected]>
Co-authored-by: 杨智超 <[email protected]>
Co-authored-by: clveryang <[email protected]>
Co-authored-by: Zhou Yu <[email protected]>
Co-authored-by: zhouyu <[email protected]>
Co-authored-by: forestlee95 <[email protected]>
Co-authored-by: sen.li <[email protected]>
Co-authored-by: uuup <[email protected]>
Co-authored-by: clveryang <[email protected]>
Co-authored-by: mingyuanw-mt <[email protected]>
Co-authored-by: shh2000 <[email protected]>