
[XPU] max1100 workflow update for docker and software #7003

Merged — 8 commits, Feb 5, 2025

Conversation

Liangliang-Ma (Contributor)

  1. update intel oneAPI basekit to 2025.0
  2. update torch/ipex/oneccl to 2.5
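
Concretely, a bump like this typically touches the CI image's base tag and the pinned wheel versions. A minimal Dockerfile sketch of the change, assuming a hypothetical base-image tag and package names (the PR's actual workflow files may differ):

```dockerfile
# Hypothetical sketch of the version bump described above; the base-image tag
# and package pins are assumptions, not taken from the PR's workflow files.
FROM intel/oneapi-basekit:2025.0.0-0-devel-ubuntu22.04

# Pin torch, Intel Extension for PyTorch (ipex), and the oneCCL bindings to 2.5.
RUN pip install \
    torch==2.5.0 \
    intel-extension-for-pytorch==2.5.0 \
    oneccl-bind-pt==2.5.0
```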

@loadams (Collaborator) commented Feb 5, 2025

@loadams loadams added this pull request to the merge queue Feb 5, 2025
@loadams loadams removed this pull request from the merge queue due to a manual request Feb 5, 2025
@loadams loadams merged commit e7fc598 into deepspeedai:master Feb 5, 2025
10 checks passed
tjruwase pushed a commit that referenced this pull request Feb 6, 2025
1. update intel oneAPI basekit to 2025.0
2. update torch/ipex/oneccl to 2.5

Signed-off-by: Olatunji Ruwase <[email protected]>
fitzjalen pushed a commit to fitzjalen/DeepSpeed that referenced this pull request Feb 6, 2025

1. update intel oneAPI basekit to 2025.0
2. update torch/ipex/oneccl to 2.5
siqi654321 pushed a commit to siqi654321/DeepSpeed that referenced this pull request Feb 7, 2025

1. update intel oneAPI basekit to 2025.0
2. update torch/ipex/oneccl to 2.5

Signed-off-by: siqi <[email protected]>
loadams pushed a commit that referenced this pull request Feb 7, 2025
1. update intel oneAPI basekit to 2025.0
2. update torch/ipex/oneccl to 2.5

Signed-off-by: Logan Adams <[email protected]>
traincheck-team pushed a commit to traincheck-team/DeepSpeed that referenced this pull request Feb 9, 2025

1. update intel oneAPI basekit to 2025.0
2. update torch/ipex/oneccl to 2.5
Labels: none
Projects: none

2 participants