Bump the pip group across 9 directories with 17 updates #1

Merged

akaday merged 1 commit into master from dependabot/pip/compression/gpt2/pip-43c0fd7c67 on Nov 23, 2024

Conversation

dependabot[bot] commented on behalf of github on Nov 22, 2024

Bumps the pip group with 1 update in the /compression/gpt2 directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/automatic-speech-recognition directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/fill-mask directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/text-generation directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/text-generation/run-generation-script directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/text2text-generation directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/translation directory: transformers.
Bumps the pip group with 2 updates in the /training/HelloDeepSpeed directory: transformers and tqdm.
Bumps the pip group with 16 updates in the /training/MoQ/huggingface-transformers/examples/research_projects/lxmert directory:

| Package | From | To |
| --- | --- | --- |
| torch | 1.13.1 | 2.2.0 |
| numpy | 1.19.2 | 1.22.0 |
| tqdm | 4.48.2 | 4.66.3 |
| certifi | 2020.6.20 | 2024.7.4 |
| future | 0.18.2 | 0.18.3 |
| idna | 2.8 | 3.7 |
| jinja2 | 2.11.2 | 3.1.4 |
| joblib | 0.16.0 | 1.2.0 |
| jupyter-core | 4.6.3 | 4.11.2 |
| nbconvert | 6.0.1 | 6.5.1 |
| notebook | 6.1.5 | 6.4.12 |
| pillow | 7.2.0 | 10.3.0 |
| pyarrow | 1.0.1 | 14.0.1 |
| requests | 2.22.0 | 2.32.2 |
| tornado | 6.0.4 | 6.4.2 |
| urllib3 | 1.25.8 | 1.26.19 |

Updates transformers from 4.15.0 to 4.38.0

Release notes

Sourced from transformers's releases.

v4.38: Gemma, Depth Anything, Stable LM; Static Cache, HF Quantizer, AQLM

New model additions

💎 Gemma 💎

Gemma is a new open-source language model family from Google AI that comes in 2B and 7B variants. The release includes both pre-trained and instruction fine-tuned versions, and you can use them via the AutoModelForCausalLM and GemmaForCausalLM classes or the pipeline interface!

Read more about it in the Gemma release blogpost: https://hf.co/blog/gemma

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# load the model in fp16, sharded across the available devices
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

You can use the model with Flash Attention 2, SDPA, the static cache, and the quantization API for further optimizations!

  • Flash Attention 2
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
# requires the flash-attn package and a GPU that supports FlashAttention 2
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", device_map="auto", torch_dtype=torch.float16, attn_implementation="flash_attention_2"
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)

  • bitsandbytes-4bit
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
# requires the bitsandbytes package; weights are loaded in 4-bit precision
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", device_map="auto", load_in_4bit=True
)
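
As an illustrative aside (not part of the release notes), the pipeline interface mentioned above could be exercised roughly like this; the generation settings are only examples:

from transformers import pipeline

# illustrative sketch: text-generation pipeline with the 2B Gemma checkpoint
generator = pipeline("text-generation", model="google/gemma-2b", device_map="auto")
result = generator("Write me a poem about Machine Learning.", max_new_tokens=64)
print(result[0]["generated_text"])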

... (truncated)

Commits
  • 08ab54a [ gemma] Adds support for Gemma 💎 (#29167)
  • 2de9314 [Maskformer] safely get backbone config (#29166)
  • 476957b 🚨 Llama: update rope scaling to match static cache changes (#29143)
  • 7a4bec6 Release: 4.38.0
  • ee3af60 Add support for fine-tuning CLIP-like models using contrastive-image-text exa...
  • 0996a10 Revert low cpu mem tie weights (#29135)
  • 15cfe38 [Core tokenization] add_dummy_prefix_space option to help with latest is...
  • efdd436 FIX [PEFT / Trainer ] Handle better peft + quantized compiled models (#29...
  • 5e95dca [cuda kernels] only compile them when initializing (#29133)
  • a7755d2 Generate: unset GenerationConfig parameters do not raise warning (#29119)
  • Additional commits viewable in compare view

Updates transformers from 4.21.2 to 4.38.0 (this update repeats for six of the directories above)

Release notes and commits: identical to the v4.38.0 transformers entry above.

Updates transformers from 4.5.1 to 4.38.0

Release notes and commits: identical to the v4.38.0 transformers entry above.

Updates tqdm from 4.62.3 to 4.66.3

Release notes

Sourced from tqdm's releases.

tqdm v4.66.3 stable

tqdm v4.66.2 stable

  • pandas: add DataFrame.progress_map (#1549)
  • notebook: fix HTML padding (#1506)
  • keras: fix resuming training when verbose>=2 (#1508)
  • fix format_num negative fractions missing leading zero (#1548)
  • fix Python 3.12 DeprecationWarning on import (#1519)
  • linting: use f-strings (#1549)
  • update tests (#1549)
  • CI: bump actions (#1549)

tqdm v4.66.1 stable

  • fix utils.envwrap types (#1493 <- #1491, #1320 <- #966, #1319)
    • e.g. cloudwatch & kubernetes workaround: export TQDM_POSITION=-1
  • drop mentions of unsupported Python versions

tqdm v4.66.0 stable

  • environment variables to override defaults (TQDM_*) (#1491 <- #1061, #950 <- #614, #1318, #619, #612, #370)
    • e.g. in CI jobs, export TQDM_MININTERVAL=5 to avoid log spam (see the sketch after this list)
    • add tests & docs for tqdm.utils.envwrap
  • fix & update CLI completion
  • fix & update API docs
  • minor code tidy: replace os.path => pathlib.Path
  • fix docs image hosting
  • release with CI bot account again (cli/cli#6680)
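
A minimal sketch (not from the release notes) of the environment-variable override described above, assuming the variable is set before the progress bar is created:

import os

# assumption: TQDM_* variables override tqdm() keyword defaults (tqdm >= 4.66.0)
os.environ["TQDM_MININTERVAL"] = "5"  # refresh at most every 5 seconds, e.g. in CI logs

from tqdm import tqdm

for _ in tqdm(range(10_000_000)):
    pass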

tqdm v4.65.2 stable

  • exclude examples from distributed wheel (#1492)

tqdm v4.65.1 stable

  • migrate setup.{cfg,py} => pyproject.toml (#1490)
    • fix asv benchmarks
    • update docs
  • fix snap build (#1490)
  • fix & update tests (#1490)
    • fix flaky notebook tests
    • bump pre-commit
    • bump workflow actions

tqdm v4.65.0 stable

  • add Python 3.11 and drop Python 3.6 support (#1439, #1419, #502 <- #720, #620)
  • misc code & docs tidy
  • fix & update CI workflows & tests

tqdm v4.64.1 stable

... (truncated)

Commits

Updates torch from 1.13.1 to 2.2.0

Release notes

Sourced from torch's releases.

PyTorch 2.2: FlashAttention-v2, AOTInductor

PyTorch 2.2 Release Notes

  • Highlights
  • Backwards Incompatible Changes
  • Deprecations
  • New Features
  • Improvements
  • Bug fixes
  • Performance
  • Documentation

Highlights

We are excited to announce the release of PyTorch® 2.2! PyTorch 2.2 offers ~2x performance improvements to scaled_dot_product_attention via FlashAttention-v2 integration, as well as AOTInductor, a new ahead-of-time compilation and deployment tool built for non-python server-side deployments.
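
A minimal sketch (not from the release notes) of the scaled_dot_product_attention call this refers to; the tensor shapes are illustrative, and the FlashAttention-2 kernel is only selected automatically on supported GPUs:

import torch
import torch.nn.functional as F

# toy shapes: (batch, heads, sequence length, head dimension)
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# PyTorch dispatches to the fastest available backend (FlashAttention-2 on supported GPUs)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 128, 64])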

This release also includes improved torch.compile support for Optimizers, a number of new inductor optimizations, and a new logging mechanism called TORCH_LOGS.

Please note that we are deprecating macOS x86 support, and PyTorch 2.2.x will be the last version that supports macOS x64.

Along with 2.2, we are also releasing a series of updates to the PyTorch domain libraries. More details can be found in the library updates blog.

This release is composed of 3,628 commits and 521 contributors since PyTorch 2.1. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.2. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.

Summary:

  • scaled_dot_product_attention (SDPA) now supports FlashAttention-2, yielding around 2x speedups compared to previous versions.
  • PyTorch 2.2 introduces a new ahead-of-time extension of TorchInductor called AOTInductor, designed to compile and deploy PyTorch programs for non-python server-side.
  • torch.distributed supports a new abstraction for initializing and representing ProcessGroups called device_mesh.
  • PyTorch 2.2 ships a standardized, configurable logging mechanism called TORCH_LOGS.
  • A number of torch.compile improvements are included in PyTorch 2.2, including improved support for compiling Optimizers and improved TorchInductor fusion and layout optimizations.
  • Please note that we are deprecating macOS x86 support, and PyTorch 2.2.x will be the last version that supports macOS x64.
  • torch.ao.quantization now offers a prototype torch.export based flow

... (truncated)

Commits

Updates numpy from 1.19.2 to 1.22.0

Release notes

Sourced from numpy's releases.

v1.22.0

NumPy 1.22.0 Release Notes

NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

  • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
  • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
  • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
  • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature (see the sketch after this list).
  • A new configurable allocator for use by downstream projects.
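
A small sketch (not from the release notes) of the new quantile method selection; the data values are arbitrary:

import numpy as np

a = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])

# NumPy 1.22 renames `interpolation` to `method` and adds new estimators
print(np.quantile(a, 0.5, method="linear"))           # classic default
print(np.quantile(a, 0.5, method="median_unbiased"))  # one of the newly added methods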

These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

Expired deprecations

Deprecated numeric style dtype strings have been removed

Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

(gh-19539)

Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

(gh-19615)
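
A small sketch (not from the release notes) of the suggested replacement for the removed mafromtxt, using made-up inline data:

import io
import numpy as np

data = io.StringIO("1,2,--\n4,--,6")

# genfromtxt with usemask=True returns a masked array, replacing the removed mafromtxt
arr = np.genfromtxt(data, delimiter=",", missing_values="--", usemask=True)
print(arr)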

... (truncated)

Commits

Updates tqdm from 4.48.2 to 4.66.3

Release notes and commits: identical to the tqdm entry above.

    Description has been truncated

Bumps the pip group with 1 update in the /compression/gpt2 directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/automatic-speech-recognition directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/fill-mask directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/text-generation directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/text-generation/run-generation-script directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/text2text-generation directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/translation directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 2 updates in the /training/HelloDeepSpeed directory: [transformers](https://github.com/huggingface/transformers) and [tqdm](https://github.com/tqdm/tqdm).
Bumps the pip group with 16 updates in the /training/MoQ/huggingface-transformers/examples/research_projects/lxmert directory:

| Package | From | To |
| --- | --- | --- |
| [torch](https://github.com/pytorch/pytorch) | `1.13.1` | `2.2.0` |
| [numpy](https://github.com/numpy/numpy) | `1.19.2` | `1.22.0` |
| [tqdm](https://github.com/tqdm/tqdm) | `4.48.2` | `4.66.3` |
| [certifi](https://github.com/certifi/python-certifi) | `2020.6.20` | `2024.7.4` |
| [future](https://github.com/PythonCharmers/python-future) | `0.18.2` | `0.18.3` |
| [idna](https://github.com/kjd/idna) | `2.8` | `3.7` |
| [jinja2](https://github.com/pallets/jinja) | `2.11.2` | `3.1.4` |
| [joblib](https://github.com/joblib/joblib) | `0.16.0` | `1.2.0` |
| [jupyter-core](https://github.com/jupyter/jupyter_core) | `4.6.3` | `4.11.2` |
| [nbconvert](https://github.com/jupyter/nbconvert) | `6.0.1` | `6.5.1` |
| [notebook](https://github.com/jupyter/notebook) | `6.1.5` | `6.4.12` |
| [pillow](https://github.com/python-pillow/Pillow) | `7.2.0` | `10.3.0` |
| [pyarrow](https://github.com/apache/arrow) | `1.0.1` | `14.0.1` |
| [requests](https://github.com/psf/requests) | `2.22.0` | `2.32.2` |
| [tornado](https://github.com/tornadoweb/tornado) | `6.0.4` | `6.4.2` |
| [urllib3](https://github.com/urllib3/urllib3) | `1.25.8` | `1.26.19` |



Updates `transformers` from 4.15.0 to 4.38.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.15.0...v4.38.0)

Updates `transformers` from 4.21.2 to 4.38.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.21.2...v4.38.0)

Updates `transformers` from 4.21.2 to 4.38.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.21.2...v4.38.0)

Updates `transformers` from 4.21.2 to 4.38.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.21.2...v4.38.0)

Updates `transformers` from 4.21.2 to 4.38.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.21.2...v4.38.0)

Updates `transformers` from 4.21.2 to 4.38.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.21.2...v4.38.0)

Updates `transformers` from 4.21.2 to 4.38.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.21.2...v4.38.0)

Updates `transformers` from 4.5.1 to 4.38.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.5.1...v4.38.0)

Updates `tqdm` from 4.62.3 to 4.66.3
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](tqdm/tqdm@v4.62.3...v4.66.3)

Updates `torch` from 1.13.1 to 2.2.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](pytorch/pytorch@v1.13.1...v2.2.0)

Updates `numpy` from 1.19.2 to 1.22.0
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst)
- [Commits](numpy/numpy@v1.19.2...v1.22.0)

Updates `tqdm` from 4.48.2 to 4.66.3
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](tqdm/tqdm@v4.48.2...v4.66.3)

Updates `certifi` from 2020.6.20 to 2024.7.4
- [Commits](certifi/python-certifi@2020.06.20...2024.07.04)

Updates `future` from 0.18.2 to 0.18.3
- [Release notes](https://github.com/PythonCharmers/python-future/releases)
- [Changelog](https://github.com/PythonCharmers/python-future/blob/master/docs/changelog.rst)
- [Commits](PythonCharmers/python-future@v0.18.2...v0.18.3)

Updates `idna` from 2.8 to 3.7
- [Release notes](https://github.com/kjd/idna/releases)
- [Changelog](https://github.com/kjd/idna/blob/master/HISTORY.rst)
- [Commits](kjd/idna@v2.8...v3.7)

Updates `jinja2` from 2.11.2 to 3.1.4
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](pallets/jinja@2.11.2...3.1.4)

Updates `joblib` from 0.16.0 to 1.2.0
- [Release notes](https://github.com/joblib/joblib/releases)
- [Changelog](https://github.com/joblib/joblib/blob/main/CHANGES.rst)
- [Commits](joblib/joblib@0.16.0...1.2.0)

Updates `jupyter-core` from 4.6.3 to 4.11.2
- [Release notes](https://github.com/jupyter/jupyter_core/releases)
- [Changelog](https://github.com/jupyter/jupyter_core/blob/main/CHANGELOG.md)
- [Commits](jupyter/jupyter_core@4.6.3...4.11.2)

Updates `nbconvert` from 6.0.1 to 6.5.1
- [Release notes](https://github.com/jupyter/nbconvert/releases)
- [Changelog](https://github.com/jupyter/nbconvert/blob/main/CHANGELOG.md)
- [Commits](jupyter/nbconvert@6.0.1...6.5.1)

Updates `notebook` from 6.1.5 to 6.4.12
- [Release notes](https://github.com/jupyter/notebook/releases)
- [Changelog](https://github.com/jupyter/notebook/blob/main/CHANGELOG.md)
- [Commits](jupyter/notebook@6.1.5...6.4.12)

Updates `pillow` from 7.2.0 to 10.3.0
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](python-pillow/Pillow@7.2.0...10.3.0)

Updates `pyarrow` from 1.0.1 to 14.0.1
- [Release notes](https://github.com/apache/arrow/releases)
- [Commits](apache/arrow@apache-arrow-1.0.1...go/v14.0.1)

Updates `requests` from 2.22.0 to 2.32.2
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](psf/requests@v2.22.0...v2.32.2)

Updates `tornado` from 6.0.4 to 6.4.2
- [Changelog](https://github.com/tornadoweb/tornado/blob/v6.4.2/docs/releases.rst)
- [Commits](tornadoweb/tornado@v6.0.4...v6.4.2)

Updates `urllib3` from 1.25.8 to 1.26.19
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](urllib3/urllib3@1.25.8...1.26.19)

---
updated-dependencies:
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: tqdm
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: torch
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: numpy
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: tqdm
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: certifi
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: future
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: idna
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: jinja2
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: joblib
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: jupyter-core
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: nbconvert
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: notebook
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: pillow
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: pyarrow
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: requests
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: tornado
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: urllib3
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] added the dependencies label (Pull requests that update a dependency file) on Nov 22, 2024
akaday merged commit d6a2479 into master on Nov 23, 2024
dependabot[bot] deleted the dependabot/pip/compression/gpt2/pip-43c0fd7c67 branch on November 23, 2024 at 21:11