Stars
Google TPU optimizations for transformer models
Neural Network Compression Framework for enhanced OpenVINO™ inference
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends
Tools for handling rooted phylogenetic or genealogical trees with Julia.
A PyTorch quantization backend for Optimum
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
AMD related optimizations for transformer models
🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers, and Sentence-Transformers, with full support for Optimum's hardware optimizations & quantization schemes.
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.
Large Language Model Text Generation Inference
Easy, fast, and very cheap training and inference on AWS Trainium and Inferentia chips.
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
🤗 Optimum Intel: Accelerate inference with Intel optimization tools
Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU)
🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM, and Sentence Transformers with easy-to-use hardware optimization tools
Blazing fast training of 🤗 Transformers on Graphcore IPUs
Prune a model while fine-tuning or training.