
MLPerf Automations and Scripts


Welcome to the MLPerf Automations and Scripts repository! This repository is your go-to resource for tools, automations, and scripts designed to streamline the execution of MLPerf benchmarks, with a strong emphasis on MLPerf Inference.

Starting in January 2025, the MLPerf automation scripts are built on the MLCFlow automation interface, maintained by the MLCommons Benchmark Infrastructure Working Group. This interface replaces the earlier Collective Mind (CM) framework, offering a more robust and efficient foundation for benchmarking workflows.


🚀 Key Features

  • Automated Benchmarking – Simplifies running MLPerf Inference benchmarks with minimal manual intervention.
  • Modular and Extensible – Easily extend the scripts to support additional benchmarks and configurations.
  • Seamless Integration – Compatible with Docker, cloud environments, and local machines.
  • MLCFlow (MLC) Integration – Utilizes the MLC framework to enhance reproducibility and automation.

🧰 MLCFlow (MLC) Automations

MLCFlow builds on the foundation of its predecessor, the Collective Mind (CM) framework, and simplifies complex tasks such as Docker container management and caching. Written in Python, the mlcflow package provides both a user-friendly command-line interface (CLI) and a flexible Python API for managing automation scripts.

At its core, MLCFlow relies on a single automation, the Script, which is extended by two actions: CacheAction and DockerAction. Together, these components cover script execution, caching of results, and containerized runs.
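
For orientation, here is a minimal sketch of how a script automation might be driven from Python. It assumes the mlcflow package is installed and that the `mlc` CLI follows the same `run script --tags=...` pattern as the earlier CM interface; the tag names below are illustrative placeholders, so consult the MLPerf Inference Documentation for the exact commands for your benchmark and target.

```python
# Minimal sketch (not the canonical entry point): invoking an MLC "Script"
# automation from Python by shelling out to the `mlc` CLI.
# Assumptions: mlcflow is installed, and `mlc run script --tags=...` mirrors
# the earlier CM interface; the tags below are illustrative placeholders.
import subprocess

def run_mlc_script(tags: str, *extra_args: str) -> None:
    """Run an MLC script automation selected by comma-separated tags."""
    cmd = ["mlc", "run", "script", f"--tags={tags}", *extra_args]
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

if __name__ == "__main__":
    # Illustrative example: run a simple system-detection script automation.
    run_mlc_script("detect,os")
```

Equivalent functionality is also available through the package's Python API mentioned above.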


🤝 Contributing

We welcome contributions from the community! To contribute:

  1. Submit pull requests (PRs) to the dev branch.
  2. Review our CONTRIBUTORS.md for guidelines and best practices.
  3. Explore more about MLPerf Inference automation in the official MLPerf Inference Documentation.

Your contributions help drive the project forward!


💬 Join the Discussion

Connect with us on the MLCommons Benchmark Infra Discord channel to engage in discussions about MLCFlow and MLPerf Automations. We’d love to hear your thoughts, questions, and ideas!


📰 Stay Updated

Keep track of the latest development progress and tasks on our MLPerf Automations Development Board.
Stay tuned for exciting updates and announcements!


📄 License

This project is licensed under the Apache 2.0 License.


💡 Acknowledgments and Funding

This project is made possible through the generous support of:

We appreciate their contributions and sponsorship!


Thank you for your interest and support in MLPerf Automations and Scripts!
