This Jupyter notebook helps you choose and run a comparison between two models from the Intel® AI Reference Models repo using Intel® Optimizations for TensorFlow*. When you run the notebook, it installs required package dependencies, displays information about your platform, lets you choose the two models to compare, runs those models, and finally displays a performance comparison chart.
Model | Framework | Mode | Platform | Supported Precisions |
---|---|---|---|---|
ResNet 50 v1.5 | TensorFlow | Inference | Flex Series | FP32, TF32, FP16, BF16, INT8 |
ResNet 50 v1.5 | TensorFlow | Training | Max Series | BF16, FP32 |
ResNet 50 v1.5 | PyTorch | Inference | Flex Series, Max Series, Arc Series | INT8, FP32, FP16, TF32 |
ResNet 50 v1.5 | PyTorch | Training | Max Series, Arc Series | BF16, TF32, FP32 |
DistilBERT | PyTorch | Inference | Flex Series, Max Series | FP32, FP16, BF16, TF32 |
DLRM v1 | PyTorch | Inference | Flex Series | FP16, FP32 |
SSD-MobileNet* | PyTorch | Inference | Arc Series | INT8, FP16, FP32 |
EfficientNet | PyTorch | Inference | Flex Series | FP16, BF16, FP32 |
EfficientNet | TensorFlow | Inference | Flex Series | FP16 |
FBNet | PyTorch | Inference | Flex Series | FP16, BF16, FP32 |
Wide Deep Large Dataset | TensorFlow | Inference | Flex Series | FP16 |
YOLO V5 | PyTorch | Inference | Flex Series | FP16 |
BERT large | PyTorch | Inference | Max Series, Arc Series | BF16, FP32, FP16 |
BERT large | PyTorch | Training | Max Series, Arc Series | BF16, FP32, TF32 |
BERT large | TensorFlow | Training | Max Series | BF16, TF32, FP32 |
DLRM v2 | PyTorch | Inference | Max Series | FP32, BF16 |
DLRM v2 | PyTorch | Training | Max Series | FP32, TF32, BF16 |
3D-UNet | PyTorch | Inference | Max Series | FP16, INT8, FP32 |
3D-UNet | TensorFlow | Training | Max Series | BF16, FP32 |
Stable Diffusion | PyTorch | Inference | Flex Series, Max Series, Arc Series | FP16, FP32 |
Stable Diffusion | TensorFlow | Inference | Flex Series | FP16, FP32 |
Mask R-CNN | TensorFlow | Inference | Flex Series | FP32, FP16 |
Mask R-CNN | TensorFlow | Training | Max Series | FP32, BF16 |
Swin Transformer | PyTorch | Inference | Flex Series | FP16 |
FastPitch | PyTorch | Inference | Flex Series | FP16 |
UNet++ | PyTorch | Inference | Flex Series | FP16 |
RNN-T | PyTorch | Inference | Max Series | FP16, BF16, FP32 |
RNN-T | PyTorch | Training | Max Series | FP32, BF16, TF32 |
IFRNet | PyTorch | Inference | Flex Series | FP16 |
RIFE | PyTorch | Inference | Flex Series | FP16 |
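As an illustration of how you might narrow down the two models to compare, the support matrix above can be held as plain data and filtered by platform and precision. This sketch is not part of the notebook; the records are a small subset of the table, and the field names and `candidates` helper are my own:

```python
# Illustrative sketch: a subset of the support matrix above as plain data.
# Field names and the candidates() helper are hypothetical, not notebook APIs.
SUPPORT_MATRIX = [
    {"model": "ResNet 50 v1.5", "framework": "PyTorch", "mode": "Inference",
     "platforms": {"Flex Series", "Max Series", "Arc Series"},
     "precisions": {"INT8", "FP32", "FP16", "TF32"}},
    {"model": "BERT large", "framework": "PyTorch", "mode": "Inference",
     "platforms": {"Max Series", "Arc Series"},
     "precisions": {"BF16", "FP32", "FP16"}},
    {"model": "Stable Diffusion", "framework": "PyTorch", "mode": "Inference",
     "platforms": {"Flex Series", "Max Series", "Arc Series"},
     "precisions": {"FP16", "FP32"}},
]

def candidates(platform, precision):
    """Return names of models runnable on the given platform at the given precision."""
    return [row["model"] for row in SUPPORT_MATRIX
            if platform in row["platforms"] and precision in row["precisions"]]

print(candidates("Arc Series", "FP16"))
# -> ['ResNet 50 v1.5', 'BERT large', 'Stable Diffusion']
```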
Instead of installing or updating packages system-wide, it's a good idea to install project-specific Python packages in a virtual environment localized to your project. The Python virtualenv package lets you do just that. Using virtualenv is optional, but recommended.
The Jupyter notebook runs on the Ubuntu distribution of Linux.
- Virtualenv Python Environment: Install virtualenv on Ubuntu using these commands:

  ```bash
  sudo apt-get update
  sudo apt-get install python-dev python-pip
  sudo pip install -U virtualenv  # system-wide install
  ```
  Create and activate the virtual environment using the following commands:

  ```bash
  virtualenv -p python ai_ref_models
  source ai_ref_models/bin/activate
  ```
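If you prefer not to install virtualenv, the same kind of isolated environment can be created with Python's standard-library `venv` module. The sketch below is an illustrative alternative (the `/tmp/ai_ref_models_demo` path is just an example) and also shows how to confirm the environment is active:

```shell
# Illustrative alternative using the standard-library venv module.
# The path below is an example, not a required location.
python3 -m venv /tmp/ai_ref_models_demo
. /tmp/ai_ref_models_demo/bin/activate
# While active, $VIRTUAL_ENV points at the environment directory
# and `python` resolves to the interpreter inside it.
echo "$VIRTUAL_ENV"
deactivate
```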
- Jupyter Notebook Support: Install Jupyter notebook support with the command:

  ```bash
  pip install notebook
  ```
Refer to the Installing Jupyter guide for details.
- Clone the Intel® AI Reference Models repo:

  ```bash
  git clone https://github.com/IntelAI/models.git
  ```
- Launch the Jupyter notebook server:

  ```bash
  jupyter notebook --ip=0.0.0.0
  ```
- Follow the instructions to open the URL with the token in your browser, something like this:

  ```
  http://127.0.0.1:8888/tree?token=<token>
  ```
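If you ever need the token value itself (for example, to script opening the browser), it is just a query parameter on that URL, and Python's standard library can extract it. The URL and token below are placeholders, not real values:

```python
from urllib.parse import urlparse, parse_qs

# Placeholder URL in the same form the Jupyter server prints;
# "abc123" stands in for the real token.
url = "http://127.0.0.1:8888/tree?token=abc123"
token = parse_qs(urlparse(url).query)["token"][0]
print(token)  # -> abc123
```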
- Browse to the `models/notebooks/` folder.
- Click the AI Reference Models notebook file: `AI_Reference_Models.ipynb`.
- Read the instructions and run through each notebook cell, in order, ending with a display of the analysis results. Note that some cells prompt you for input, such as selecting the model number you'd like to run.
- When done, deactivate the virtualenv, if you used one, with the command:

  ```bash
  deactivate
  ```