From deddb0aebf73c1c653bf07063f5b8e5b2f19ab01 Mon Sep 17 00:00:00 2001
From: Jirka B
Date: Tue, 14 Nov 2023 20:47:56 +0100
Subject: [PATCH] docs: pin some linked ipynb to 2.1.0

---
 docs/source-pytorch/advanced/post_training_quantization.rst | 2 +-
 docs/source-pytorch/starter/style_guide.rst                 | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source-pytorch/advanced/post_training_quantization.rst b/docs/source-pytorch/advanced/post_training_quantization.rst
index 611014e74b034..12139d2cea6cd 100644
--- a/docs/source-pytorch/advanced/post_training_quantization.rst
+++ b/docs/source-pytorch/advanced/post_training_quantization.rst
@@ -126,7 +126,7 @@ At last, the quantized model can be saved by:
 
 Hands-on Examples
 *****************
-Based on the `given example code `_, we show how Intel Neural Compressor conduct model quantization on PyTorch Lightning. We first define the basic config of the quantization process.
+Based on the `given example code `_, we show how Intel Neural Compressor conduct model quantization on PyTorch Lightning. We first define the basic config of the quantization process.
 
 .. code-block:: python
 
diff --git a/docs/source-pytorch/starter/style_guide.rst b/docs/source-pytorch/starter/style_guide.rst
index 0dac0416c5f11..3ea1f621c7ac3 100644
--- a/docs/source-pytorch/starter/style_guide.rst
+++ b/docs/source-pytorch/starter/style_guide.rst
@@ -219,5 +219,5 @@ This is true for both academic and corporate settings where data cleaning and ad
 of iterating through ideas.
 
 - Check out the live examples to get your hands dirty:
-- `Introduction to PyTorch Lightning `_
+- `Introduction to PyTorch Lightning `_
 - `Introduction to DataModules `_