{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# For tips on running notebooks in Google Colab, see\n# https://pytorch.org/tutorials/beginner/colab\n%matplotlib inline"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n[Learn the Basics](intro.html) ||\n[Quickstart](quickstart_tutorial.html) ||\n[Tensors](tensorqs_tutorial.html) ||\n[Datasets & DataLoaders](data_tutorial.html) ||\n**Transforms** ||\n[Build Model](buildmodel_tutorial.html) ||\n[Autograd](autogradqs_tutorial.html) ||\n[Optimization](optimization_tutorial.html) ||\n[Save & Load Model](saveloadrun_tutorial.html)\n\n# Transforms\n\nData does not always come in its final processed form that is required for\ntraining machine learning algorithms. We use **transforms** to perform some\nmanipulation of the data and make it suitable for training.\n\nAll TorchVision datasets have two parameters -``transform`` to modify the features and\n``target_transform`` to modify the labels - that accept callables containing the transformation logic.\nThe [torchvision.transforms](https://pytorch.org/vision/stable/transforms.html) module offers\nseveral commonly-used transforms out of the box.\n\nThe FashionMNIST features are in PIL Image format, and the labels are integers.\nFor training, we need the features as normalized tensors, and the labels as one-hot encoded tensors.\nTo make these transformations, we use ``ToTensor`` and ``Lambda``.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import torch\nfrom torchvision import datasets\nfrom torchvision.transforms import ToTensor, Lambda\n\nds = datasets.FashionMNIST(\n    root=\"data\",\n    train=True,\n    download=True,\n    transform=ToTensor(),\n    target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1))\n)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## ToTensor()\n\n[ToTensor](https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.ToTensor)\nconverts a PIL image or NumPy ``ndarray`` into a ``FloatTensor``. and scales\nthe image's pixel intensity values in the range [0., 1.]\n\n\n"
      ]
    },
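    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (this cell is illustrative, using the ``ds`` dataset defined above), we can fetch one sample and confirm that ``ToTensor`` produced a ``float32`` tensor of shape ``[1, 28, 28]`` with values in ``[0., 1.]``, and that ``target_transform`` produced a one-hot label:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Illustrative check: inspect the first transformed sample from ds\nimg, label = ds[0]\nprint(img.dtype, img.shape)                # torch.float32 torch.Size([1, 28, 28])\nprint(img.min().item(), img.max().item())  # pixel values scaled to [0., 1.]\nprint(label)                               # one-hot encoded label tensor of size 10"
      ]
    },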
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Lambda Transforms\n\nLambda transforms apply any user-defined lambda function. Here, we define a function\nto turn the integer into a one-hot encoded tensor.\nIt first creates a zero tensor of size 10 (the number of labels in our dataset) and calls\n[scatter_](https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html) which assigns a\n``value=1`` on the index as given by the label ``y``.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "target_transform = Lambda(lambda y: torch.zeros(\n    10, dtype=torch.float).scatter_(dim=0, index=torch.tensor(y), value=1))"
      ]
    },
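    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For illustration, applying this transform to an arbitrary integer label (``3`` here is just an example) returns a one-hot tensor with ``1.`` at that index:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Illustrative example: label 3 becomes a one-hot vector of length 10\nprint(target_transform(3))\n# tensor([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])"
      ]
    },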
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "--------------\n\n\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Further Reading\n- [torchvision.transforms API](https://pytorch.org/vision/stable/transforms.html)\n\n"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.13"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}