{ "cells": [ { "cell_type": "markdown", "id": "51be1de8", "metadata": {}, "source": [ "<img width=\"800px\" src=\"../fidle/img/header.svg\"></img>\n", "\n", "# <!-- TITLE --> [PYTORCH1] - Practical Lab : PyTorch\n", "<!-- DESC --> PyTorch est l'un des principaux framework utilisé dans le Deep Learning\n", "<!-- AUTHOR : Kamel Guerda (CNRS/IDRIS) -->\n", "\n", "## Objectives :\n", " - Understand PyTorch" ] }, { "cell_type": "markdown", "id": "1959d3d5-388e-4c43-8318-342f08e6b024", "metadata": { "tags": [] }, "source": [ "## **Introduction**" ] }, { "cell_type": "markdown", "id": "a6da1305-551a-4549-abed-641415823a33", "metadata": {}, "source": [ "**PyTorch** is an open-source machine learning library developed by Facebook's AI Research lab. It offers an imperative and dynamic computational model, making it particularly easy and intuitive for researchers. Its primary feature is the tensor, a multi-dimensional array similar to NumPy's ndarray, but with GPU acceleration." ] }, { "cell_type": "markdown", "id": "54c79dfb-a061-4b72-afe3-c97c28071e5c", "metadata": { "tags": [] }, "source": [ "### **Installation and usage**" ] }, { "cell_type": "markdown", "id": "20852981-c289-4c4e-8099-2c5efef58e3b", "metadata": {}, "source": [ "Whether you're working on the supercomputer Jean Zay or your own machine, getting your environment ready is the first step. Here's how to proceed:" ] }, { "cell_type": "markdown", "id": "a88f32bd-37f6-4e99-97e0-62283a146a1f", "metadata": { "tags": [] }, "source": [ "#### **On Jean Zay**" ] }, { "cell_type": "markdown", "id": "8421a9f0-130d-40ef-8a7a-066bf9147066", "metadata": {}, "source": [ "For those accessing the Jean Zay supercomputer (you should already be at step 3):\n", "\n", "1. **Access JupyterHub**: Go to [https://jupyterhub.idris.fr](https://jupyterhub.idris.fr). The login credentials are the same as those used to access the Jean Zay machine. Ensure your IP address is whitelisted (add a new IP via the account management form if needed).\n", "2. **Create a JupyterLab Instance**: Choose to create the instance either on a frontend node (e.g., for internet access) or on a compute node by reserving resources via Slurm. Select the appropriate options such as workspace, allocated resources, billing, etc.\n", "3. **Choose the Kernel**: IDRIS provides kernels based on modules installed on Jean Zay. This includes various versions of Python, Tensorflow, and PyTorch. Create a new notebook with the desired kernel through the launcher or change the kernel on an existing notebook by clicking the kernel name at the top right of the screen.\n", "4. 
{ "cell_type": "markdown", "id": "a168594c-cf18-4ed8-babf-242b56b3e0b7", "metadata": { "tags": [] }, "source": [ "> **Task:** Verify your kernel (top right corner)\n", "> - In JupyterLab, at the top right of your notebook, you should see the name of your current kernel.\n", "> - Ensure it matches \"PyTorch 2.0\" or a similar name indicating the PyTorch version.\n", "> - If it doesn't, click on the kernel name and select the appropriate kernel from the list.\n" ] }, { "cell_type": "markdown", "id": "0aaadeee-5115-48d0-aa57-20a0a63d5054", "metadata": { "tags": [] }, "source": [ "#### **Elsewhere**" ] }, { "cell_type": "markdown", "id": "5d34951e-1b7b-4776-9449-eff57a9385f4", "metadata": {}, "source": [ "\n", "For users on other platforms:\n", "\n", "1. Install PyTorch by following the official [installation guide](https://pytorch.org/get-started/locally/).\n", "2. If you have a GPU, ensure you've installed the necessary CUDA toolkit and cuDNN libraries.\n", "3. Launch your preferred Python environment, whether it's Jupyter Notebook, an IDE like PyCharm, or just the terminal.\n", "\n", "Once your setup is complete, you're ready to dive in. Let's explore the fascinating world of deep learning!" ] }, { "cell_type": "markdown", "id": "7552d5ac-eb8c-48e0-9e61-3b056d560f7b", "metadata": { "tags": [] }, "source": [ "### **Version**" ] }, { "cell_type": "code", "execution_count": 1, "id": "272e492f-35c5-4293-b504-8e8632da1b73", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Importing PyTorch\n", "import torch\n", "\n", "# TODO: Print the version of PyTorch being used\n" ] }, { "cell_type": "markdown", "id": "9fdbe225-4e06-4ad0-abca-4325457dc0e1", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "To print the version of PyTorch you're using, you can access the <code>__version__</code> attribute of the <code>torch</code> module.\n", "\n", "```python\n", "print(torch.__version__)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "72752068-02fe-4e44-8c27-40e8f66680c9", "metadata": { "tags": [] }, "source": [ "**Why PyTorch 2.0 is a Game-Changer**\n", "\n", "PyTorch 2.0 represents a major step in the evolution of this popular deep learning library. As part of the transition to the 2-series, let's highlight some reasons why this version is pivotal:\n", "\n", "1. **Performance**: With PyTorch 2.0, performance has been supercharged at the compiler level, offering faster execution and support for Dynamic Shapes and Distributed systems.\n", " \n", "2. **torch.compile**: This introduces a more Pythonic approach, moving some parts of PyTorch from C++ back to Python. Notably, across a test set of 163 open-source models, the use of `torch.compile` resulted in a 43% speed increase during training on an NVIDIA A100 GPU.\n", "\n", "3. **Innovative Technologies**: Technologies like TorchDynamo and TorchInductor, both written in Python, make PyTorch more flexible and developer-friendly.\n", " \n", "4. **Staying Pythonic**: PyTorch 2.0 emphasizes Python-centric development, reducing barriers for developers and vendors.\n",
"\n", "As we progress in this lab, we'll dive deeper into some of these features, giving you hands-on experience with the power and flexibility of PyTorch 2.0.\n" ] }, { "cell_type": "markdown", "id": "bc215c02-1f16-48be-88f9-5080fd2be9ed", "metadata": { "tags": [] }, "source": [ "## **PyTorch Fundamentals**" ] }, { "cell_type": "markdown", "id": "bcd7f0fc-a714-495e-9307-e48964abd85b", "metadata": { "tags": [] }, "source": [ "### **Tensors**" ] }, { "cell_type": "markdown", "id": "6e185bf6-3d3c-4a43-b425-e6aa3da5d5dd", "metadata": { "tags": [] }, "source": [ "A **tensor** is a generalization of vectors and matrices and is easily understood as a multi-dimensional array. In the context of PyTorch:\n", "- A 0-dimensional tensor is a scalar (a single number).\n", "- A 1-dimensional tensor is a vector.\n", "- A 2-dimensional tensor is a matrix.\n", "- ... and so on for higher dimensions.\n", "\n", "Tensors are fundamental to PyTorch not just as data containers but also for their compatibility with GPU acceleration, making operations on them extremely fast. This acceleration is vital for training large neural networks.\n", "\n", "Let's start our journey with tensors by examining how PyTorch handles scalars." ] }, { "cell_type": "markdown", "id": "fa90e399-3955-4417-a4a3-c0c812ebb1d9", "metadata": { "tags": [] }, "source": [ "#### **Scalars in PyTorch**\n", "\n", "A scalar, being a 0-dimensional tensor, is simply a single number. While it might seem trivial, understanding scalars in PyTorch lays the foundation for grasping more complex tensor structures. Familiarize yourself with the `torch.tensor()` function from the [official documentation](https://pytorch.org/docs/stable/generated/torch.tensor.html) before proceeding.\n", "\n", "> **Task**: Create a scalar tensor in PyTorch and examine its properties.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "b6db1841-0fab-4df0-b699-058d5a477ca6", "metadata": { "tags": [] }, "outputs": [ { "ename": "SyntaxError", "evalue": "invalid syntax (2309926818.py, line 2)", "output_type": "error", "traceback": [ "\u001b[0;36m Cell \u001b[0;32mIn[2], line 2\u001b[0;36m\u001b[0m\n\u001b[0;31m scalar_tensor = # Your code here\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n" ] } ], "source": [ "# TODO: Create a scalar tensor with the value 7.5\n", "scalar_tensor = # Your code here\n", "\n", "# Print the scalar tensor\n", "print(\"Scalar Tensor:\", scalar_tensor)\n", "\n", "# TODO: Print its dimension, shape, and type\n" ] }, { "cell_type": "markdown", "id": "c9bc265c-9a7f-4588-8586-562b390d63d9", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "To create a scalar tensor, use the <code>torch.tensor()</code> function. To retrieve its dimension, shape, and type, you can use the <code>.dim()</code>, <code>.shape</code>, and <code>.dtype</code> attributes respectively.
\n", "\n", "Here's how you can achieve that:\n", "\n", "```python\n", "scalar_tensor = torch.tensor(7.5)\n", "print(\"Scalar Tensor:\", scalar_tensor)\n", "print(\"Dimension:\", scalar_tensor.dim())\n", "print(\"Shape:\", scalar_tensor.shape)\n", "print(\"Type:\", scalar_tensor.dtype)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "fc240c26-5866-4080-bbb9-d5cde1500300", "metadata": { "tags": [] }, "source": [ "#### **Vectors in PyTorch**\n", "\n", "A vector in PyTorch is a 1-dimensional tensor. It's essentially a list of numbers that can represent anything from a sequence of data points to the weights of a neural network layer.\n", "\n", "In this section, we'll see how to create and manipulate vectors using PyTorch. We'll also look at some basic operations you can perform on them.\n", "\n", "> **Task**: Create a 1-dimensional tensor (vector) with values `[1.5, 2.3, 3.1, 4.8, 5.2]` and print its dimension, shape, and type.\n", "\n", "Start by referring to the `torch.tensor()` function in the [official documentation](https://pytorch.org/docs/stable/generated/torch.tensor.html) to understand how to create tensors of varying dimensions.\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "e9503b49-38d1-45d9-910f-761da82cfbd0", "metadata": { "tags": [] }, "outputs": [ { "ename": "SyntaxError", "evalue": "invalid syntax (138343520.py, line 2)", "output_type": "error", "traceback": [ "\u001b[0;36m Cell \u001b[0;32mIn[3], line 2\u001b[0;36m\u001b[0m\n\u001b[0;31m vector_tensor = # Your code here\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n" ] } ], "source": [ "# TODO: Create a 1-dimensional tensor (vector) with values [1.5, 2.3, 3.1, 4.8, 5.2]\n", "vector_tensor = # Your code here\n", "\n", "# Print the vector tensor\n", "print(\"Vector Tensor:\", vector_tensor)\n", "\n", "# TODO: Print its dimension, shape, and type\n" ] }, { "cell_type": "markdown", "id": "13252d1f-004f-42e0-aec9-56322b43ab72", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "Creating a 1-dimensional tensor is similar to creating a scalar. Instead of a single number, you pass a list of numbers to the <code>torch.tensor()</code> function. The <code>.dim()</code>, <code>.shape</code>, and <code>.dtype</code> attributes will help you retrieve its properties.\n", "\n", "```python\n", "vector_tensor = torch.tensor([1.5, 2.3, 3.1, 4.8, 5.2])\n", "print(\"Vector Tensor:\", vector_tensor)\n", "print(\"Dimension:\", vector_tensor.dim())\n", "print(\"Shape:\", vector_tensor.shape)\n", "print(\"Type:\", vector_tensor.dtype)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "7bfc47a8-e99d-4683-ac36-287f35a76fd0", "metadata": {}, "source": [ "#### **Vector Operations**\n", "\n", "Vectors are not just static entities; we often perform various operations on them, especially in the context of neural networks. This includes addition, subtraction, scalar multiplication, dot products, etc.\n", "\n", "> **Task**: Using the previously defined `vector_tensor`, perform the following operations:\n", "1. Add 5 to all the elements of the vector.\n", "2. Multiply all the elements of the vector by 2.\n", "3. Compute the dot product of the vector with itself." 
] }, { "cell_type": "code", "execution_count": 4, "id": "86182e1c-5491-4743-a7c8-10b9effd8194", "metadata": { "tags": [] }, "outputs": [ { "ename": "SyntaxError", "evalue": "invalid syntax (184231995.py, line 2)", "output_type": "error", "traceback": [ "\u001b[0;36m Cell \u001b[0;32mIn[4], line 2\u001b[0;36m\u001b[0m\n\u001b[0;31m vector_added = # Your code here\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n" ] } ], "source": [ "# TODO: Add 5 to all elements\n", "vector_added = # Your code here\n", "\n", "# TODO: Multiply all elements by 2\n", "vector_multiplied = # Your code here\n", "\n", "# TODO: Compute the dot product with itself\n", "dot_product = # Your code here\n", "\n", "# Print the results\n", "print(\"Vector after addition:\", vector_added)\n", "print(\"Vector after multiplication:\", vector_multiplied)\n", "print(\"Dot Product:\", dot_product)" ] }, { "cell_type": "markdown", "id": "75773a02-3ab4-4325-99fb-7a742e997f21", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "PyTorch tensors support regular arithmetic operations. For the dot product, you can use the <code>torch.dot()</code> function.\n", "\n", "```python\n", "\n", "vector_added = vector_tensor + 5\n", "vector_multiplied = vector_tensor * 2\n", "dot_product = torch.dot(vector_tensor, vector_tensor)\n", "\n", "print(\"Vector after addition:\", vector_added)\n", "print(\"Vector after multiplication:\", vector_multiplied)\n", "print(\"Dot Product:\", dot_product)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "2b4766ba-ef9a-4f24-ba43-7358097a7b61", "metadata": { "tags": [] }, "source": [ "#### **Matrices in PyTorch**\n", "\n", "A matrix in PyTorch is represented as a 2D tensor. Just as vectors are generalizations of scalars, matrices are generalizations of vectors, providing an additional dimension. Matrices are crucial for a range of operations in deep learning, including representing datasets, transformations, and more.\n" ] }, { "cell_type": "markdown", "id": "2ec7544d-ef87-4773-88d8-cee731d1c43c", "metadata": { "tags": [] }, "source": [ "##### **Creating Matrices**\n", "\n", "Before diving into manual matrix creation, it's beneficial to know some utility functions PyTorch provides:\n", "\n", "- `torch.rand()`: Generates a matrix with random values between 0 and 1.\n", "- `torch.eye()`: Creates an identity matrix.\n", "- `torch.zeros()`: Generates a matrix filled with zeros.\n", "- `torch.ones()`: Generates a matrix filled with ones.\n", "\n", "You can explore more about these functions in the [official documentation](https://pytorch.org/docs/stable/tensors.html).\n", "\n", "> **Task**: Using the above functions, create the following matrices:\n", "> 1. A 3x3 matrix with random values.\n", "> 2. A 5x5 identity matrix.\n", "> 3. A 2x4 matrix filled with zeros.\n", "> 4. A 4x2 matrix filled with ones.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "5014b564-6bf5-4f00-a513-578ca72d94a8", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Your code for creating the matrices goes here\n", "\n" ] }, { "cell_type": "markdown", "id": "86b2708c-45c6-4b2c-b526-41491fcafa08", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To create these matrices, make use of the following functions:\n", "\n", "1. `torch.rand(size)`: Use this function and specify the size as `(3, 3)` to create a 3x3 matrix with random values.\n", "2. 
`torch.eye(n, m)`: Use this to generate an identity matrix. For a square matrix like 5x5, n and m would both be 5.\n", "3. `torch.zeros(m, n)`: For a 2x4 matrix filled with zeros, specify m=2 and n=4.\n", "4. `torch.ones(m, n)`: Similar to the `zeros` function but fills the matrix with ones.\n", "\n", "```python\n", "# 1. 3x3 matrix with random values\n", "random_matrix = torch.rand(3, 3)\n", "print(random_matrix)\n", "\n", "# 2. 5x5 identity matrix\n", "identity_matrix = torch.eye(5, 5)\n", "print(identity_matrix)\n", "\n", "# 3. 2x4 matrix filled with zeros\n", "zero_matrix = torch.zeros(2, 4)\n", "print(zero_matrix)\n", "\n", "# 4. 4x2 matrix filled with ones\n", "one_matrix = torch.ones(4, 2)\n", "print(one_matrix)\n", "```\n", "</details>\n" ] }, { "cell_type": "markdown", "id": "60ff5e51-699e-46a1-8cc7-1d5fc9a4d078", "metadata": {}, "source": [ "#### **Matrix Operations in PyTorch**\n", "\n", "Just like vectors, matrices can undergo a variety of operations. Some of the basic ones include matrix addition, subtraction, and multiplication. More advanced operations include matrix inversion, transposition, and determinant calculation.\n" ] }, { "cell_type": "markdown", "id": "c6bdb9d9-b299-4d63-b92f-7c4b8c32a1b7", "metadata": { "tags": [] }, "source": [ "##### **Basic Matrix Operations**\n", "\n", "> **Task**: Perform the following operations on matrices:\n", "> 1. Create two 3x3 matrices with random values.\n", "> 2. Add the two matrices.\n", "> 3. Subtract the second matrix from the first one.\n", "> 4. Multiply the two matrices element-wise.\n", "\n", "Remember, for true matrix multiplication you'd use `torch.mm` or the `@` operator, whereas for element-wise multiplication you use `*`.\n", "\n", "Here's the [official documentation](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.matmul) on matrix operations for your reference.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "6be8c647-c455-4d3b-8a21-c4b7102ffa75", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Your code for creating the matrices and performing the operations goes here" ] }, { "cell_type": "markdown", "id": "0020b26b-b2bb-4efa-9bf3-3f037acd050e", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "Here's how you can perform the given matrix operations:\n", "\n", "```python\n", "# 1. Create two 3x3 matrices with random values\n", "matrix1 = torch.rand(3, 3)\n", "matrix2 = torch.rand(3, 3)\n", "print(\"Matrix 1:\\n\", matrix1)\n", "print(\"\\nMatrix 2:\\n\", matrix2)\n", "\n", "# 2. Add the two matrices\n", "sum_matrix = matrix1 + matrix2\n", "print(\"\\nSum of matrices:\\n\", sum_matrix)\n", "\n", "# 3. Subtract the second matrix from the first one\n", "difference_matrix = matrix1 - matrix2\n", "print(\"\\nDifference of matrices:\\n\", difference_matrix)\n", "\n", "# 4. Multiply the two matrices element-wise\n",
"product_matrix = matrix1 * matrix2\n", "print(\"\\nElement-wise product of matrices:\\n\", product_matrix)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "07f57464-76e2-4670-8332-3fcec2e162bd", "metadata": {}, "source": [ "#### **Higher-Dimensional Tensors in PyTorch**\n", "\n", "While scalars, vectors, and matrices cover 0D, 1D, and 2D tensors respectively, in deep learning, especially in tasks like image processing, you often encounter tensors with more than two dimensions.\n", "\n", "For instance, a colored image is often represented as a 3D tensor: height x width x channels (e.g., RGB channels). A batch of such images would then be a 4D tensor: batch_size x height x width x channels.\n", "\n", "Let's get our hands dirty with some higher-dimensional tensors!\n" ] }, { "cell_type": "markdown", "id": "3dd1fea7-d290-49fe-ac1f-5a8387e3d386", "metadata": { "tags": [] }, "source": [ "##### **Creating a Higher-Dimensional Tensor**\n", "\n", "> **Task**: Create a 4D tensor representing a batch of 2 images of size 4x4 with 3 channels (like RGB), filled with random values.\n", "\n", "Use the `torch.rand` function, and remember to specify the dimensions correctly.\n", "\n", "Here's the [official documentation](https://pytorch.org/docs/stable/tensors.html#creation-ops) for tensor creation.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "e7c8ac6e-f870-4b5d-ac2c-05be1d0cc9f1", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Your code for creating the 4D tensor goes here" ] }, { "cell_type": "markdown", "id": "efe61750-a91f-428a-b4e2-7df0cc2a782b", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "Creating a 4D tensor with the given specifications can be achieved using the `torch.rand` function. Here's how:\n", "\n", "```python\n", "# Create a 4D tensor representing a batch of 2 images of size 4x4 with 3 channels\n", "image_tensor = torch.rand(2, 4, 4, 3)\n", "print(image_tensor)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "8cfbcaa0-a0f6-4869-ba94-65d4439a60ca", "metadata": {}, "source": [ "#### **Reshaping Tensors**\n", "\n", "In deep learning, we often need to reshape our tensors. For instance, an image represented as a 3D tensor might need to be reshaped into a 1D tensor before passing it through a fully connected layer. PyTorch provides methods to make this easy.\n", "\n", "The most commonly used method for reshaping tensors in PyTorch is the `view()` method. Another method that offers more flexibility (especially when you're unsure about the size of one dimension) is `reshape()`.\n", "\n",
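"One practical difference worth knowing: `view()` never copies data and therefore requires a compatible (typically contiguous) memory layout, while `reshape()` falls back to copying when needed. A minimal sketch:\n", "\n", "```python\n", "t = torch.arange(6).view(2, 3)   # shape (2, 3), contiguous in memory\n", "tt = t.t()                       # transpose: shape (3, 2), not contiguous\n", "print(tt.is_contiguous())        # False\n", "print(tt.reshape(6))             # works: reshape() copies when it has to\n", "# tt.view(6) would raise a RuntimeError here, because tt is not contiguous\n", "```\n", "\n",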
"> **Task**: Using the official documentation, find out how to use the [`view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) and [`reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape) methods. Create a 2x3 tensor using `torch.tensor()` and then reshape it into a 3x2 tensor.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "e6758ba7-aa35-42f0-87c1-86b88de64238", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create a 2x3 tensor\n", "\n", "# Reshape it into a 3x2 tensor\n" ] }, { "cell_type": "markdown", "id": "fea31255-c2fe-47b2-b03b-c2b35953e05a", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "To reshape a tensor using the <code>view()</code> method:\n", "\n", "```python\n", "tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])\n", "reshaped_tensor = tensor.view(3, 2)\n", "```\n", "<br>\n", "Alternatively, using the <code>reshape()</code> method:\n", "\n", "```python\n", "reshaped_tensor = tensor.reshape(3, 2)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "c580dbca-b75a-4b97-a24a-6a19c7cdf8d1", "metadata": {}, "source": [ "#### **Broadcasting**\n", "\n", "Broadcasting is a powerful feature in PyTorch that allows you to perform operations between tensors of different shapes. When possible, PyTorch will automatically reshape the tensors in a way that makes the operation valid. This can significantly reduce manual reshaping and is efficient in memory usage.\n", "\n", "However, it's essential to understand the rules and nuances of broadcasting to use it effectively and avoid unexpected behaviors.\n", "\n", "> **Task**: Given a tensor `A` of shape (4, 1) and another tensor `B` of shape (1, 4), use PyTorch operations to produce a result tensor of shape (4, 4). Check the [official documentation on broadcasting](https://pytorch.org/docs/stable/notes/broadcasting.html) for guidance.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "44566fb7-87ed-41ef-a86e-db32a1cf2179", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Define tensor A of shape (4, 1) and tensor B of shape (1, 4)\n", "\n", "# Perform an operation to get a result tensor of shape (4, 4)\n" ] }, { "cell_type": "markdown", "id": "2602f2c4-f507-4a9a-8e8d-dee5e95efc61", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "You can simply use addition, subtraction, multiplication, or any other element-wise operation. When you do this operation, PyTorch will automatically broadcast the tensors to a compatible shape. For example:\n", "\n", "```python\n", "A = torch.tensor([[1], [2], [3], [4]])\n", "B = torch.tensor([[1, 2, 3, 4]])\n", "result = A * B\n", "print(result)\n", "```\n", "</details>\n" ] }, { "cell_type": "markdown", "id": "ba2cc439-8ecc-4d92-b78f-39ef762678f8", "metadata": { "tags": [] }, "source": [ "### **GPU Support with CUDA**" ] }, { "cell_type": "markdown", "id": "575536c5-87a7-4781-8557-558627f14c0a", "metadata": { "tags": [] }, "source": [ "PyTorch seamlessly supports operations on Graphics Processing Units (GPUs) through CUDA, an API developed by NVIDIA for their GPUs. If you have a compatible NVIDIA GPU on your machine, PyTorch can utilize it to speed up tensor operations which can be orders of magnitude faster than on a CPU.\n", "\n", "To verify if your PyTorch installation can use CUDA, you can call `torch.cuda.is_available()`. This returns `True` if CUDA is available and PyTorch can use GPUs, otherwise it returns `False`.\n", "\n", "> **Task**: Print whether CUDA support is available on your system. The [CUDA documentation](https://pytorch.org/docs/stable/cuda.html) might be useful for this task."
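, "\n", "\n", "Beyond the availability flag, `torch.cuda` exposes a few more introspection helpers. A minimal sketch (safe to run on a CPU-only machine, since the GPU-specific queries are guarded):\n", "\n", "```python\n", "if torch.cuda.is_available():\n", "    print(\"Device count:\", torch.cuda.device_count())\n", "    print(\"Current device index:\", torch.cuda.current_device())\n", "    print(\"Device name:\", torch.cuda.get_device_name(0))\n", "else:\n", "    print(\"No CUDA device found, running on CPU\")\n", "```"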
] }, { "cell_type": "code", "execution_count": null, "id": "38e84bb7-5026-4262-8b78-b368c55a1450", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Check and print if CUDA is available\n", "cuda_available = None # Replace None with the appropriate code\n", "print(\"CUDA available:\", cuda_availablez" ] }, { "cell_type": "markdown", "id": "646b5660-5131-4ce0-9592-0fd14608c6df", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To check if CUDA is available, you can utilize the torch.cuda.is_available() function.\n", "```python\n", "cuda_available = torch.cuda.is_available()\n", "print(\"CUDA available:\", cuda_available)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "86c8d7ed-0931-4874-bb27-e796ae1a1d7a", "metadata": {}, "source": [ "When developing deep learning models in PyTorch, it's a good habit to write device-agnostic code. This means your code can automatically use a GPU if available, or fall back to using the CPU if not. The `torch.device` object allows you to specify the device (either CPU or GPU) where you'd like your tensors to be allocated.\n", "\n", "To dynamically determine the device, a common pattern is to check `torch.cuda.is_available()`, and set the device accordingly. This is particularly useful when you want your code to be flexible, regardless of the underlying hardware.\n", "\n", ">[Task]: Define a `device` variable that is set to 'cuda:0' if CUDA is available and 'cpu' otherwise. Create a tensor on this device. The [documentation about torch.device](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device) might be handy." ] }, { "cell_type": "code", "execution_count": null, "id": "91e05e75-03ad-44cb-9842-89e2017ee709", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Define the device\n", "device = None # Replace None with the appropriate code\n", "\n", "# Create a tensor on the specified device\n", "tensor_on_device = torch.tensor([1, 2, 3, 4, 5], device=device)" ] }, { "cell_type": "markdown", "id": "3b80406b-b1cc-4831-a6ba-8e6385703755", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To define the device variable dynamically:\n", "\n", "```python\n", "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n", "```\n", "<br>\n", "After setting the device, you can create tensors on it directly using the device argument.\n", "\n", "</details>\n" ] }, { "cell_type": "markdown", "id": "574a2192-cc09-4d2c-8f01-97b051b7ffc8", "metadata": { "tags": [] }, "source": [ "### **Automatic Differentiation with Autograd**" ] }, { "cell_type": "markdown", "id": "7f5406f6-e295-4f70-a815-9eef18352390", "metadata": { "tags": [] }, "source": [ "PyTorch's `autograd` module provides the tools for automatically computing the gradients for tensors. This feature is a cornerstone for neural network training, as gradients are essential for optimization algorithms like gradient descent.\n", "\n", "When we create a tensor, `requires_grad` is set to `False` by default, meaning it won't track operations. However, if we set `requires_grad=True`, PyTorch will start to track all operations on the tensor.\n", "\n", "Let's start with a simple example:\n", "\n", ">**Task:** Create a tensor that holds a single value, let's say 2, and set `requires_grad=True`. Then, define a simple operation like squaring the tensor. Finally, inspect the resulting tensor. 
{ "cell_type": "code", "execution_count": null, "id": "fe63ab93-55be-434d-822f-8fd9cd727941", "metadata": { "tags": [] }, "outputs": [], "source": [ "# TODO: Create a tensor, perform a simple operation, and print its data and grad_fn separately.\n" ] }, { "cell_type": "markdown", "id": "fa7ee20c-c2d6-4dcf-bb37-9eda580b5dc5", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To create a tensor with requires_grad=True and square it:\n", "\n", "```python\n", "# Create a tensor, perform a simple operation, and print its data and grad_fn separately.\n", "x = torch.tensor([2.0], requires_grad=True)\n", "y = x ** 2\n", "print(\"Data:\", y.data)\n", "print(\"grad_fn:\", y.grad_fn)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "c14dde16-a6be-4151-94cb-96ae98f0648a", "metadata": {}, "source": [ "Once the operation is executed on a tensor, a new attribute `grad_fn` is created. This attribute references the function that created the tensor. In our example, since we squared the tensor, `grad_fn` will be of type `PowBackward0`.\n", "\n", "This `grad_fn` attribute provides a link to the computational history of the tensor, allowing PyTorch to backpropagate errors and compute gradients when training neural networks." ] }, { "cell_type": "markdown", "id": "0965e79e-558a-45a9-8ab2-614c503e59c0", "metadata": { "tags": [] }, "source": [ "#### **Computing Gradients**" ] }, { "cell_type": "markdown", "id": "36fb6c5b-9b39-4a2f-a767-61032b1b4ffc", "metadata": {}, "source": [ "Now, let's compute the gradients of `y` with respect to `x`. To do this, we'll call the `backward()` method on the tensor `y`.\n", "\n", "> **Task**: Compute the gradients of `y` by calling the `backward()` method on it. Afterwards, print the gradients of `x`. The [documentation for backward()](https://pytorch.org/docs/stable/autograd.html#torch.autograd.backward) may be useful.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "83685760-bde9-4327-88f7-cfe02bdb3309", "metadata": { "tags": [] }, "outputs": [], "source": [ "# TODO: Compute the gradient and print it." ] }, { "cell_type": "markdown", "id": "9b1d104b-efef-4fff-869d-8dde1131868e", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To compute the gradient:\n", "\n", "```python\n", "y.backward()\n", "print(x.grad)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "d7f5aecb-8623-481f-a5cf-f8b6dd0c9a37", "metadata": { "tags": [] }, "source": [ "#### **Gradient Accumulation**" ] }, { "cell_type": "markdown", "id": "1a4df0a1-12a0-4129-a258-915fa8440193", "metadata": {}, "source": [ "In PyTorch, the gradients of tensors are accumulated into the `.grad` attribute each time you call `.backward()`. This means that if you call `.backward()` multiple times, the gradients will add up.\n", "\n", "However, by default, calling `.backward()` consumes the computational graph to save memory. If you intend to call `.backward()` multiple times on the same graph, you need to specify `retain_graph=True` during all but the last call.\n", "\n", "> **Task**: Create a tensor, perform an operation on it, and then call `backward()` twice. Use `retain_graph=True` in the first call to retain the computational graph. Observe the `.grad` attribute after each call.\n" ] },
{ "cell_type": "code", "execution_count": null, "id": "50a04095-9d7e-48ba-90ed-06718cd379f0", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create a tensor\n", "w = torch.tensor([1.0], requires_grad=True)\n", "\n", "# Operation\n", "result = w * 2\n", "\n", "# TODO: Call backward twice (using retain_graph=True for the first call) and print the grad after each call\n", "# ...\n" ] }, { "cell_type": "markdown", "id": "d699e58d-d479-466a-b592-cbf68d185c3b", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "```python\n", "result.backward(retain_graph=True)\n", "print(w.grad) # This should print 2\n", "\n", "result.backward()\n", "print(w.grad) # This should print 4, as gradients get accumulated\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "88d30f87-2469-4289-ad8a-51a25a2e8b82", "metadata": {}, "source": [ "#### **Zeroing Gradients**\n" ] }, { "cell_type": "markdown", "id": "2ea93580-9a35-4f5d-8f29-0a324d28d28a", "metadata": { "tags": [] }, "source": [ "\n", "In neural network training, we typically want to update our weights with the gradients after each forward and backward pass. This means that we don't want the gradients to accumulate across multiple passes. Hence, it's common to zero out the gradients at the start of a new iteration.\n", "\n", "> **Task**: Using the tensor from the previous cell, zero out its gradients and verify that it has been set to zero.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "9cb03a91-d1df-4bbf-a0d2-b5580c643e12", "metadata": { "tags": [] }, "outputs": [], "source": [ "# TODO: Zero out the gradients of w and print" ] }, { "cell_type": "markdown", "id": "4a89ff66-b1ef-413a-a41c-847e8c832e4b", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "```python\n", "w.grad.zero_()\n", "print(w.grad)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "85f75515-3d89-4249-b00a-03c13cca92d4", "metadata": { "tags": [] }, "source": [ "#### **Non-Scalar Backward**" ] }, { "cell_type": "markdown", "id": "86a54a2c-e8c1-4278-a3fe-ed60564ebd07", "metadata": { "tags": [] }, "source": [ "When dealing with non-scalar tensors, `backward` requires an additional argument: the gradient of some downstream scalar quantity (usually the loss) with respect to the tensor.\n", "\n", "> **Task**: Create a tensor of shape (2, 2) with `requires_grad=True`. Compute a non-scalar result by multiplying the tensor with itself. Then, compute backward with a gradient argument. You can consult the [backward documentation](https://pytorch.org/docs/stable/autograd.html#torch.autograd.backward) for reference."
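, "\n", "\n", "For intuition: if `y` is non-scalar, `y.backward(g)` computes the vector-Jacobian product, which gives the same gradients as calling `(y * g).sum().backward()`. A minimal 1-D sketch (values chosen arbitrarily):\n", "\n", "```python\n", "x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)\n", "y = x ** 2                 # non-scalar output\n", "g = torch.ones_like(y)     # weight given to each output element\n", "y.backward(g)              # same result as (y * g).sum().backward()\n", "print(x.grad)              # tensor([2., 4., 6.]), i.e. 2 * x\n", "```"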
] }, { "cell_type": "code", "execution_count": null, "id": "cc0e4271-c356-4a4e-9a3a-5df1403a4211", "metadata": { "tags": [] }, "outputs": [], "source": [ "# TODO: Create a tensor, perform an operation, and compute backward with a gradient argument" ] }, { "cell_type": "markdown", "id": "e7ee72f3-f51c-4849-b41d-136028029185", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "```python\n", "\n", "v = torch.tensor([[2.0, 3.0], [4.0, 5.0]], requires_grad=True)\n", "result = v * v\n", "\n", "grads = torch.tensor([[1.0, 1.0], [1.0, 1.0]])\n", "result.backward(grads)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "2e403021-4854-4e97-9898-82ed355293e7", "metadata": { "tags": [] }, "source": [ "#### **Stopping Gradient Tracking**\n" ] }, { "cell_type": "markdown", "id": "ba644253-8523-480d-8318-a87047671a21", "metadata": { "tags": [] }, "source": [ "\n", "There are scenarios where we don't want to track the gradients for certain operations. This can be achieved in two main ways:\n", "\n", "1. **Using `torch.no_grad()`**: This context manager ensures that the enclosed operations are excluded from gradient tracking.\n", "2. **Using `.detach()`**: Creates a tensor that shares the same storage but does not require gradients.\n", "\n", ">[Task]: Create a tensor with `requires_grad=True`. Then, demonstrate both methods above to prevent gradient computation.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "1feb2f9b-0c5f-4e9d-b042-e74052bc83a9", "metadata": { "tags": [] }, "outputs": [], "source": [ "# TODO: Demonstrate operations without gradient tracking\n", "\n" ] }, { "cell_type": "markdown", "id": "a5eff82b-bfbd-4be7-afa3-dc00f5341568", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "```python\n", "\n", "# Using torch.no_grad()\n", "with torch.no_grad():\n", " result_no_grad = v * v\n", "print(result_no_grad.requires_grad)\n", "\n", "# Using .detach()\n", "detached_tensor = v.detach()\n", "result_detach = detached_tensor * detached_tensor\n", "print(result_detach.requires_grad)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "efe66a5d-ac63-4623-8182-3b5aff58abbe", "metadata": { "tags": [] }, "source": [ "## **Building a Simple Neural Network with PyTorch**" ] }, { "cell_type": "markdown", "id": "aa4b7630-fc1e-4f7b-b86b-3c0d233cdc49", "metadata": { "jp-MarkdownHeadingCollapsed": true, "tags": [] }, "source": [ "Neural networks are the cornerstone of deep learning. They are organized as a series of interconnected nodes or \"neurons\" that are structured into layers: an input layer, several hidden layers, and an output layer. Data flows through this network, undergoing transformations at each node, until it emerges at the output.\n", "\n", "With PyTorch's `torch.nn` module, constructing these neural networks becomes straightforward. Let's dive into its main components:" ] }, { "cell_type": "markdown", "id": "8e98f379-5580-477c-8b7b-c641f5edf710", "metadata": { "tags": [] }, "source": [ "### **nn.Module: The Base Class for Neural Networks**" ] }, { "cell_type": "markdown", "id": "15d72ea2-c846-44f5-85d5-bd1990c154bc", "metadata": {}, "source": [ "Every neural network in PyTorch is derived from the `nn.Module` class. 
"- Organization and management of the layers.\n", "- Capabilities for GPU acceleration.\n", "- Implementation of the forward pass.\n", "\n", "When we inherit from `nn.Module`, our custom neural network class benefits from these functionalities.\n", "For more details, you can refer to the official [documentation](https://pytorch.org/docs/stable/generated/torch.nn.Module.html).\n", "\n", "> **Task:** Familiarize yourself with the structure of a simple neural network provided below. Later, you'll be enriching it." ] }, { "cell_type": "code", "execution_count": null, "id": "425abefe-54b9-4944-bc6e-cc78de892c66", "metadata": { "tags": [] }, "outputs": [], "source": [ "import torch.nn as nn\n", "\n", "class SimpleNet(nn.Module):\n", "    def __init__(self, input_size, hidden_size, output_size):\n", "        super(SimpleNet, self).__init__()\n", "        # Define layers here\n", "\n", "    def forward(self, x):\n", "        # Call the layers in the correct order here\n", "        return x" ] }, { "cell_type": "markdown", "id": "892e3b55-097b-436e-bbf8-a380fd7d9e35", "metadata": { "tags": [] }, "source": [ "### **Linear Layers: Making Connections**" ] }, { "cell_type": "markdown", "id": "564c17bb-543f-42f6-8c5d-b855ccaf71e6", "metadata": {}, "source": [ "In PyTorch, a linear layer performs an affine transformation. It has both weights and biases which get updated during training. The transformation it performs can be described as:\n", "\n", "$ y = xA^T + b $\n", "\n", "Where:\n", "- $x$ is the input\n", "- $A$ represents the weights\n", "- $b$ is the bias\n", "\n", "The `nn.Linear` class in PyTorch creates such a layer.\n", "\n", "[Documentation Link for nn.Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html)\n", "\n",
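"As a quick shape sanity-check (illustrative sizes, not part of the task below):\n", "\n", "```python\n", "layer = nn.Linear(5, 3)      # in_features=5, out_features=3\n", "x = torch.randn(2, 5)        # a batch of 2 samples with 5 features each\n", "print(layer(x).shape)        # torch.Size([2, 3])\n", "print(layer.weight.shape)    # torch.Size([3, 5]), i.e. A in y = xA^T + b\n", "print(layer.bias.shape)      # torch.Size([3])\n", "```\n", "\n",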
\n", ">\n", "> - The input layer should transform from `input_size` to `hidden_size`.\n", "> - The output layer should transform from `hidden_size` to `output_size`.\n", "> - After defining the layers in the `__init__` method, call them in the `forward` method to perform the transformations.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "daa8829a-05e9-474e-b6e6-c7f749e22295", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Modify the below code by adding input and output linear layers in the appropriate places\n", "\n", "class SimpleNet(nn.Module):\n", " def __init__(self, input_size, hidden_size, output_size):\n", " super(SimpleNet, self).__init__()\n", " # Define layers here\n", "\n", " def forward(self, x):\n", " # Call the layers in the correct order here\n", " return x\n" ] }, { "cell_type": "markdown", "id": "c5038840-2713-4492-b7ab-c70469a2e96e", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "To define the input and output linear layers, use the `nn.Linear` class in the `__init__` method:\n", "\n", "Then, in the `forward` method, pass the input through the defined layers.\n", "\n", "```python\n", "class SimpleNet(nn.Module):\n", " def __init__(self, input_size, hidden_size, output_size):\n", " super(SimpleNet, self).__init__()\n", " self.input_layer = nn.Linear(input_size, hidden_size)\n", " self.output_layer = nn.Linear(hidden_size, output_size)\n", "\n", " def forward(self, x):\n", " x = self.input_layer(x)\n", " x = self.output_layer(x)\n", " return x\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "c2bb82c9-8949-4472-84fe-def36c514150", "metadata": { "tags": [] }, "source": [ "### **Activation Functions: Introducing Non-Linearity**" ] }, { "cell_type": "markdown", "id": "d989e2d8-5530-45f3-8664-e0d1b9eb627a", "metadata": {}, "source": [ "Activation functions are critical components in neural networks, introducing non-linearity between layers. This non-linearity allows networks to learn from the error and make adjustments, which is essential for learning complex patterns.\n", "\n", "In PyTorch, many activation functions are available as part of the `torch.nn` module, such as ReLU, Sigmoid, and Tanh.\n", "\n", "For our `SimpleNet` model, we'll use the ReLU (Rectified Linear Unit) activation function after the input layer. The ReLU function is defined as \\(f(x) = max(0, x)\\).\n", "\n", "Learn more about [ReLU and other activation functions in the official documentation](https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity).\n", "\n", "> **Task**: Update your `SimpleNet` class to include the ReLU activation function after the input layer. For this, you'll need to both define the activation function in `__init__` and apply it in the `forward` method.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "9e426301-5a55-46a2-8305-241b8f1ca4bf", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Copy the previous SimpleNet definition and modify the code to include the ReLU activation function." ] }, { "cell_type": "markdown", "id": "212ef244-f7bf-49a2-b4c9-b1b90af315de", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "To include the ReLU activation in your neural network:\n", "\n", "1. Define the ReLU activation function in the `__init__` method.\n", "2. 
"\n", "```python\n", "class SimpleNet(nn.Module):\n", "    def __init__(self, input_size, hidden_size, output_size):\n", "        super(SimpleNet, self).__init__()\n", "        self.input_layer = nn.Linear(input_size, hidden_size)\n", "        self.relu = nn.ReLU() # Defining the ReLU activation function\n", "        self.output_layer = nn.Linear(hidden_size, output_size)\n", "\n", "    def forward(self, x):\n", "        x = self.input_layer(x)\n", "        x = self.relu(x) # Applying the ReLU activation function\n", "        x = self.output_layer(x)\n", "        return x\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "640ef2f4-6816-4c5e-955c-c14c33349512", "metadata": {}, "source": [ "#### **Adjusting the Network: Adding Dropout**" ] }, { "cell_type": "markdown", "id": "e5596abf-b262-461d-ad5f-6a3488a79a42", "metadata": { "tags": [] }, "source": [ "[Dropout](https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html) is a regularization technique that can improve generalization in neural networks. It works by randomly setting a fraction of input units to 0 at each update during training time. \n", "\n", "> **Task**: Modify the `SimpleNet` class to include a dropout layer with a dropout probability of 0.5 between the input layer and the output layer. Don't forget to call this layer in the forward method. \n", ">\n", "> Remember, after modifying the class structure, you'll need to re-instantiate your model object." ] }, { "cell_type": "code", "execution_count": null, "id": "1c68ffd4-1de6-4d77-a15f-705b24c924af", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Add a dropout layer to your previous code" ] }, { "cell_type": "markdown", "id": "d78c2dab-95c1-441c-b661-80bfba9a2dfd", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "Here's how you can modify the SimpleNet class to include dropout, keeping the ReLU activation from before:\n", "\n", "```python\n", "class SimpleNet(nn.Module):\n", "    def __init__(self, input_size, hidden_size, output_size):\n", "        super(SimpleNet, self).__init__()\n", "        self.input_layer = nn.Linear(input_size, hidden_size)\n", "        self.relu = nn.ReLU()\n", "        self.dropout = nn.Dropout(0.5)\n", "        self.output_layer = nn.Linear(hidden_size, output_size)\n", "\n", "    def forward(self, x):\n", "        x = self.input_layer(x)\n", "        x = self.relu(x)\n", "        x = self.dropout(x)\n", "        return self.output_layer(x)\n", "\n", "model = SimpleNet(input_size, hidden_size, output_size).to(device)\n", "```\n", "Don't forget to create a new instance of your model, as in the last line above.\n", "</details>" ] }, { "cell_type": "markdown", "id": "ce1cb22c-8288-4c69-9dcb-56896de49794", "metadata": { "tags": [] }, "source": [ "### **Utilizing the Neural Network**" ] }, { "cell_type": "markdown", "id": "255c3bf2-419d-4d14-82d6-7959e9280670", "metadata": { "tags": [] }, "source": [ "Once our neural network is defined, it's time to put it to use. This section will cover:\n", "\n", "1. Instantiating the network\n", "2. Transferring the network to GPU (if available)\n", "3. Making predictions using the network (forward pass)\n", "4. Understanding training and evaluation modes\n", "5. Performing a backward pass to compute gradients" ] }, { "cell_type": "markdown", "id": "9f28cee5-c7a0-48c5-8341-6da6fae516c5", "metadata": { "tags": [] }, "source": [ "#### **1. Instantiating the Network**" ] },
{ "cell_type": "markdown", "id": "0760bef6-d77a-4b7b-b5c7-18b208d93b98", "metadata": {}, "source": [ "\n", "To use our `SimpleNet`, we first need to create an instance of it. While creating an instance, the network's weights are also initialized.\n", "\n", "> **Task**: Instantiate the `SimpleNet` class. Use `input_size=5`, `hidden_size=3`, and `output_size=1` as parameters." ] }, { "cell_type": "code", "execution_count": null, "id": "ae9bfc87-5b09-476c-b32b-92c09f992fe3", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Your code here: Instantiate the model" ] }, { "cell_type": "markdown", "id": "f951e5d2-e0b4-451d-9a9b-44256f8a224c", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To instantiate the SimpleNet class:\n", "\n", "```python\n", "model = SimpleNet(input_size=5, hidden_size=3, output_size=1)\n", "print(model)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "35567e41-6de6-429b-be4b-a14598313aca", "metadata": { "tags": [] }, "source": [ "#### **2. Transferring the Network to GPU**\n" ] }, { "cell_type": "markdown", "id": "b3f3b3c3-4d7a-46db-9634-1e14b277c808", "metadata": { "tags": [] }, "source": [ "\n", "PyTorch makes it very straightforward to transfer our model to a GPU if one is available. This is done using the `.to()` method.\n", "\n", "> **Task**: Check if GPU (CUDA) is available. If it is, transfer the model to the GPU." ] }, { "cell_type": "code", "execution_count": null, "id": "91cb61a0-d890-4697-88d9-7749ea2bf144", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Check for GPU availability and transfer the model to GPU if available." ] }, { "cell_type": "markdown", "id": "8a405f2d-3d8d-4e4c-90d1-54a05ff08b90", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To transfer the model to the GPU if it's available:\n", "\n", "```python\n", "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "model = model.to(device)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "175ab7cc-cddf-4460-ab01-f0193c2908d7", "metadata": { "tags": [] }, "source": [ "#### **3. Making Predictions using the Network (Forward Pass)**" ] }, { "cell_type": "markdown", "id": "e3724444-e0a6-48b0-8872-0b53b000a3bd", "metadata": {}, "source": [ "With our model instantiated and potentially on a GPU, we can use it to make predictions. This involves passing some input data through the model, which is commonly referred to as a forward pass.\n", "\n", "> **Task**: Create a tensor of size [1, 5] (representing one sample with five features) with random values. Transfer this tensor to the same device as your model (GPU or CPU). Then, pass this tensor through your model to get the prediction."
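, "\n", "\n", "A side note: when you only need predictions and no gradients (pure inference), it is idiomatic to wrap the forward pass in `torch.no_grad()`. A minimal sketch, assuming the `model` and `device` defined above:\n", "\n", "```python\n", "with torch.no_grad():\n", "    sample = torch.randn(1, 5).to(device)\n", "    prediction = model(sample)\n", "print(prediction)\n", "```"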
] }, { "cell_type": "code", "execution_count": null, "id": "00e818ee-72e0-4960-a87e-a27b771d58eb", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create a tensor, transfer it to the right device, and perform a forward pass.\n" ] }, { "cell_type": "markdown", "id": "8bc38fde-0c14-45a6-b237-76ec7beab7f0", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To make predictions using your model:\n", "\n", "```python\n", "\n", "# Create a tensor with random values\n", "input_tensor = torch.randn(1, 5).to(device)\n", "\n", "# Pass the tensor through the model\n", "output = model(input_tensor)\n", "print(output)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "fad9f46f-b591-4a2f-b2bf-3b4cf54cf961", "metadata": { "tags": [] }, "source": [ "#### **4. Understanding Training and Evaluation Modes**" ] }, { "cell_type": "markdown", "id": "2f197278-8d74-4a69-8da9-caf3f952e7bc", "metadata": {}, "source": [ "Every PyTorch model has two modes:\n", "- `train` mode: In this mode, certain layers like dropout or batch normalization behave differently than during evaluation. For instance, dropout will randomly set a fraction of input units to 0 at each update during training.\n", "- `eval` mode: Here, the model behaves in a deterministic manner. Dropout layers don't drop activations, and batch normalization uses the entire dataset's statistics instead of the current mini-batch's statistics.\n", "\n", "Setting the model to the correct mode is crucial. Let's demonstrate this.\n", "\n", "> **Task**: Set your model to `train` mode, then perform a forward pass using the same input tensor multiple times and observe the outputs. Then, set your model to `eval` mode and repeat. Notice any differences?" ] }, { "cell_type": "code", "execution_count": null, "id": "4c2d921d-d409-4ae6-8ee4-8376fc9a209d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Perform the forward passes multiple times with the same input in both modes and observe the outputs." ] }, { "cell_type": "markdown", "id": "0dbd65fa-b86b-4516-9fb1-aceae0c9d8a3", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "Here's how you can demonstrate the difference:\n", "\n", "```python\n", "# Set to train mode\n", "model.train()\n", "\n", "# Forward pass multiple times\n", "print(\"Train mode:\")\n", "for i in range(5):\n", " print(model(input_tensor))\n", "\n", "# Set to eval mode\n", "model.eval()\n", "print(\"Eval mode:\")\n", "# Forward pass multiple times\n", "for i in range(5):\n", " print(model(input_tensor))\n", "```\n", " \n", "If there were layers like dropout in your model, you'd notice that the outputs in training mode might differ on each pass, while in evaluation mode, they remain consistent.\n", "</details>" ] }, { "cell_type": "markdown", "id": "e8c55be3-71f7-45e7-91d1-c556e8108fef", "metadata": { "tags": [] }, "source": [ "## **The Training Procedure in PyTorch**" ] }, { "cell_type": "markdown", "id": "eac54af7-c8db-4a19-861b-2eecf68fb44e", "metadata": { "tags": [] }, "source": [ "Training a neural network involves several key components: defining a loss function to measure errors, selecting an optimization method to adjust the model's weights, and iterating over the dataset multiple times. In this section, we will break down these components step by step, starting with the basics and moving towards more complex tasks." 
] }, { "cell_type": "markdown", "id": "3e9231a9-105c-4aed-bfa5-846ddc07245f", "metadata": { "tags": [] }, "source": [ "### **Datasets and DataLoaders: Handling and Batching Data**" ] }, { "cell_type": "markdown", "id": "8dbc3fcf-5a29-4fd8-9e82-3eaae4c8dc90", "metadata": {}, "source": [ "In PyTorch, the torch.utils.data.Dataset class is used to represent a dataset. This abstract class requires the implementation of two primary methods: __len__ (to return the number of items) and __getitem__ (to return the item at a given index). However, PyTorch provides a utility class, TensorDataset, that wraps tensors in the dataset format, making it easier to use with the DataLoader.\n", "\n", "The torch.utils.data.DataLoader class is a more powerful tool, responsible for:\n", "\n", "- Batching the data\n", "- Shuffling the data\n", "- Loading the data in parallel using multiprocessing workers\n", "\n", "Let's wrap some data in a Dataset and use a DataLoader to handle batching and shuffling.\n", "\n", "> **Task**: Convert the input and target tensors into a dataset and dataloader. For this exercise, set the batch size to 32.\n", "\n", "Below we define synthetic data that is learnable.\n", "This way, we're essentially modeling the relationship $y=mx+c+noise$ where:\n", "- $y$ is the target or output.\n", "- $m$ is the slope of the line.\n", "- $c$ is the y-intercept.\n", "- $x$ is the input.\n", "- $noise$ is a small random value added to each point to make the data more realistic." ] }, { "cell_type": "code", "execution_count": null, "id": "f8335e62-e0c0-4381-9c20-1ca8ed78516c", "metadata": { "tags": [] }, "outputs": [], "source": [ "num_samples = 1000\n", "\n", "# Define the relationship\n", "m = 2.0\n", "c = 1.0\n", "noise_factor = 0.05\n", "\n", "\n", "\n", "# Generate input tensor\n", "input_tensor = torch.linspace(-10, 10, num_samples).view(-1, 1)\n", "\n", "# Generate target tensor based on the relationship\n", "target_tensor = m * input_tensor + c + noise_factor * torch.randn(num_samples, 1)\n", "import matplotlib.pyplot as plt\n", "plt.figure(figsize=(10,6))\n", "plt.scatter(input_tensor.numpy(), target_tensor.numpy(), color='blue', marker='o')\n", "plt.title(\"Synthetic Data Visualization\")\n", "plt.xlabel(\"Input\")\n", "plt.ylabel(\"Target\")\n", "plt.grid(True)\n", "plt.show()\n" ] }, { "cell_type": "code", "execution_count": null, "id": "9535ad7e-6534-491b-b38d-b61cdd60b39d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Convert our data into a dataset\n", "# ...\n", "\n", "# Create a data loader for mini-batch training\n", "# ..." ] }, { "cell_type": "markdown", "id": "da99866e-ebd0-403d-8159-8a36d601bf09", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "Use the TensorDataset class from torch.utils.data to wrap your tensors in a dataset format. After defining your dataset, you can use the DataLoader class to create an iterator that will return batches of data.\n", " \n", "```python\n", "from torch.utils.data import DataLoader, TensorDataset\n", "\n", "# Convert our data into a dataset\n", "dataset = TensorDataset(input_tensor, target_tensor)\n", "\n", "# Create a data loader for mini-batch training\n", "batch_size = 32\n", "data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "ea5aee0c-6c8a-485f-b099-9844a28bafa3", "metadata": { "tags": [] }, "source": [ "> **Task**: Explore the `dataset` and `data_loader`:\n", "> 1. 
{ "cell_type": "markdown", "id": "ea5aee0c-6c8a-485f-b099-9844a28bafa3", "metadata": { "tags": [] }, "source": [ "> **Task**: Explore the `dataset` and `data_loader`:\n", "> 1. Print the total number of samples in the dataset and DataLoader.\n", "> 2. Iterate one time over both and print the shape of items you retrieve." ] }, { "cell_type": "code", "execution_count": null, "id": "244a8198-60c5-4154-93ab-3d96fbf3488a", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Total number of samples\n", "# ...\n", "\n", "# Dataset elements\n", "# ...\n", "\n", "# DataLoader elements\n", "# ..." ] }, { "cell_type": "markdown", "id": "882438f7-3cc7-4a20-a223-41ede7856ef4", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "When you iterate over the dataset, each item you get from the iteration is a tuple of (input, target), so you should retrieve two tensors, each of shape [1].\n", "\n", "On the other hand, when you iterate over the data_loader, each item you get from the iteration is a mini-batch of data. Thus, the leading dimension of each batch should correspond to the batch size you've set (i.e., 32 in our case), except possibly for the last batch if the dataset size isn't a perfect multiple of the batch size.\n", "\n", "```python\n", "# Total number of samples\n", "print(f\"Total samples in dataset: {len(dataset)}\")\n", "print(f\"Total batches in DataLoader: {len(data_loader)}\")\n", "\n", "# Dataset elements\n", "(index, (data, target)) = next(enumerate(dataset))\n", "print(f\"Sample {index}: Data shape {data.shape}, Target shape {target.shape}\")\n", "\n", "# DataLoader elements\n", "(index, (batch_data, batch_target)) = next(enumerate(data_loader))\n", "print(f\"Batch {index}: Data shape {batch_data.shape}, Target shape {batch_target.shape}\")\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "8dc08bb3-e5b2-4a7d-be10-6adc496a812d", "metadata": { "tags": [] }, "source": [ "### **Splitting the Dataset: Training, Validation, and Testing Sets**\n" ] }, { "cell_type": "markdown", "id": "659a4899-cb14-4a47-b990-ea1a77592102", "metadata": {}, "source": [ "When training neural networks, it's common to split the dataset into at least two sets:\n", "\n", "1. **Training Set**: This set is used to train the model, i.e., adjust the weights using gradient descent.\n", "2. **Validation Set** (optional, but often used): This set is used to evaluate the model during training, allowing for hyperparameter tuning without overfitting.\n", "3. **Test Set**: This set is used to evaluate the model's performance after training, providing an unbiased assessment of its performance on new, unseen data.\n", "\n",
**Test Set**: This set is used to evaluate the model's performance after training, providing an unbiased assessment of its performance on new, unseen data.\n", "\n", "In PyTorch, we can use the `random_split` function from `torch.utils.data` to easily split datasets.\n", "\n", "First, let's define the lengths for each split:" ] }, { "cell_type": "code", "execution_count": null, "id": "32202871-2911-44e6-8ad6-6d848cb3ede0", "metadata": { "tags": [] }, "outputs": [], "source": [ "total_samples = len(dataset)\n", "train_size = int(0.8 * total_samples)\n", "val_size = total_samples - train_size" ] }, { "cell_type": "markdown", "id": "a1f7a839-8ee0-460f-bef0-87ca30f7409e", "metadata": {}, "source": [ "> **Task**: Using the random_split function, split the dataset into a training set and a validation set using the sizes provided above.\n", "[Here's the documentation for random_split](https://pytorch.org/docs/stable/data.html#torch.utils.data.random_split).\n", "> **Task**: Create the train_loader and val_loader" ] }, { "cell_type": "code", "execution_count": null, "id": "50a80fc9-ef6e-4118-ad6a-3dea9d16e94f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Splitting the dataset\n" ] }, { "cell_type": "markdown", "id": "b01bb0d7-17c0-4edd-a2b6-17e4ca74b2aa", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "```python\n", "\n", "# Splitting the dataset\n", "from torch.utils.data import random_split\n", "train_dataset, val_dataset = random_split(dataset, [train_size, val_size])\n", "train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)\n", "val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False)\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "e2729431-701c-4451-931c-2ae0ed58dbb5", "metadata": { "tags": [] }, "source": [ "> **Task**: Now, using the provided training and validation datasets, print out the number of samples in each set. Also, fetch one sample from each set and print its shape.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "770c42f6-7a52-4856-a4fe-23a60666389a", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "id": "583948e8-898a-4336-92c6-aaddef6adbcf", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "```python\n", "\n", "# Print number of samples in each set\n", "print(f\"Number of training samples: {len(train_dataset)}\")\n", "print(f\"Number of validation samples: {len(val_dataset)}\")\n", "\n", "# Fetching one sample from each set and printing its shape\n", "train_sample, train_target = train_dataset[0]\n", "print(f\"Training sample shape: {train_sample.shape}, Target shape: {train_target.shape}\")\n", "\n", "val_sample, val_target = val_dataset[0]\n", "print(f\"Validation sample shape: {val_sample.shape}, Target shape: {val_target.shape}\")\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "0fdec6d6-9b32-457d-b8e6-d94d8e020e4f", "metadata": { "tags": [] }, "source": [ "### **Loss Functions: Measuring Model Errors**" ] }, { "cell_type": "markdown", "id": "899ce66c-e878-4f6a-b37c-34cdeae438a1", "metadata": {}, "source": [ "Every training process needs a metric to determine how well the model's predictions align with the actual data. This metric is called the loss function or cost function. 
PyTorch provides many [loss functions](https://pytorch.org/docs/stable/nn.html#loss-functions) suitable for different types of tasks.\n", "\n", "Different problems might require different loss functions. PyTorch provides a variety of [loss functions](https://pytorch.org/docs/stable/nn.html#loss-functions) suited for different tasks. For instance:\n", "- **Mean Squared Error (MSE)**: Commonly used for regression tasks.\n", "- **Cross-Entropy Loss**: Suited for classification tasks.\n", "\n", "\n", "For a simple regression task, a common choice is the Mean Squared Error (MSE) loss. \n", "\n", "> **Task**: Familiarize yourself with the [MSE loss documentation](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html). You will soon use it in the training loop.\n", "\n", "> **Task**: Instantiate the Mean Squared Error (MSE) loss provided by PyTorch for our current neural network." ] }, { "cell_type": "code", "execution_count": null, "id": "692e83d7-7382-4ab2-9caf-daa3a77bfd4d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Define the loss function.\n" ] }, { "cell_type": "markdown", "id": "7fe8dcb5-8a43-4561-88a0-a4a2a2d1bf53", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To define the MSE loss in PyTorch, you can use:\n", "\n", "```python\n", "\n", "criterion = nn.MSELoss()\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "e957d999-0a56-4320-808a-05d1af6b81c7", "metadata": { "tags": [] }, "source": [ "### **Optimizers: Adjusting Weights**" ] }, { "cell_type": "markdown", "id": "d3d4a09d-8838-4fd3-9e16-bfdc5018abde", "metadata": {}, "source": [ "Optimizers adjust the weights of the network based on the gradients computed during backpropagation. Different optimizers might update weights in varying ways. For example, the popular **Stochastic Gradient Descent (SGD)** optimizer simply updates weights in the direction of negative gradients, while **Adam** and **RMSprop** are more advanced optimizers that consider aspects like momentum and weight decay.\n", "\n", "PyTorch offers a wide range of [optimizers](https://pytorch.org/docs/stable/optim.html). \n", "\n", "\n", "> **Task**: Review the [SGD optimizer documentation](https://pytorch.org/docs/stable/optim.html#torch.optim.SGD). It will be pivotal in the training loop you'll construct.\n", "\n", "> **Task**: For this exercise, let's use the SGD optimizer. 
Instantiate it, setting our neural network parameters as the ones to be optimized and choosing a learning rate of 0.01.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "id": "39c8dfa8-7ea0-44e4-9429-118a6333bfe1", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Define the optimizer.\n" ] }, { "cell_type": "markdown", "id": "05e37f67-519a-4c49-97b3-2fafb7176de1", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To define the SGD optimizer in PyTorch, you can use:\n", "\n", "```python\n", "optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)\n", "```\n", "Because of how simple the task is, you will probably need a really small learning rate to reach good results.\n", "</details>\n", "\n" ] }, { "cell_type": "markdown", "id": "13b2fb3e-5391-4e66-ba83-55e66935d2aa", "metadata": { "tags": [] }, "source": [ "### **Setting Up the Basic Training Loop Function**" ] }, { "cell_type": "markdown", "id": "7a364925-b4d9-4ffd-b3f8-be30a5bb1613", "metadata": { "jp-MarkdownHeadingCollapsed": true, "tags": [] }, "source": [ "Having a training loop within a function allows us to reuse the same code structure for different models, datasets, or other training parameters without redundancy. This modular approach also promotes code clarity and maintainability.\n", "\n", "Let's define the training loop function which takes the model, data (inputs and targets), loss function, optimizer, and the number of epochs as parameters. The function should return the history of the loss after each epoch.\n", "\n", "A typical training loop consists of:\n", "1. Sending the input through the model (forward pass).\n", "2. Calculating the loss.\n", "3. Propagating the loss backward through the model to compute gradients (backward pass).\n", "4. Updating the weights using the optimizer.\n", "5. Repeating the steps for several epochs.\n", "\n", "\n", "Training with the entire dataset as one batch can be memory-intensive and sometimes not as effective. Hence, in practice, we usually divide our dataset into smaller chunks or mini-batches and update our weights after each mini-batch.\n", "\n", "> **Task**: Create a function named `train_model` that encapsulates the training loop for the `SimpleNet` model. 
The function should follow the signature the next code cell:" ] }, { "cell_type": "code", "execution_count": null, "id": "734864fe-46b6-4435-b58d-19b085ebd3f9", "metadata": { "tags": [] }, "outputs": [], "source": [ "def train_model(model, dataloader, loss_function, optimizer, epochs):\n", " # Your code here\n", " pass" ] }, { "cell_type": "markdown", "id": "a6fee8dc-59da-4d48-918e-d6e093e997e5", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "Here's how the train_model function might look:\n", "```python\n", "\n", "def train_model(model, dataloader, loss_function, optimizer, epochs):\n", " # Store the loss values at each epoch\n", " loss_history = []\n", " \n", " for epoch in range(epochs):\n", " for inputs, targets in dataloader:\n", " # Ensure that data is on the right device\n", " inputs, targets = inputs.to(device), targets.to(device)\n", " \n", " # Reset the gradients to zero\n", " optimizer.zero_grad()\n", " \n", " # Execute a forward pass\n", " outputs = model(inputs)\n", " \n", " # Calculate the loss\n", " loss = loss_function(outputs, targets)\n", " \n", " # Conduct a backward pass\n", " loss.backward()\n", " \n", " # Update the weights\n", " optimizer.step()\n", " \n", " # Append the loss to the history\n", " loss_history.append(loss.item())\n", " \n", " print(f\"Epoch [{epoch+1}/{epochs}], Loss: {loss_history[-1]:.4f}\")\n", " \n", " return loss_history\n", "```\n", "</details>" ] }, { "cell_type": "markdown", "id": "c4e4b485-ffa6-487d-8dbc-b0b0590a796a", "metadata": { "tags": [] }, "source": [ "### **Training the Neural Network**" ] }, { "cell_type": "markdown", "id": "15ba6b07-728f-4444-a3a9-af8cfeb884e1", "metadata": {}, "source": [ "With all the components defined in the previous sections, it's now time to integrate everything and set the training process in motion.\n", "\n", "> **Task**: Combine all the previously defined elements to initiate the training procedure for your neural network model.\n", "> 1. Don't forget to Move your model and to the same device (GPU or CPU).\n", "> 2. 
Train the model using the `train_loader` and `val_loader`.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "id": "90d043f7-213d-42a7-a14b-e6b716003b70", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Your code here to initiate the training process\n" ] }, { "cell_type": "markdown", "id": "398aaeec-5d6d-4ef6-bd24-27d51b32c148", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "To train the model, you need to integrate all the previously defined components:\n", "\n", "```python\n", "# Moving the model to the device\n", "model = SimpleNet(input_size=1, hidden_size=10, output_size=1).to(device)\n", "\n", "# Training the model using the train_loader\n", "loss_history = train_model(model, train_loader, criterion, optimizer, epochs=50)\n", "```\n", "Make sure you have defined the loss_function, optimizer, and epochs in the previous sections.\n", "</details>" ] }, { "cell_type": "code", "execution_count": null, "id": "c7cf3df1-9fe2-4eee-a5bf-386f77b257f1", "metadata": { "tags": [] }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "# Plotting the loss curve\n", "plt.figure(figsize=(10,6))\n", "plt.plot(loss_history, label='Training Loss')\n", "plt.title(\"Loss Curve\")\n", "plt.xlabel(\"Epochs\")\n", "plt.ylabel(\"Loss\")\n", "plt.legend()\n", "plt.grid(True)\n", "plt.show()\n" ] }, { "cell_type": "markdown", "id": "2b7f9d87-c172-427c-a2f4-1090b1120148", "metadata": { "tags": [] }, "source": [ "## **Conclusion: Moving Beyond the Basics**" ] }, { "cell_type": "markdown", "id": "6074877c-c149-4af9-8503-153455edd42a", "metadata": {}, "source": [ "\n", "You've now built and trained a simple neural network using PyTorch, and you might be wondering: why aren't my results as good as I expected?\n", "\n", "While you've certainly made strides, the journey of mastering deep learning and neural networks is filled with nuance, challenges, and constant learning. Here are some reasons why your results might not be optimal and what you'll discover in your next steps:\n", "\n", "1. **Hyperparameters Tuning**: So far, we've set values like learning rate and batch size somewhat arbitrarily. These values are critical and often require careful tuning specific to each problem. \n", "\n", "2. **Learning Rate Scheduling**: A fixed learning rate might not always be the best strategy. Reducing the learning rate during training, known as learning rate annealing or scheduling, often leads to better convergence.\n", "\n", "3. **Model Architecture**: The neural network we built is basic. There's an entire world of architectures out there, designed for specific types of data and tasks. The right architecture can make a significant difference.\n", "\n", "4. **Regularization**: To prevent overfitting, techniques like dropout, weight decay, and early stopping can be applied. We haven't touched upon these, but they're crucial for ensuring your model generalizes well to unseen data.\n", "\n", "5. **Data Quality and Quantity**: While we used synthetic data for simplicity, real-world data is messy. Cleaning and preprocessing data, augmenting it, and ensuring it's representative can have a significant impact on performance.\n", "\n", "6. **Optimization Techniques**: There are advanced optimization algorithms and techniques that can speed up training and lead to better convergence. Techniques like momentum, adaptive learning rates (e.g., Adam, RMSprop) can play a crucial role.\n", "\n", "7. 
**Evaluation Metrics**: We've looked at loss values, but in real-world scenarios, understanding and selecting the right evaluation metrics for the task (accuracy, F1-score, AUC-ROC, etc.) is vital. \n", "\n", "8. **Training Dynamics**: Understanding how models train, visualizing the activations, weights, and gradients, and knowing when and why a model is struggling can offer insights into how to improve performance.\n", "\n", "Remember, while the mechanics of building and training a neural network are essential, the art of deep learning lies in understanding the nuances and iterating based on insights and knowledge. The next steps in your learning, focusing on methodology, will provide the tools and knowledge to navigate these complexities and achieve better results.\n", "\n", "Keep learning, experimenting, and iterating! The world of deep learning is vast, and there's always something new to discover." ] }, { "cell_type": "markdown", "id": "ca6048e4-f3cf-40eb-bd50-c95f281f0554", "metadata": { "tags": [] }, "source": [ "## **Extra for the Fast Movers: Diving Deeper**" ] }, { "cell_type": "markdown", "id": "46a25dfd-1cc9-444d-98d6-966e7cc9da07", "metadata": {}, "source": [ "To further enhance your understanding and capability with PyTorch, this section introduces additional topics that cater to more advanced use-cases. These tools and techniques can be essential when dealing with larger and more complex projects, providing valuable insights into optimization and performance." ] }, { "cell_type": "markdown", "id": "30edeed8-321b-4b1f-ace6-0decd8a167e5", "metadata": { "tags": [] }, "source": [ "### **Profiling with PyTorch Profiler in TensorBoard**" ] }, { "cell_type": "markdown", "id": "256bd4a2-aa6f-4a50-9c5d-854ca25293de", "metadata": {}, "source": [ "PyTorch, starting from version 1.9.0, incorporates the PyTorch Profiler as a TensorBoard plugin. This integration allows users to profile their PyTorch code and visualize the results directly within TensorBoard.\n", "Below, we will be instrumenting PyTorch Code for TensorBoard Profiling.\n", "\n", "Use this [documentation](http://www.idris.fr/jean-zay/pre-post/profiler_pt.html) to achieve the next tasks.\n", "\n", "> **Task:** Before instrumenting your PyTorch code, you'll need to import the necessary modules for profiling.\n", "\n", "> **Task:** Modify the training loop to invoke the profiler. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "86b471a6-7de6-40f0-af58-c41e8e8acbae", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Your imports here\n", "\n", "# Your code here\n", "def train_model_with_profiling(model, train_loader, criterion, optimizer, epochs, profiler_dir='./profiler'):\n", " # Your code here\n", " pass" ] }, { "cell_type": "markdown", "id": "f389816a-fa2a-4668-9f0b-07d2a5abf5e1", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "```python\n", "from torch.profiler import profile, tensorboard_trace_handler, ProfilerActivity, schedule\n", "\n", "def train_model_with_profiling(model, dataloader, loss_function, optimizer, epochs, profiler_dir='./profiler'):\n", " # Store the loss values at each epoch\n", " loss_history = []\n", " \n", " with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],\n", " schedule=schedule(wait=1, warmup=1, active=12, repeat=1),\n", " on_trace_ready=tensorboard_trace_handler(profiler_dir)) as prof:\n", " for epoch in range(epochs):\n", " for inputs, targets in dataloader:\n", " # Ensure that data is on the right device\n", " inputs, targets = inputs.to(device), targets.to(device)\n", " \n", " # Reset the gradients to zero\n", " optimizer.zero_grad()\n", " \n", " # Execute a forward pass\n", " outputs = model(inputs)\n", " \n", " # Calculate the loss\n", " loss = loss_function(outputs, targets)\n", " \n", " # Conduct a backward pass\n", " loss.backward()\n", " \n", " # Update the weights\n", " optimizer.step()\n", " \n", " # Append the loss to the history\n", " loss_history.append(loss.item())\n", " \n", " # Notify profiler of step boundary\n", " prof.step()\n", " \n", " print(f\"Epoch [{epoch+1}/{epochs}], Loss: {loss_history[-1]:.4f}\")\n", " \n", " return loss_history\n", "```\n", "Make sure you have defined the loss_function, optimizer, and epochs in the previous sections.\n", "</details>" ] }, { "cell_type": "code", "execution_count": null, "id": "cb82f0a9-522f-4746-87f9-ba7b7952d863", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Training the model using the train_loader\n", "loss_history = train_model_with_profiling(model, train_loader, criterion, optimizer, 10, profiler_dir='./profiler')" ] }, { "cell_type": "markdown", "id": "313e4f40-521a-4beb-a278-c1ca9502b499", "metadata": {}, "source": [ "> **Task:** Visualize the profiling, you will need to open a Tensorboard interface using the Blue button on the top left corner.\n", ">\n", "> **Make sur to specify the logdir with \"--logid=/path/to/profiler_folder\".**" ] }, { "cell_type": "markdown", "id": "06f86768-3b78-4874-b083-64bc365080fb", "metadata": { "tags": [] }, "source": [ "### **Learning Rate Scheduling**" ] }, { "cell_type": "markdown", "id": "44721444-ba4a-44d0-9b65-16890dd4f097", "metadata": {}, "source": [ "One of the key hyperparameters to tune during neural network training is the learning rate. While it's possible to set a static learning rate for the entire training process, in practice, dynamically adjusting the learning rate often leads to better convergence and overall performance. This dynamic adjustment is often referred to as learning rate scheduling or annealing.\n", "Concept of Learning Rate Scheduling\n", "\n", "The learning rate determines the step size at each iteration while moving towards a minimum of the loss function. If it's too large, the optimization might overshoot the minimum. 
Conversely, if it's too small, the training might get stuck, or convergence could be very slow.\n", "\n", "A learning rate scheduler changes the learning rate during training based on the provided scheduling policy. By adjusting the learning rate during training, you can achieve faster convergence and better final results.\n", "Using Learning Rate Schedulers in PyTorch\n", "\n", "PyTorch provides a variety of learning rate schedulers through the torch.optim.lr_scheduler module. Some of the popular ones are:\n", "- StepLR: Decays the learning rate of each parameter group by gamma every step_size epochs.\n", "- ExponentialLR: Decays the learning rate of each parameter group by gamma every epoch.\n", "- ReduceLROnPlateau: Reduces the learning rate when a metric has stopped improving.\n", "\n", "> **Task:** Take a look at the [documentation]() or click on the hint in the following cell then integrate an LR scheduler in your own code that you wrote before " ] }, { "cell_type": "markdown", "id": "0c79a170-35d0-438f-b01b-a3f236f8b724", "metadata": { "tags": [] }, "source": [ "\n", "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "Below, you have a typical training loop with a learning rate scheduler.\n", " \n", "```python\n", "from torch.optim.lr_scheduler import StepLR\n", "optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n", "scheduler = StepLR(optimizer, step_size=10, gamma=0.1)\n", "for epoch in range(epochs):\n", " for input, target in data:\n", " optimizer.zero_grad()\n", " output = model(input)\n", " loss = loss_fn(output, target)\n", " loss.backward()\n", " optimizer.step()\n", " \n", " # Step the learning rate scheduler\n", " scheduler.step()```\n", "</details>\n" ] }, { "cell_type": "markdown", "id": "33f99f6e-3120-495a-a25b-8b9f3d14deb2", "metadata": { "tags": [] }, "source": [ "### **Automatic Mixed Precision**" ] }, { "cell_type": "markdown", "id": "217a7249-6655-4587-92b8-72dea7de8c9d", "metadata": {}, "source": [ "Training deep neural networks can be both time-consuming and resource-intensive. One way to address this problem is by leveraging mixed precision training. In essence, mixed precision training uses both 16-bit and 32-bit floating-point types to represent numbers in the model, which can speed up training without sacrificing the accuracy of the final model.\n", "\n", "**Overview of AMP (Automatic Mixed Precision)**\n", "\n", "AMP (Automatic Mixed Precision) is a set of utilities provided by PyTorch to enable mixed precision training more effortlessly. The main advantages of AMP are:\n", "- Faster Training: By using reduced precision, the model requires less memory bandwidth, resulting in faster data transfers and faster matrix multiplication.\n", "- Reduced GPU Memory Usage: This enables training of larger models or utilization of larger batch sizes.\n", "\n", "PyTorch has integrated the AMP utilities starting from version 1.6.\n", "\n", "> **Task**: Setup AMP in the training function by checking the [documentation](http://www.idris.fr/eng/ia/mixed-precision-eng.html). You will need to do the necessary imports, initialize the GradScaler, modify the training loop by including \"with autocast():\" around the forward and loss computation." 
] }, { "cell_type": "code", "execution_count": null, "id": "ad131b4b-02ba-472d-af78-a048868e3efc", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "id": "de38cb30-7b24-48cb-b804-ed296e38e3fb", "metadata": { "tags": [] }, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "Below, you have a typical training loop with autocast.\n", " \n", "```python\n", "from torch.cuda.amp import autocast, GradScaler\n", "scaler = GradScaler()\n", "for epoch in epochs:\n", " for input, target in data:\n", " optimizer.zero_grad()\n", " \n", " with autocast():\n", " output = model(input)\n", " loss = loss_fn(output, target)\n", " \n", " scaler.scale(loss).backward()\n", " scaler.step(optimizer)\n", " scaler.update()\n", "```\n", "</details>\n" ] }, { "cell_type": "markdown", "id": "a3f7818a-fea1-4a12-b52a-cd83e0ae2ffe", "metadata": {}, "source": [ "### **Pytorch Compiler**" ] }, { "cell_type": "markdown", "id": "dbb5f69b-009e-40b3-94f0-5a420afbd003", "metadata": {}, "source": [ "**For this section, you will need to use Pytorch with a version superior to 2.0.**\n", "\n", "PyTorch, a widely adopted deep learning framework, has consistently evolved to offer users better performance and ease of use. One such advancement is the introduction of the PyTorch Compiler. This cutting-edge feature accelerates PyTorch code execution by JIT-compiling it into optimized kernels. What's even more impressive is its ability to enhance performance with minimal modifications to the original codebase.\n", "\n", "Historically, PyTorch has introduced compiler solutions like TorchScript and FX Tracing. However, the introduction of torch.compile with PyTorch 2.0 has taken performance optimization to a new level. It provides a seamless experience, enabling you to transform typical PyTorch functions and even torch.nn.Module instances into their faster, compiled counterparts.\n", "\n", "For those eager to dive deep into its workings and benefits, detailed documentation and tutorials have been made available:\n", "- [torch.compile Tutorial](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html)\n", "- [PyTorch 2.0 Release Notes](https://pytorch.org/get-started/pytorch-2.0/)\n", "\n", "> **Task:** Your task is to make your existing PyTorch model take advantage of the performance benefits offered by torch.compile. This will not only make your model run faster but also give you hands-on experience with one of the latest features in PyTorch." ] }, { "cell_type": "markdown", "id": "8d5236bc-08e4-4142-8c9c-fd7007474ff2", "metadata": {}, "source": [ "<details>\n", "<summary>Hint (click to reveal)</summary>\n", "\n", "1. **Ensure Dependencies**:\n", " - Ensure that you have the required dependencies, especially PyTorch version 2.0 or higher.\n", "\n", "2. **Check for GPU Compatibility**:\n", " - For optimal performance, it's recommended to use a modern NVIDIA GPU (H100, A100, or V100).\n", "\n", "3. **Compile Functions**:\n", " - You can optimize arbitrary Python functions as shown in the example:\n", " ```python\n", " def your_function(x, y):\n", " # ... Your PyTorch code here ...\n", " opt_function = torch.compile(your_function)\n", " ```\n", "\n", " - Alternatively, use the decorator approach:\n", " ```python\n", " @torch.compile\n", " def opt_function(x, y):\n", " # ... Your PyTorch code here ...\n", " ```\n", "\n", "4. 
**Compile Modules**:\n", " - If you have a PyTorch module (a class derived from `torch.nn.Module`), you can compile it similarly:\n", " ```python\n", " class YourModule(torch.nn.Module):\n", " # ... Your module definition here ...\n", "\n", " model = YourModule()\n", " opt_model = torch.compile(model)\n", " ```\n", "\n", "</details>" ] }, { "cell_type": "markdown", "id": "bd4066a6-3f24-4b63-b2be-da0350ec6145", "metadata": {}, "source": [ "Remember, while torch.compile optimizes performance, the underlying logic remains the same. Ensure to test and validate your compiled model's outputs against the original to confirm consistent behavior." ] }, { "cell_type": "markdown", "id": "4340d5df", "metadata": {}, "source": [ "---\n", "<img width=\"80px\" src=\"../fidle/img/logo-paysage.svg\"></img>" ] } ], "metadata": { "kernelspec": { "display_name": "pytorch-gpu-2.0.1_py3.10.12", "language": "python", "name": "module-conda-env-pytorch-gpu-2.0.1_py3.10.12" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 5 }