No submission required

Google Colaboratory link for working online: CIFAR10.ipynb (Open with Colaboratory > Open in Playground Mode)

In this tutorial, we will learn how to classify real images using the same LeNet architecture we used for MNIST, this time implemented in PyTorch with its autograd feature, so there is no need to define the backward pass or the weight updates manually. Please pay careful attention to the train function. We will also try state-of-the-art models such as ResNet.

The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.

  1. Understanding the architecture (a reference sketch follows this item).

    • How many kernels are in the convolutional layers?

    • How many fully connected layers are there, and how large are the weight matrices?
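
For reference, here is a minimal sketch of a LeNet-style network for CIFAR-10 in PyTorch. The kernel counts (6 and 16 kernels of size 5x5) and fully connected sizes (weight matrices of 400x120, 120x84 and 84x10) follow the classic LeNet-5 layout and may differ slightly from the notebook:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    """LeNet-style CNN for 3x32x32 CIFAR-10 inputs."""
    def __init__(self, num_classes=10):
        super().__init__()
        # First conv layer: 6 kernels of size 5x5 over 3 input channels.
        self.conv1 = nn.Conv2d(3, 6, kernel_size=5)
        # Second conv layer: 16 kernels of size 5x5 over 6 channels.
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.pool = nn.MaxPool2d(2, 2)
        # Three fully connected layers; the weight matrices are
        # (16*5*5)x120, 120x84 and 84x10.
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))  # 3x32x32 -> 6x14x14
        x = self.pool(F.relu(self.conv2(x)))  # 6x14x14 -> 16x5x5
        x = torch.flatten(x, 1)               # -> 400 features
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```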

  2. Architecture modifications (a sketch of one variant follows this item). For example, you could vary the number of

    • Feature maps in a convolutional layer

    • Convolutional layers

    • Maxpooling layers

    • Fully connected layers
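
As one illustration of such a modification, here is a hypothetical variant (the name and sizes are illustrative, not from the notebook) that widens the feature maps and pads the convolutions so two pooling stages still leave a usable spatial size:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNetWide(nn.Module):
    """Variant with more feature maps per convolutional layer."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Padding preserves the spatial size, so only pooling shrinks it.
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5, padding=2)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(64 * 8 * 8, 120)
        self.fc2 = nn.Linear(120, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))  # 3x32x32 -> 32x16x16
        x = self.pool(F.relu(self.conv2(x)))  # -> 64x8x8
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)
```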

  3. Training process (see the combined sketch after these bullets)

    • Try adding a batch-normalization layer after each convolution. How does it affect the learning and the final classification error?

    • Add data augmentation (toggle the commented-out lines) and see how it affects the training process.

    • Try to change the weight initialization.

    • Change the optimizer to SGD. After tuning the learning rate, how is the learning affected?
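
Here is a combined sketch of these four changes, assuming the LeNet layout sketched above; the normalization constants, initialization scheme and learning rate are illustrative choices, not the notebook's:

```python
import torch
import torch.nn as nn
import torchvision.transforms as transforms

class LeNetBN(nn.Module):
    """LeNet with a batch-normalization layer after each convolution."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.BatchNorm2d(6),
            nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(6, 16, kernel_size=5), nn.BatchNorm2d(16),
            nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

# Data augmentation for the training set only; the test set should keep
# just ToTensor/Normalize.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

# Custom weight initialization, applied with model.apply(init_weights).
def init_weights(m):
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model = LeNetBN()
model.apply(init_weights)

# Plain SGD with momentum; the learning rate typically needs retuning
# compared to adaptive optimizers such as Adam.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```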

  4. Try state-of-the-art models by replacing the LeNet nn.Module object with other architectures from here and see how the training differs. (Turn on the GPU; see the sketch below.)
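
As a minimal sketch of such a replacement, using torchvision's model zoo: resnet18 is designed for 224x224 ImageNet images, so a common (optional) adjustment for 32x32 CIFAR-10 inputs is to shrink the stem convolution and drop the first max-pooling:

```python
import torch
import torch.nn as nn
import torchvision.models as models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# ResNet-18 with a 10-way output head for CIFAR-10.
model = models.resnet18(num_classes=10)

# Optional CIFAR-style stem: 3x3 stride-1 convolution and no initial
# pooling, so the 32x32 input is not downsampled too aggressively.
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
model.maxpool = nn.Identity()

model = model.to(device)  # train on the GPU if one is available
```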