
Due at 11:59pm on Thursday, Mar. 8. Please submit to Blackboard and follow the general guidelines regarding homework assignments.

  1. Open the ConvolutionBasics notebook. Your task is to fill in the missing parts of the code. Submit the completed notebook.
    • Exercise 1 is intended to help you think about the definition of 1D convolution.

    • Exercise 2 is intended to help you think about how convolution generalizes to 2D (and higher).

    • Exercise 3 illustrates that 1D convolution is equivalent to multiplication by a Toeplitz matrix, which is one way of seeing that a convolutional net is a special case of a neural net (see the sketch after this list).
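    To make the Toeplitz equivalence concrete, here is a minimal NumPy sketch (separate from the notebook; the function names and the kernel-flip convention are illustrative, since the notebook's own convention may be cross-correlation rather than true convolution):

```python
import numpy as np

def conv1d_valid(x, w):
    """'Valid' 1D convolution: dot the flipped kernel with each window of x."""
    n, k = len(x), len(w)
    return np.array([np.dot(x[i:i+k], w[::-1]) for i in range(n - k + 1)])

def toeplitz_for(w, n):
    """Build the Toeplitz matrix T satisfying T @ x == conv1d_valid(x, w)."""
    k = len(w)
    T = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        T[i, i:i+k] = w[::-1]   # flipped kernel, shifted one column per row
    return T

x = np.array([1., 2., 3., 4., 5.])
w = np.array([1., 0., -1.])
assert np.allclose(conv1d_valid(x, w), toeplitz_for(w, len(x)) @ x)
```

    Each row of T holds the flipped kernel shifted one position to the right, so T is constant along its diagonals; the assertion checks that the matrix product agrees with the sliding-window definition. A matrix of this special form is exactly the weight matrix of a fully connected layer with shared weights.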

  2. Design a ConvNet that detects this pattern anywhere in a binary input image. \[ \begin{array}{cccc} 1 &1 &1 &1\\ 1 &0 &0 &1\\ 1 &0 &0 &1\\ 1 &1 &1 &1 \end{array} \] (When detecting this pattern, it doesn’t matter what the input is outside the \(4\times 4\) block.) The output of the ConvNet should be a single feature map that indicates, at each location in the input image, whether the pattern is present there. One constraint on your solution: the maximum allowable kernel size is \(3\times 3\). Use the Heaviside step function as your nonlinearity (since we are modeling Boolean functions and don’t care about backprop here). Write code to implement your ConvNet on \(9\times 9\) input images; a scaffolding sketch follows the bullets below. Put this code in a separate notebook and title it ManualConvNet.

    • In your submitted notebook, one cell should contain the kernels in each layer, along with their biases/thresholds. (Hint: no pooling is necessary.) Explain very briefly why your construction is correct.

    • Show tests of your code on a few input images.
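    Here is a NumPy scaffolding sketch of one convolution-plus-Heaviside layer. To be clear, the kernel and bias below are placeholders for illustration, not a solution to the exercise, and the convention \(H(0)=0\) for the step function is a choice you are free to change:

```python
import numpy as np

def heaviside(x):
    # Step nonlinearity with the convention H(0) = 0.
    return (x > 0).astype(float)

def conv_layer(image, kernel, bias):
    """'Valid' 2D cross-correlation followed by a Heaviside step.
    Returns a feature map of shape (H - kh + 1, W - kw + 1)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return heaviside(out + bias)

# Placeholder usage on a 9x9 binary image: this kernel/bias pair merely
# detects an all-ones 3x3 patch (sum 9 > 8.5); it is NOT the assignment's answer.
img = np.zeros((9, 9))
h1 = conv_layer(img, np.ones((3, 3)), bias=-8.5)
```

    This sketch handles a single input channel; stacking several such layers, with later layers reading the feature maps produced by earlier ones, is the general shape of a solution.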

  3. The LeNet notebook contains sample code to train LeNet on MNIST. Modify the code to include biases in all the layers, and add code for training the biases; a sketch of the key gradient computation follows below. Note that all neurons in a feature map share the same bias, so each feature map contributes a single bias parameter. Run the code to show that the learning curve decreases. (This code is written for pedagogical purposes, not efficiency. Since it’s slow, we won’t bother with any time-consuming model selection; we will do that later with a deep learning framework.)
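    Since the notebook’s internals aren’t reproduced here, the following is only a generic NumPy sketch under an assumed layout: suppose backprop already gives you delta, the loss gradient with respect to a layer’s pre-activations, with hypothetical shape (num_maps, H, W). Because every neuron in a feature map shares one bias, that bias accumulates gradient from all spatial positions, so its gradient is the sum of delta over the map:

```python
import numpy as np

def bias_gradient(delta):
    # delta: dLoss/d(pre-activation), hypothetical shape (num_maps, H, W).
    # One shared bias per feature map => sum its gradient over all positions.
    return delta.sum(axis=(1, 2))            # shape (num_maps,)

def update_biases(biases, delta, eta):
    # Plain gradient-descent step; eta is the learning rate.
    return biases - eta * bias_gradient(delta)
```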