Due at 12:00 noon on Monday, March 11th (later than normal to relieve your stress during midterm exams). Please submit to Gradescope. Please follow the general guidelines regarding homework assignments.

This assignment is based on research done in the Seung lab. Microscope images are taken of adjacent sections of brain tissue (serial section microscopy), and algorithms scan through these images to segment the individual neurons. A similar approach (generalized to 3D) was used to reconstruct the neuronal wiring diagram of a fly brain.

In this assignment, we will use the UNet architecture to produce pixelwise labelings of neuron boundaries, and then use connected components on these boundary maps to fill in the enclosed regions, giving us segmentations. The notebook for this pset can be found here: UNet.ipynb.

Using a GPU should speed up training by more than a factor of 10, so it is recommended to use the GPU capabilities of Colaboratory for this exercise. Ensure that the GPU is enabled by clicking Runtime -> Change runtime type -> Hardware accelerator and making sure GPU is selected.
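As a quick sanity check that the GPU runtime is actually active, you can run something like the snippet below (this assumes the notebook's framework is PyTorch; it is not part of the graded work):

```python
# Quick check that Colab's GPU is visible (assumes the notebook uses PyTorch).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)  # should print "cuda" when the GPU accelerator is enabled
```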

  1. Visualize the Dataset
    • You don’t have to submit anything for this part. Just run the code under the section “Visualize Dataset” to view examples of microscope images, neuron borders, and the resulting connected components segmentation. Can you identify the neuron borders without help from the ground truth border maps?
  2. Implement UNet
    • Use the paper for guidance here, especially figure 1. Make 3 modifications to the architecture in this assignment:
      1. Output a single feature map instead of two. Sigmoid of this value gives us our predicted probability of the pixel not being a boundary.
      2. Use a network that is 4 times narrower. Note that “narrow” refers to the number of channels/feature maps, not the xy size of each feature map. So, for example, the feature maps with 64 channels in the original UNet paper now have 16 channels.
      3. Use “same” padding convention so convolution operations do not change the xy size of feature maps. This ensures the output is the same xy size as the input.
    • We have put some helper functions in the cell under “Create UNet”. Your task is to fill in the following sections (a rough sketch of how these pieces might fit together appears after this list):
      1. ConvBlock: perform two 3x3 convolutions (two horizontal blue arrows in figure 1)
      2. DownConvBlock: perform downsampling followed by two 3x3 convolutions (1 red arrow plus 2 horizontal blue arrows in figure 1)
      3. UpConvBlock: perform upsampling, concatenate feature maps, then do two 3x3 convolutions (1 gray, 1 green and 2 blue arrows in figure 1)
      4. UNet.forward: perform forward pass of UNet
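    • For a rough picture of how these blocks can be assembled, here is a minimal PyTorch sketch of the modified UNet. The class signatures, channel counts, and single-channel grayscale input are assumptions for illustration; your actual fill-ins should follow the stubs given in the “Create UNet” cell.

```python
# Minimal sketch of the modified UNet (4x narrower, "same" padding, 1 output map).
# Class names, signatures, and the grayscale (1-channel) input are assumptions.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 convolutions with "same" padding (two horizontal blue arrows)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class DownConvBlock(nn.Module):
    """2x2 max-pool downsampling, then two 3x3 convolutions (red + blue arrows)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.conv = ConvBlock(in_ch, out_ch)

    def forward(self, x):
        return self.conv(self.pool(x))


class UpConvBlock(nn.Module):
    """2x2 up-convolution, concatenation with the skip connection, then two 3x3
    convolutions (green + gray + blue arrows). Assumes in_ch == 2 * out_ch."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = ConvBlock(in_ch, out_ch)  # out_ch (upsampled) + out_ch (skip) = in_ch

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)  # no cropping needed with "same" padding
        return self.conv(x)


class UNet(nn.Module):
    """4x-narrower UNet producing one boundary logit per pixel.
    Assumes the input xy size is divisible by 16 so pooled/upsampled maps align."""
    def __init__(self):
        super().__init__()
        self.in_conv = ConvBlock(1, 16)      # 1 input channel assumed (grayscale EM image)
        self.down1 = DownConvBlock(16, 32)
        self.down2 = DownConvBlock(32, 64)
        self.down3 = DownConvBlock(64, 128)
        self.down4 = DownConvBlock(128, 256)
        self.up1 = UpConvBlock(256, 128)
        self.up2 = UpConvBlock(128, 64)
        self.up3 = UpConvBlock(64, 32)
        self.up4 = UpConvBlock(32, 16)
        self.out_conv = nn.Conv2d(16, 1, kernel_size=1)  # single output feature map

    def forward(self, x):
        x1 = self.in_conv(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)
        x = self.up1(x5, x4)
        x = self.up2(x, x3)
        x = self.up3(x, x2)
        x = self.up4(x, x1)
        return self.out_conv(x)  # apply sigmoid outside, or train with a logits-based loss
```

    • Note that because of the “same” padding, the skip connections can be concatenated directly without the cropping step used in the original paper.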
  3. Train UNet
    • You don’t have to submit anything for this part. Simply run the cells in the “Train UNet” section. You can stop once the validation loss stops improving. This should be a good check that your UNet implementation is correct.
    • You should see both train and validation loss decrease fairly quickly.
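    • For reference, one epoch of the training cells amounts to roughly the sketch below. The variable names, the optimizer, and the use of BCEWithLogitsLoss are assumptions, not the notebook’s actual code.

```python
# Rough sketch of one training epoch; `model`, `train_loader`, `optimizer`, and
# `device` are assumed names rather than the notebook's actual variables.
import torch.nn as nn


def train_one_epoch(model, train_loader, optimizer, device):
    criterion = nn.BCEWithLogitsLoss()  # the UNet outputs one logit per pixel
    model.train()
    total_loss = 0.0
    for images, targets in train_loader:  # targets: 1 where a pixel is not a boundary
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        logits = model(images)             # shape (batch, 1, H, W)
        loss = criterion(logits, targets.float())
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / len(train_loader)
```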
  4. Make Segmentations
    • You don’t have to submit anything for this part. Run the cells in the “Make Segmentations” section. This uses connected components to generate a segmentation. Can you see any qualitative differences between the training-set and validation-set segmentations?
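    • Conceptually, this step just thresholds the predicted per-pixel probabilities and labels each connected region, roughly as in the sketch below. The 0.5 threshold and the use of scipy.ndimage are illustrative assumptions, not necessarily what the notebook does.

```python
# Sketch of turning a predicted boundary map into a segmentation via connected
# components; the threshold and scipy.ndimage.label are illustrative assumptions.
import torch
from scipy import ndimage


def segment(model, image, device, threshold=0.5):
    model.eval()
    with torch.no_grad():
        logits = model(image.to(device))                       # shape (1, 1, H, W)
        prob_not_boundary = torch.sigmoid(logits)[0, 0].cpu().numpy()
    interior = prob_not_boundary > threshold                   # confidently non-boundary pixels
    segmentation, num_segments = ndimage.label(interior)       # one integer id per region
    return segmentation, num_segments
```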