Understanding PyTorch

Part one

The goal of this mini-project is to study the performance of different model architectures when comparing two handwritten digits given as a two-channel image (built from the MNIST dataset). The effects of weight sharing and of an auxiliary loss were studied. It was a good exercise for getting familiar with PyTorch and for visualizing the effect of different deep learning techniques (e.g. weight sharing and an auxiliary loss) on this task. Our best simple model achieved roughly 89% accuracy.
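As a rough illustration of the two techniques (not the exact architecture from the report, and assuming 14×14 digit crops for the sake of the example), a weight-sharing model with an auxiliary digit-classification loss could look like this in PyTorch:

```python
import torch
from torch import nn


class SiameseNet(nn.Module):
    """One shared branch applied to each channel (weight sharing),
    plus an auxiliary head that classifies each digit."""

    def __init__(self):
        super().__init__()
        # Shared feature extractor: the same weights process both digit images.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 2 * 2, 128), nn.ReLU(),
        )
        self.digit_head = nn.Linear(128, 10)        # auxiliary head: which digit is it?
        self.compare_head = nn.Linear(2 * 128, 2)   # main head: comparison of the two digits

    def forward(self, x):
        # x has shape (N, 2, 14, 14): one channel per digit.
        f1 = self.features(x[:, 0:1])
        f2 = self.features(x[:, 1:2])
        d1, d2 = self.digit_head(f1), self.digit_head(f2)
        out = self.compare_head(torch.cat([f1, f2], dim=1))
        return out, d1, d2


# Combined objective: main comparison loss plus a weighted auxiliary loss on each digit.
criterion = nn.CrossEntropyLoss()

def total_loss(out, d1, d2, target, classes, aux_weight=0.5):
    return (criterion(out, target)
            + aux_weight * (criterion(d1, classes[:, 0]) + criterion(d2, classes[:, 1])))
```

The auxiliary weight (0.5 here) is a hypothetical value chosen only to show how the two losses are combined.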


Report | Source Code

Part two

In this project, the goal is to create a mini deep learning library. The task proved challenging and was a great educational tool for gaining a deeper understanding of neural networks. A fully connected linear layer, several activation and loss functions, and an optimizer were implemented. By building a framework from scratch, I now have a better idea of what it takes to create such a tool, and a greater appreciation for the practicality and ease of use of libraries such as PyTorch.
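Roughly, the building blocks follow the pattern below (a simplified sketch under my own naming, not the exact API of the submitted framework): each module caches what it needs in `forward` and propagates gradients manually in `backward`, without relying on autograd.

```python
import torch


class Linear:
    """Fully connected layer with manual forward/backward passes."""

    def __init__(self, in_dim, out_dim):
        self.w = torch.randn(out_dim, in_dim) / in_dim ** 0.5
        self.b = torch.zeros(out_dim)
        self.dw = torch.zeros_like(self.w)
        self.db = torch.zeros_like(self.b)

    def forward(self, x):
        self.x = x                       # cache the input for the backward pass
        return x @ self.w.T + self.b

    def backward(self, grad_out):
        self.dw = grad_out.T @ self.x    # gradient w.r.t. the weights
        self.db = grad_out.sum(dim=0)    # gradient w.r.t. the bias
        return grad_out @ self.w         # gradient w.r.t. the layer input


class ReLU:
    def forward(self, x):
        self.mask = (x > 0).float()
        return x * self.mask

    def backward(self, grad_out):
        return grad_out * self.mask


class MSELoss:
    def forward(self, pred, target):
        self.diff = pred - target
        return (self.diff ** 2).mean()

    def backward(self):
        return 2 * self.diff / self.diff.numel()


class SGD:
    """Plain stochastic gradient descent over a list of layers."""

    def __init__(self, layers, lr=0.1):
        self.layers, self.lr = layers, lr

    def step(self):
        for layer in self.layers:
            if hasattr(layer, "w"):
                layer.w -= self.lr * layer.dw
                layer.b -= self.lr * layer.db
```

A training step then chains `forward` calls through the layers, calls the loss's `backward`, pushes that gradient back through the layers in reverse order, and finally calls `SGD.step()`.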


Report | Source Code

Project done for the Deep Learning (EE-559) class, EPFL, Spring 2020.