Imposing a probability density on a latent variable of a classification network
Type: Bachelor CS
If you are interested, please contact:
Standard neural-net based classification networks first compress the input data into a lower-dimensional latent variable (a vector) and then feed it to a so-called softmax layer that identifies the input class. In this assignment we want to find out whether, by adversarial training as in (Makhzani et al., 2015) but without a decoding network, we can impose a Normal distribution on the latent variable, such that all elements of the latent variable are independent with zero mean and unit variance.
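The adversarial training principle referred to above could be sketched roughly as follows: a small discriminator is trained to tell encoder outputs apart from true N(0, I) samples, while the encoder is trained to fool it. This is only a minimal illustrative sketch, not the assignment's prescribed implementation; the network sizes, learning rates, and the random stand-in batch (in place of MNIST images) are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Toy encoder: flattened 28x28 input -> latent vector (sizes are illustrative).
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, latent_dim))
# Discriminator: tries to distinguish encoder outputs from true N(0, I) samples.
disc = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(128, 784)  # stand-in batch; real code would use MNIST images

for step in range(5):
    # 1) Discriminator step: "real" = samples from N(0, I), "fake" = encoder outputs.
    z_fake = encoder(x).detach()
    z_real = torch.randn_like(z_fake)
    d_loss = (bce(disc(z_real), torch.ones(128, 1))
              + bce(disc(z_fake), torch.zeros(128, 1)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # 2) Encoder step: fool the discriminator into labelling encoder outputs
    #    as "real", which pushes the latent distribution towards N(0, I).
    g_loss = bce(disc(encoder(x)), torch.ones(128, 1))
    opt_enc.zero_grad()
    g_loss.backward()
    opt_enc.step()
```

In the full assignment this adversarial objective would be trained jointly with the usual classification (cross-entropy) loss on the softmax output.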
- Study (Makhzani et al., 2015) to the extent that the adversarial training principle is understood and can be implemented.
- Build and train a neural-net classifier for a toy example, e.g. MNIST digit recognition, with and without imposing the probability distribution.
- Evaluate whether the probability density was imposed successfully, and assess its impact on classification performance.
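A simple first check for the last step is to look at the empirical moments of the latent vectors: per-dimension means near 0, variances near 1, and near-zero off-diagonal covariances. The sketch below is a hedged illustration; it samples stand-in latent vectors from N(0, I) so the check itself can be demonstrated, whereas in the assignment `z` would be the encoder outputs collected on a test set.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for latent vectors from the trained encoder on a test set;
# sampled from N(0, I) here purely to demonstrate the check.
z = rng.standard_normal((10000, 8))

mean = z.mean(axis=0)                   # should be close to 0 per dimension
var = z.var(axis=0)                     # should be close to 1 per dimension
cov = np.cov(z, rowvar=False)           # off-diagonals should be close to 0
off_diag = cov - np.diag(np.diag(cov))

print("max |mean|:", np.abs(mean).max())
print("max |var - 1|:", np.abs(var - 1).max())
print("max |off-diagonal cov|:", np.abs(off_diag).max())
```

Moment checks only probe the first two moments and pairwise correlations; stronger evidence of Gaussianity could come from normality tests or from visualising 2-D latent projections.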
- Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial Autoencoders. arXiv e-prints, arXiv:1511.05644, November 2015.
- MNIST Dataset (https://deepai.org/dataset/mnist)