
[M] Imposing a probability density on a latent variable of a classification network

BACHELOR Assignment

Type: Bachelor CS 

Period: TBD

Student: (Unassigned)

If you are interested, please contact:

Introduction:

Standard neural-network classifiers first compress the input data into a lower-dimensional latent variable (a vector) and then feed it to a so-called softmax layer that identifies the input's class. In this assignment we want to find out whether, by adversarial training as in [1] but without a decoding network, we can impose a standard Normal distribution on the latent variable, such that all of its elements are independent, with zero mean and unit variance.
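
To make the mechanism concrete, below is a minimal PyTorch sketch of this idea; all module names, layer sizes, and hyperparameters are illustrative assumptions, not part of the assignment. An encoder maps the input to a latent vector, a softmax head classifies from it, and a discriminator is trained to tell encoded latents apart from samples drawn from N(0, I), while the encoder is simultaneously trained to fool the discriminator, as in adversarial autoencoders [1] but without a decoder:

  import torch
  import torch.nn as nn

  LATENT_DIM = 8
  encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                          nn.Linear(256, LATENT_DIM))
  softmax_head = nn.Linear(LATENT_DIM, 10)         # class logits; softmax is inside the loss
  discriminator = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                                nn.Linear(64, 1))  # "drawn from N(0, I)?" logit

  opt_cls = torch.optim.Adam(list(encoder.parameters())
                             + list(softmax_head.parameters()), lr=1e-3)
  opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
  ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()

  def train_step(x, y):
      # 1) Classification: encoder and softmax head minimise cross-entropy.
      loss_cls = ce(softmax_head(encoder(x)), y)
      opt_cls.zero_grad(); loss_cls.backward(); opt_cls.step()

      # 2) Discriminator: tell true N(0, I) samples apart from encoded latents.
      z_fake = encoder(x).detach()
      z_real = torch.randn_like(z_fake)
      ones, zeros = torch.ones(len(x), 1), torch.zeros(len(x), 1)
      loss_disc = bce(discriminator(z_real), ones) + bce(discriminator(z_fake), zeros)
      opt_disc.zero_grad(); loss_disc.backward(); opt_disc.step()

      # 3) Generator: the encoder is updated to fool the discriminator,
      #    which pulls the latent distribution toward N(0, I).
      loss_gen = bce(discriminator(encoder(x)), ones)
      opt_cls.zero_grad(); loss_gen.backward(); opt_cls.step()
      return loss_cls.item(), loss_disc.item(), loss_gen.item()

Training without the distribution constraint then amounts to running only step 1, which gives the baseline for comparison.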

Assignment:

  1. Study [1] to the extent that the adversarial training principle is understood and can be implemented.
  2. Build and train a neural-network classifier for a toy example, e.g. MNIST digit recognition [2], with and without imposing the probability distribution.
  3. Evaluate whether the probability density was imposed successfully and assess its impact on the classification performance (a possible check is sketched below).
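
One way to test whether the latent distribution matches N(0, I) is to collect latent vectors on held-out data and compare their moments and per-dimension marginals against the standard Normal. The snippet below is one possible such check using NumPy and SciPy; the function name and printed diagnostics are illustrative, not prescribed:

  import numpy as np
  from scipy import stats

  def check_latents(z):
      # z: array of shape (n_samples, latent_dim), encoder outputs on test data.
      print("per-dimension means:    ", z.mean(axis=0))  # target: ~0
      print("per-dimension variances:", z.var(axis=0))   # target: ~1
      # Near-zero off-diagonal covariances are a necessary (though not
      # sufficient) condition for independent latent dimensions.
      cov = np.cov(z, rowvar=False)
      off_diag = cov - np.diag(np.diag(cov))
      print("max |off-diagonal covariance|:", np.abs(off_diag).max())
      # Kolmogorov-Smirnov test of each marginal against the standard Normal.
      for i in range(z.shape[1]):
          p = stats.kstest(z[:, i], "norm").pvalue
          print(f"dimension {i}: KS p-value = {p:.3f}")

The same statistics computed for the baseline classifier (trained without the adversarial term) make the comparison in step 3 concrete.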

Literature:

  1. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial Autoencoders. arXiv e-prints, arXiv:1511.05644, November 2015.
  2. MNIST Dataset (https://deepai.org/dataset/mnist)