Adversarial noise in convolutional deep networks

BACHELOR Assignment

Type: Bachelor CS

Period: TBD

Student: (Unassigned)

If you are interested, please contact:

Introduction:

Deep convolutional neural networks (CNNs) have become the de facto standard for solving problems in signal processing, computer vision, and data science. Their structure and training strategies allow them to learn complex and effective representations of the data.

However, CNNs have been shown to lack robustness to variations of the input data: small, imperceptible alterations of the input, known as adversarial noise, can completely change the prediction of a classifier. This weakness hinders the deployment of CNNs in safety-critical applications, such as autonomous driving.
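
For illustration, the sketch below shows the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks: each pixel is perturbed by a small amount epsilon in the direction that increases the classification loss. This is only an illustrative example, not part of the assignment material; the names `model`, `x`, and `y` are assumptions (a pretrained PyTorch classifier and a correctly labelled input batch with pixel values in [0, 1]).

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return adversarial examples within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    # Loss of the (assumed) pretrained classifier on the true labels.
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss,
    # then clip back to the valid [0, 1] pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even for very small epsilon, such perturbations are typically invisible to a human observer yet sufficient to flip the classifier's prediction.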

Assignment:

Several adversarial attacks have been developed in recent years, and several defense techniques have been proposed to increase the robustness of classifiers [1]. The assignment consists of studying the impact that a push-pull layer [2] has on the robustness of state-of-the-art architectures against different classes of adversarial attacks. In addition, modifications of the push-pull layer aimed at further increasing the robustness of the classifiers are encouraged.
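
The following is a minimal, simplified sketch of the push-pull idea: an excitatory (push) convolution is inhibited by a pull response computed with the negated kernel, weighted by an inhibition strength alpha. Note that in the published layer [2] the pull kernel is a negated, upsampled version of the push kernel; for brevity this sketch reuses the push kernel at the same size, so it should be read as an approximation rather than the exact layer from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PushPullConv2d(nn.Module):
    """Simplified push-pull layer (approximation of [2])."""
    def __init__(self, in_channels, out_channels, kernel_size, alpha=1.0):
        super().__init__()
        self.weight = nn.Parameter(
            torch.empty(out_channels, in_channels, kernel_size, kernel_size))
        nn.init.kaiming_normal_(self.weight)
        self.alpha = alpha               # inhibition strength of the pull component
        self.padding = kernel_size // 2  # keep spatial size unchanged

    def forward(self, x):
        # Push response: standard convolution with the learned kernel.
        push = F.relu(F.conv2d(x, self.weight, padding=self.padding))
        # Pull response: convolution with the negated (inhibitory) kernel.
        pull = F.relu(F.conv2d(x, -self.weight, padding=self.padding))
        # The pull response inhibits the push response.
        return push - self.alpha * pull
```

As in [2], such a module is intended to replace the first convolutional layer of a standard architecture (e.g., a ResNet), after which the network is retrained and its robustness evaluated under the attacks of interest.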

References:

[1] N. Akhtar and A. Mian, "Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey", IEEE Access, 2018. https://arxiv.org/pdf/1801.00553.pdf

[2] N. Strisciuglio, M. Lopez-Antequera, and N. Petkov, "Enhanced robustness of convolutional networks with a push-pull inhibition layer", Neural Computing and Applications, 2020. https://link.springer.com/article/10.1007/s00521-020-04751-8