Adversarial noise in convolutional deep networks
Type: Bachelor CS
If you are interested, please contact:
Deep convolutional neural networks (CNNs) have become the de facto standard for solving problems in signal processing, computer vision, and data science. Their structure and training strategies allow them to learn complex and effective representations of the data.
However, CNNs have been shown to lack robustness to variations of the input data: small, imperceptible alterations of the input, known as adversarial noise, can completely change the prediction of a classifier. This weakness hinders the deployment of CNNs in safety-critical applications, such as autonomous driving.
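To make the idea of adversarial noise concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), one of the simplest attacks of this kind, applied to a linear softmax classifier. The function names and the toy classifier are illustrative assumptions, not part of the project description:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(W, b, x, y, eps):
    """FGSM sketch: perturb the input x by eps in the direction of the
    sign of the loss gradient, to increase the classifier's loss on the
    true label y. W, b define a toy linear softmax classifier."""
    p = softmax(W @ x + b)            # predicted class probabilities
    p[y] -= 1.0                       # dL/dlogits for cross-entropy loss
    grad_x = W.T @ p                  # chain rule: gradient of loss w.r.t. x
    # clip to keep the adversarial example a valid (e.g. [0,1]) input
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

The perturbation is bounded by eps in every input dimension, which is what makes it visually imperceptible for small eps, yet it moves the input in the direction that most increases the loss.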
Several adversarial attacks have been developed in recent years, and several defense techniques have been proposed to increase the robustness of classifiers. The assignment consists of studying the impact that a push-pull layer has on the robustness of state-of-the-art architectures against different classes of adversarial attacks. Modifications to the push-pull layer aimed at further increasing the robustness of the classifiers are also encouraged.
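As a rough intuition for the push-pull mechanism, the sketch below combines an excitatory ("push") filter response with a weighted inhibitory ("pull") response computed with the negated kernel. This is a simplified 1-D illustration under our own assumptions (same kernel size for push and pull, a simple ReLU nonlinearity), not the exact layer studied in the project:

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def push_pull_1d(x, k, alpha=1.0):
    """Illustrative 1-D push-pull response: the 'pull' component,
    computed with the inverted kernel -k, inhibits the 'push' response,
    with alpha controlling the inhibition strength."""
    push = relu(np.convolve(x, k, mode="same"))   # excitatory response
    pull = relu(np.convolve(x, -k, mode="same"))  # inhibitory response
    return push - alpha * pull
```

With alpha = 0 this reduces to a plain rectified convolution; increasing alpha strengthens the inhibition, which is the property whose effect on adversarial robustness the assignment asks to study.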