
August 22, 2022: Improving knowledge distillation as a defense against membership inference attacks in deep learning

Master assignment

Improving knowledge distillation as a defense against membership inference attacks in deep learning

Type: Master CS

Period: Start date: any

Student: Unassigned

If you are interested, please contact:

Description:

Machine learning models are often trained on sensitive data, such as medical records or bank transactions, posing high privacy risks. Several attacks have been developed over the last decade to extract information about the training data from a trained model. One of those attacks is membership inference [1,2], which uses the model parameters or predictions to determine whether a given data point was part of the training set or not. Among the mitigations proposed in the literature so far against such attacks, knowledge distillation seems to be among the most effective [3].
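To give a flavour of the attack, here is a minimal confidence-thresholding membership inference sketch in plain Python/NumPy: an overfitted model tends to be more confident on its training points, so guessing "member" whenever the prediction confidence is high already beats random guessing. The confidence values and threshold below are purely illustrative, not from the project:

```python
import numpy as np

def confidence_attack(confidences, threshold=0.9):
    """Guess membership from prediction confidence: points the model
    is very confident about are predicted to be training members."""
    return confidences >= threshold

# Hypothetical max-softmax confidences; members tend to score higher.
members = np.array([0.99, 0.97, 0.85, 0.95])      # true training points
non_members = np.array([0.60, 0.92, 0.55, 0.70])  # held-out points

preds_m = confidence_attack(members)       # should mostly be True
preds_n = confidence_attack(non_members)   # should mostly be False

# Attack accuracy = fraction of correct membership guesses
accuracy = (preds_m.sum() + (~preds_n).sum()) / (len(members) + len(non_members))
```

With these illustrative numbers the attack is right 6 times out of 8, i.e. clearly better than the 50% a random guess would achieve; mitigations such as distillation aim to push this back toward 50%.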

The main task of this project is to further explore knowledge distillation as a mitigation against membership inference attacks. This includes assessing it on different datasets and potentially developing enhancements to the distillation technique. Don't worry, you don't have to invent everything by yourself: we already have some ideas you can start with, explore, and assess. Of course, if you come up with a better improvement yourself, that would be even better!

Your starting point will be an already implemented version of the mitigation in TensorFlow (Python). You are expected to spend the first weeks or months becoming familiar with the related literature and the Python tool. From there, you can start exploring further: assessing the mitigation on new datasets, tuning the parameters, trying different improvements, and so on. Feel free to reach out to us if you'd like to hear more about this project!
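The core ingredient of distillation-based defenses such as [3] is training a student on the teacher's softened predictions rather than on the raw labels. A minimal sketch of temperature-scaled soft targets in NumPy (the logits and temperature here are illustrative, not taken from the project's tool):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T spreads probability mass
    over the non-top classes, softening the teacher's prediction."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

teacher_logits = np.array([8.0, 2.0, 1.0])  # hypothetical teacher output

hard_targets = softmax(teacher_logits, T=1.0)  # near one-hot
soft_targets = softmax(teacher_logits, T=4.0)  # smoothed distribution
```

The student is then trained to match `soft_targets` (typically via a cross-entropy or KL term), which transfers the teacher's knowledge while exposing less of the teacher's memorization of individual training points.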

Requirements:

References:

[1] Shokri, Reza, et al. "Membership inference attacks against machine learning models." 2017 IEEE symposium on security and privacy (SP). IEEE, 2017. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7958568

[2] Nasr, Milad, Reza Shokri, and Amir Houmansadr. "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning." 2019 IEEE symposium on security and privacy (SP). IEEE, 2019. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8835245

[3] Shejwalkar, Virat, and Amir Houmansadr. "Membership privacy for machine learning models through knowledge transfer." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 11. 2021. https://ojs.aaai.org/index.php/AAAI/article/download/17150/16957