
PhD Defence Haleh Asgarinia | Privacy and Machine Learning-Based Artificial Intelligence: Philosophical, Legal, and Technical Investigations


The PhD defence of Haleh Asgarinia will take place in the Waaier building of the University of Twente and can be followed via a live stream.

Haleh Asgarinia is a PhD student in the Department of Philosophy. (Co)Promotors are prof.dr. P.A.E. Brey and dr. A. Henschke from the Faculty of Behavioural, Management and Social Sciences.

This dissertation consists of five chapters, each written as an independent research paper, unified by an overarching concern with information privacy and machine learning-based artificial intelligence (AI). It addresses these issues by responding to three main research questions (RQs): RQ1. 'How does an AI system affect privacy?'; RQ2. 'How effectively does the General Data Protection Regulation (GDPR) assess and address privacy issues concerning both individuals and groups?'; and RQ3. 'How can the value of privacy be embedded into systems?'

To respond to these RQs, the dissertation adopts the privacy impact assessment (PIA) as its overall methodology. A PIA encompasses three distinct stages. The first, the analytical stage, concerns the analysis of how AI impacts privacy, focusing in particular on inference as a process that includes inferred information, the performance of AI models, and access to information uncovered by AI models. The second, the legal assessment stage, concerns whether AI that processes personal information and develops models complies with the GDPR. The third, the design requirements stage, features proposals for design requirements for systems aimed at protecting privacy. Accordingly, the dissertation is structured in three parts, each corresponding to a specific stage of a PIA and responding to one of the RQs.

Part I, which addresses the first stage of the PIA, comprises three chapters that together respond to RQ1. Chapter 2 analyses how AI impacts the descriptive aspect of privacy; it argues that AI challenges current definitions of privacy and that the ‘source control’ and ‘actual access’ definitions, once revised in the face of counter-examples involving inferred information, converge. Chapter 3 considers how AI impacts the normative aspect of privacy, particularly the value of privacy; it argues that AI affects the social value of privacy, which depends on trust, because this dimension of privacy is constituted when AI models perform accurately. Chapter 4 examines how AI impacts the normative aspect of privacy, particularly the right to privacy. It argues that, although access to information uncovered by AI models raises concerns about the privacy of algorithmically designed groups, the right to privacy cannot be recognised for such groups. This disruptive feature of AI prompts the consideration of approaches other than the traditional one of recognising a right to privacy in order to protect the privacy of these groups. Instead, the chapter suggests grounding the moral obligation to protect vulnerable groups in a moral principle drawn from an ethics of vulnerability.

Part II concerns the second stage of the PIA and consists of one chapter, which responds to RQ2. Chapter 5 evaluates whether AI that processes personal information and develops models complies with the GDPR, and also assesses whether the GDPR adequately addresses the privacy issues raised by AI. Focusing on group privacy, it argues that the GDPR has limitations in protecting the privacy of algorithmically designed groups and that the privacy of such vulnerable entities must be taken into account in the context of privacy and data protection.

Part III, which corresponds to the third stage of the PIA, also consists of one chapter, which responds to RQ3. Chapter 6 proposes design requirements for protecting privacy by integrating privacy into systems, arguing that privacy is instrumentally valuable for the sake of autonomy. Accordingly, to embed the value of privacy into systems, design requirements are articulated by translating norms that promote and protect autonomy.