
Research Talk: Accounting and Optimizing for Differential Privacy via Concentration Bounds

Oliver Kosut (Arizona State University)

Abstract

Differential privacy (DP) is an approach to data privacy that uses randomized algorithms designed to ensure that changing one entry of a dataset has only a small effect on the output distribution. There are two fundamental problems in DP: finding the optimal randomized mechanism to use, and accounting for exactly how much privacy a given algorithm achieves. In fact, these two problems are strongly coupled, via probability concentration bounds. A concentration bound such as the law of large numbers, the central limit theorem, or the Chernoff bound allows one to relate a complicated probability to a simple expression, typically based on just one or a few statistics (e.g., expectation and variance). Applying such a bound to the DP problem reveals that the amount of privacy achieved is governed by just a few statistics, and so the best mechanism is the one that optimizes these statistics. In settings where the DP mechanism must be applied over many iterations, we find that the optimal mechanism should minimize a certain Kullback-Leibler divergence. The noise distribution that optimizes this divergence has a surprisingly jagged shape, which we call the “Cactus mechanism”.
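As a concrete illustration of accounting via a concentration bound (a minimal sketch of the standard argument, not material from the talk itself): a mechanism M is (eps, delta)-DP if Pr[M(D) in S] <= e^eps * Pr[M(D') in S] + delta for all neighboring datasets D, D' and all output sets S. For the Gaussian mechanism with sensitivity 1 and noise standard deviation sigma, the privacy loss random variable is Gaussian with mean mu = 1/(2 sigma^2) and variance 2 mu; over k iterations the per-step losses add, and a Chernoff (sub-Gaussian) tail bound on the total loss upper-bounds the delta needed at a given eps. The Python sketch below assumes these standard facts; the function name and parameters are illustrative.

import math

def gaussian_dp_delta_bound(sigma, k, eps):
    """Chernoff-style upper bound on delta for the k-fold composition of
    the Gaussian mechanism (sensitivity 1, noise standard deviation sigma).
    Relies on the standard facts stated above; illustrative only."""
    mu = 1.0 / (2.0 * sigma ** 2)        # per-step mean of the privacy loss
    mean, var = k * mu, 2.0 * k * mu     # total mean and variance after k steps
    if eps <= mean:
        return 1.0                       # the tail bound is vacuous below the mean
    # Sub-Gaussian tail: Pr[Z > eps] <= exp(-(eps - mean)^2 / (2 * var))
    return math.exp(-((eps - mean) ** 2) / (2.0 * var))

# Example: 100 iterations with sigma = 20 at eps = 2 gives delta <= ~9e-4.
print(gaussian_dp_delta_bound(sigma=20.0, k=100, eps=2.0))

Note that the mean of the total privacy loss equals k times the per-step Kullback-Leibler divergence between the two output distributions, which is why minimizing a KL divergence emerges as the design criterion in the many-iterations regime discussed in the abstract.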