InfoSec Seminar: Black-box leakage estimation, and some thoughts on its applicability to membership inference and synthetic data.

Speaker: Giovanni Cherubin

Date/Time: 11-Mar-2021, 16:00 UTC

Venue: Virtual Seminar

Details

Abstract:

We consider the problem of measuring the information leakage of a generic system, viewed as a black box. We are interested in measuring the probability that an (optimal) adversary guesses some secret information contained in the system, by only observing the system's behaviour (e.g., its outputs or running time). This formulation captures a wide class of attacks, ranging from side channels and traffic analysis to membership inference.
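
For readers unfamiliar with the underlying notion, the adversary's success probability described above is usually formalized in quantitative information flow as the Bayes vulnerability of the secret. The sketch below uses standard (assumed) notation and is not taken from the talk itself: S is the secret and O the observation.

```latex
% Standard quantitative-information-flow quantities (assumed notation):
\[
  V(S) = \max_{s} P(S = s)
  \qquad \text{(prior vulnerability: the adversary's best blind guess)}
\]
\[
  V(S \mid O) = \sum_{o} \max_{s} P(S = s,\, O = o)
  \qquad \text{(posterior vulnerability: the optimal adversary observing } O\text{)}
\]
\[
  \mathcal{L}_{\times} = \frac{V(S \mid O)}{V(S)}
  \qquad \text{(multiplicative leakage)}
\]
```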


In this talk, I will present recent advances in this area, which demonstrate that leakage estimation can be cast as a supervised classification problem in ML. Thanks to this equivalence, i) we can import new ML-based leakage estimation tools that tackle real-world systems effectively, and ii) we can delineate the fundamental limitations of leakage estimation via impossibility results.
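
As a concrete illustration of this equivalence, the following sketch (using scikit-learn and synthetic data; all names, parameters, and the toy "system" are illustrative, not the speaker's implementation) estimates leakage by training a classifier to predict the secret from the observation: the classifier's held-out accuracy lower-bounds the optimal adversary's guessing probability.

```python
# Sketch: black-box leakage estimation via supervised classification.
# (observation, secret) pairs form a classification dataset; a classifier's
# held-out accuracy lower-bounds the posterior Bayes vulnerability V(S|O).
import numpy as np
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Illustrative "system": it leaks its secret bit through a noisy
# real-valued observation (e.g., a running time).
rng = np.random.default_rng(0)
secret_labels = rng.integers(0, 2, size=5000)                       # secret S
observations = secret_labels + rng.normal(0, 0.8, size=5000)        # observation O
X = observations.reshape(-1, 1)

X_train, X_test, y_train, y_test = train_test_split(
    X, secret_labels, test_size=0.3, random_state=0)

# Prior vulnerability V(S): probability of guessing S without any observation.
prior_vulnerability = max(Counter(y_train).values()) / len(y_train)

# Posterior vulnerability V(S|O): estimated by the accuracy of a classifier
# mapping observations to secrets (a lower bound on the optimal adversary).
clf = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
posterior_vulnerability = clf.score(X_test, y_test)

print("V(S)    =", prior_vulnerability)
print("V(S|O)  =", posterior_vulnerability)
print("leakage =", posterior_vulnerability / prior_vulnerability)
```

Under suitable conditions (which the talk's results make precise), such classifier-based estimates approach the optimal adversary's success rate, which is what makes the equivalence useful in practice.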


I will also spend some time discussing how these methods are applicable to problems such as: i) estimating the leakage of ML models, and ii) measuring the information leakage of synthetic data.


Bio:

Giovanni is a Research Fellow at the Alan Turing Institute (UK). His main interests span the areas of Machine Learning and Privacy, and their intersection. His current research focuses on information leakage estimation in the context of Security and Privacy, and particularly on its application to quantifying the security and privacy of Machine Learning. He also works on Machine Learning methods with distribution-free guarantees, such as conformal prediction.

