ACE Seminar: Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning

Speaker: Battista Biggio

Date/Time: 28-Jun-2019, 11:00 UTC

Venue: Alan Turing Institute, Jack Good Meeting Room


This talk will take place at: Alan Turing Institute, Jack Good Meeting Room, 96 Euston Road, London, NW1 2DB

Please fill in this form to attend.


Data-driven AI and machine-learning technologies have become pervasive, and can even outperform humans on specific tasks. However, they have been shown to be vulnerable to adversarial examples, i.e., imperceptible adversarial perturbations of images, text and audio that fool these systems into perceiving things that are not there. This calls into question their suitability for mission-critical applications, including self-driving cars and other autonomous vehicles. The phenomenon is even more evident in cyber-security domains with an intrinsically adversarial nature, like malware and spam detection, in which data is purposely manipulated by cybercriminals to undermine the outcome of automatic analyses. As current data-driven AI and machine-learning methods were not designed to cope with the adversarial nature of these problems, they exhibit specific vulnerabilities that attackers can exploit either to mislead learning or to evade detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on learning algorithms has thus become one of the main open issues in the research field of adversarial machine learning, along with the design of more secure and explainable learning algorithms.

In this talk, I review previous work on evasion attacks, in which malicious samples are manipulated at test time to evade detection, and on poisoning attacks, which can mislead learning by manipulating even a small fraction of the training data. I discuss defense mechanisms against both attacks in the context of real-world applications, including computer vision, biometric identity recognition and computer security. Finally, I briefly discuss our ongoing work on attacks against deep-learning algorithms, and sketch some promising future research directions.
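The two attack classes mentioned in the abstract can be illustrated with a toy sketch. The models, numbers and function names below are assumptions for illustration only, not the speaker's actual algorithms: a gradient-sign (FGSM-style) evasion step against a linear detector, and label-flip poisoning of a nearest-centroid classifier.

```python
import numpy as np

# --- Evasion: perturb a test sample to cross the decision boundary ---
# Toy linear "detector" f(x) = w.x + b that flags a sample when f(x) > 0.
w = np.array([1.0, -2.0, 0.5])
b = -0.1
x = np.array([0.6, -0.2, 0.4])           # correctly detected: f(x) > 0

score_before = w @ x + b                  # 1.1 -> detected
x_adv = x - 0.5 * np.sign(w)              # gradient-sign (FGSM-style) step
score_after = w @ x_adv + b               # -0.65 -> evades detection

# --- Poisoning: inject mislabeled training points to mislead learning ---
def centroids(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict(x, c0, c1):
    """Assign x to the class with the nearest centroid."""
    return 0 if np.linalg.norm(x - c0) < np.linalg.norm(x - c1) else 1

X = np.array([[0, 0], [0, 1], [1, 0],                 # class 0 (benign)
              [4, 4], [4, 5], [5, 4]], dtype=float)   # class 1 (malicious)
y = np.array([0, 0, 0, 1, 1, 1])
x_test = np.array([3.0, 3.0])

pred_clean = predict(x_test, *centroids(X, y))        # 1: correctly flagged

# The attacker injects a few far-away points mislabeled as benign,
# dragging the benign centroid toward the malicious region.
X_poison = np.vstack([X, [[6, 6], [6, 6], [6, 6]]])
y_poison = np.append(y, [0, 0, 0])

pred_poisoned = predict(x_test, *centroids(X_poison, y_poison))  # 0: missed
```

Note how poisoning needs only three injected points out of nine to flip the test prediction, matching the abstract's point that even a small fraction of manipulated training data can mislead learning.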


Battista Biggio (MSc '06, PhD '10) is an Assistant Professor at the Department of Electrical and Electronic Engineering at the University of Cagliari, Italy, and a co-founder of Pluribus One, a startup company developing secure AI algorithms for cybersecurity tasks. In 2011, he visited the University of Tuebingen, Germany. His pioneering research on adversarial machine learning involved the development of secure learning algorithms for spam and malware detection and for computer-vision problems, playing a leading role in the establishment and advancement of this research field. On these topics, he has published more than 60 papers, collecting more than 2770 citations (Google Scholar, April 2019). Dr. Biggio regularly serves as a reviewer and program committee member for several international conferences and journals on the aforementioned research topics (including ICML, NeurIPS, IEEE Symp. S&P and ACM CCS), co-organizes three well-established workshops (AISec, DLS, S+SSPR) and is an Associate Editor of three high-impact journals (Pattern Recognition, IEEE TNNLS, and IEEE Comp. Intell. Magazine). He is chair of the IAPR TC1 on Statistical Pattern Recognition, a senior member of the IEEE, and a member of the IAPR and ACM.


This page was last modified on 27 Mar 2014.