ACE Seminar: Adversarial machine learning: the case of optimal attack strategies against recommendation systems

Speaker: Prof Negar Kiyavash

Date/Time: 06-Jul-2017, 16:00 UTC

Venue: Gordon Street (25) - Room 500

Abstract

Adversarial machine learning, which lies at the intersection of machine learning and security, aims to understand the effects of adversaries on learning algorithms and to safeguard against them through the design of protection mechanisms.
In this talk, we discuss the effect of strategic adversaries on recommendation systems. Such systems can be modeled using a multistage sequential prediction framework in which, at each stage, the recommendation system combines the predictions of a set of experts about an unknown outcome with the aim of predicting that outcome accurately. The outcome is often the "rating/interest" of a user in an item. Specifically, we study an adversarial setting in which one of the experts is malicious and whose goal is to impose the maximum loss on the system. We show that in some settings the greedy policy of always reporting a false prediction is asymptotically optimal for the malicious expert. Our result can be viewed as a generalization of the regret bound for the learning-from-expert-advice problem in the adversarial setting with respect to the best dynamic policy, rather than the conventional regret bound for the best action (static policy) in hindsight.
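To make the setting concrete, here is a minimal illustrative sketch (not the speaker's actual model) of learning from expert advice via the standard multiplicative-weights rule, with one honest expert and one malicious expert who plays the greedy policy of always reporting the false prediction. All names and parameters (the learning rate `eta`, the absolute loss) are illustrative assumptions, not taken from the talk.

```python
def multiplicative_weights(outcomes, expert_preds, eta=0.5):
    """Combine expert predictions with the multiplicative-weights rule.

    outcomes: list of true outcomes in {0, 1}
    expert_preds: per-round list of each expert's prediction in [0, 1]
    Returns the system's cumulative absolute loss.
    """
    n_experts = len(expert_preds[0])
    w = [1.0] * n_experts  # start with uniform weights
    total_loss = 0.0
    for preds, y in zip(expert_preds, outcomes):
        # System prediction: weighted average of the experts' reports
        p = sum(wi * xi for wi, xi in zip(w, preds)) / sum(w)
        total_loss += abs(p - y)
        # Down-weight each expert in proportion to its own loss
        w = [wi * (1 - eta * abs(xi - y)) for wi, xi in zip(w, preds)]
    return total_loss

# Expert 0 is honest (always reports the true outcome); expert 1 is
# malicious and greedily reports the opposite outcome every round.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
preds = [[y, 1 - y] for y in outcomes]
print(multiplicative_weights(outcomes, preds))
```

In this toy run the malicious expert's weight halves every round, so the damage it can inflict on the system stays bounded; the talk asks the converse question of how much loss an optimally adversarial expert can impose, and shows that this greedy always-lie policy is asymptotically optimal in some settings.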

Bio

Negar Kiyavash is a Willett Faculty Scholar at the University of Illinois and a joint Associate Professor of Industrial and Enterprise Engineering and Electrical and Computer Engineering. She is also affiliated with the Coordinated Science Laboratory (CSL) and the Information Trust Institute. She received her Ph.D. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign in 2006. Her research interests are in the design and analysis of algorithms for network inference and security. She is a recipient of the NSF CAREER and AFOSR YIP awards and the Illinois College of Engineering Dean's Award for Excellence in Research.


