Dr. Mohammad Mahmoody
Associate Professor at University of Virginia
Classical learning algorithms are designed for benign settings in which normally generated samples are used to learn a model that is later tested on normal instances sampled from the same distribution. With the massive deployment of learning systems in daily life, a newer line of work aims to find *robust* learning methods that can tolerate adversarial perturbations in training or test data. In this talk, we show a connection between training-time attacks (aka data poisoning attacks) on learning algorithms and the well-studied problem of collective coin flipping (CCF) in cryptography. A secure CCF protocol is a distributed way of generating “unbiased random bits” that can resist collusion among subsets of parties up to a certain size. In particular, we design new attacks on CCF and show that they can be translated into poisoning attacks on learners.
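To make the CCF setting concrete, the following is a minimal simulation, purely an illustrative sketch and not the protocols or attacks from the talk, of a one-round majority coin flip. Each party broadcasts one bit and the collective coin is the majority bit; a hypothetical "rushing" adversary controlling k of the n parties announces its bits last and can bias the outcome:

```python
import random

def majority_protocol(n_honest, n_corrupt, rng):
    """One-round collective coin flip: each party broadcasts a bit,
    and the collective coin is the majority bit. A rushing adversary
    controlling n_corrupt parties announces all 1s to bias the coin."""
    # Honest parties broadcast independent fair bits.
    honest_bits = [rng.randint(0, 1) for _ in range(n_honest)]
    # Corrupted parties collude and all announce 1.
    bits = honest_bits + [1] * n_corrupt
    # The collective coin is the majority bit.
    return int(sum(bits) * 2 > len(bits))

rng = random.Random(0)
trials = 20000
n = 101
for k in (0, 5, 10):
    ones = sum(majority_protocol(n - k, k, rng) for _ in range(trials))
    print(f"{k} corrupted of {n}: Pr[coin = 1] is about {ones / trials:.2f}")
```

With no corruptions the coin is fair, while even a small coalition shifts Pr[coin = 1] noticeably; this kind of bias, injected by a small colluding subset, is the structural analogue of a small poisoned fraction of training data swaying a learner's output.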
Mohammad Mahmoody is an associate professor of Computer Science at the University of Virginia.
He obtained his PhD from Princeton University in 2010 and spent a few years at Cornell University as a postdoctoral associate before joining the University of Virginia as an assistant professor. Prior to that, he was an undergraduate student at Sharif University of Technology until 2004. He is a recipient of the NSF CAREER Award (2014) and Princeton’s Wu Prize (2009), and his research interests include the theory of cryptography and its interplay with computational complexity and robust learning.