Friday, April 15, 2022 – 8:00 AM to 9:00 AM
In recent years, adversarial machine learning researchers have developed an impressive suite of attacks (and proposed defenses) for deep neural networks. Despite these advances, the field remains locked in a "cat-and-mouse" cycle, in which new defenses are proposed on a regular basis, only to be broken by more powerful adaptive attacks. In this talk, I describe our experience with this cycle and suggest a move toward more realistic threat models as a way out. I present our recent work on physical backdoor attacks and discuss how more realistic threat models are changing the way we think about backdoor attacks against DNNs.
Emily Wenger is a fourth-year computer science PhD student at the University of Chicago, advised by Ben Zhao and Heather Zheng. Her research focuses on the security and privacy issues of machine learning systems. Her work has been published at top computer security (CCS, USENIX) and computer vision (CVPR) conferences and has been covered by media outlets including the New York Times, MIT Tech Review, and Nature. She is the recipient of the GFSD, Harvey, and Neubauer fellowships. Previously, she worked for the US Department of Defense and interned at Meta AI Research.