Adversarial Examples: bugs, features, or just categorical learning in a small world?

Speaker: Sahar Niknam (Department of Computer Science, Faculty of Science, Technology and Medicine, University of Luxembourg)
Title: Adversarial Examples: bugs, features, or just categorical learning in a small world?
Time: Wednesday, 2023.03.29, 10:00 a.m. (CEST)
Place: fully virtual (contact Dr. Jakub Lengiewicz to register)
Format: 30 min. presentation + 30 min. discussion

Abstract: When adversarial examples were first introduced in 2014, they ruined some of the most ambitious dreams for the future of deep learning and AI in general. The earliest reaction was, of course, to develop defense methods by exploring the mathematical possibilities of robust learning against specific attacks. At the same time, however, there have also been efforts to explain this vulnerability of deep learning at higher semantic levels.

This presentation begins with an overview of some of the most popular adversarial attacks, followed by an explanation of the math behind the earliest gradient-based attacks (a brief illustration follows below). After that, we will review a couple of studies that offer a different interpretation of adversarial examples: not as a weakness, but as the outcome of a learning algorithm different from that of humans. Finally, I will summarize my exploratory work with a basic adversarial attack to understand and explain, so to speak, ‘cognition’ in neural networks.
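As a pointer to the kind of math involved (an illustration added here, not taken from the abstract, and assuming the talk covers it): one of the earliest gradient-based attacks is the fast gradient sign method (FGSM) of Goodfellow et al. (2014). Given a model with parameters θ, loss J(θ, x, y), input x, and true label y, FGSM crafts an adversarial example with a single step in the direction of the sign of the input gradient:

x_adv = x + ε · sign(∇_x J(θ, x, y))

The budget ε is kept small enough that the perturbation is nearly imperceptible to a human, yet it is often sufficient to flip the model’s prediction.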

Sahar Niknam is a doctoral researcher in the Department of Computer Science at the Faculty of Science, Technology and Medicine of the University of Luxembourg.