Lecture: Deep Learning Blindspots
Tools for Fooling the "Black Box"
In the past decade, machine learning researchers and theorists have created deep learning architectures which seem to learn complex tasks with little intervention. Newer research in adversarial learning questions just how much "learning" these networks are actually doing. Several theories have arisen regarding neural network "blind spots" which can be exploited to fool the network: for example, by changing a series of pixels, each imperceptible to the human eye, an attacker can render an image recognition model useless. This talk will review the current state of adversarial learning research and showcase some open-source tools to trick the "black box."
This talk aims to:
- present recent research on adversarial networks
- showcase open-source libraries for fooling a neural network with adversarial learning
- recommend possible applications of adversarial networks for social good
This talk will cover several open-source libraries and research papers on adversarial learning, including:
- Intriguing properties of neural networks (Szegedy et al., 2013): https://arxiv.org/abs/1312.6199
- Explaining and Harnessing Adversarial Examples (Goodfellow et al., 2014): https://arxiv.org/abs/1412.6572
- DeepFool: https://github.com/LTS4/DeepFool
- Deep-pwning: https://github.com/cchio/deep-pwning
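The core attack behind the "imperceptible pixel changes" described above is the Fast Gradient Sign Method (FGSM) from the Goodfellow et al. paper listed here. A minimal sketch, using a toy logistic-regression "model" in plain NumPy so the input gradient has a closed form (the model, names, and epsilon value are illustrative assumptions, not from any of the libraries above):

```python
import numpy as np

# FGSM sketch: nudge every input feature by epsilon in the direction
# (sign) of the loss gradient, so each pixel changes by at most epsilon
# while the loss increases as fast as possible.
rng = np.random.default_rng(0)

x = rng.random(64)            # toy "image": 64 pixels in [0, 1]
w = rng.standard_normal(64)   # fixed weights of a toy linear model
b = 0.0
y = 1.0                       # true label in {-1, +1}

def loss_gradient(x, y):
    """Gradient w.r.t. x of the logistic loss log(1 + exp(-y*(w.x + b)))."""
    margin = y * (w @ x + b)
    return -y * w / (1.0 + np.exp(margin))

epsilon = 0.1  # maximum per-pixel change; kept small to stay hard to see
x_adv = np.clip(x + epsilon * np.sign(loss_gradient(x, y)), 0.0, 1.0)

print(float(np.abs(x_adv - x).max()))  # never exceeds epsilon
```

Against a deep network the closed-form gradient is simply replaced by backpropagation to the input, which is what tools like DeepFool and Deep-pwning automate.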
Info
Day:
2017-12-28
Start time:
14:00
Duration:
01:00
Room:
Saal Adams
Track:
Resilience
Language:
en
Speakers
Katharine Jarmul