We introduce natural adversarial examples: real-world, unmodified, and naturally occurring examples that significantly degrade classifier accuracy. We curate 7,500 natural adversarial examples and release them as an ImageNet classifier test set, which we call IMAGENET-A. This dataset serves as a new way to measure classifier robustness.
https://arxiv.org/abs/1907.07174
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, Dawn Song
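
For reference, IMAGENET-A can be used as a drop-in test set: run a pretrained ImageNet classifier over the images and report top-1 accuracy. The sketch below is illustrative rather than the authors' official evaluation code; it assumes a PyTorch/torchvision setup, a local `./imagenet-a` directory organized into one subdirectory per WordNet ID, and a standard `imagenet_class_index.json` file mapping ImageNet class indices to WordNet IDs.

```python
# Minimal evaluation sketch (assumptions noted above): score a pretrained
# ResNet-50 on ImageNet-A by top-1 accuracy over all 1,000 ImageNet classes.
import json

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("./imagenet-a", transform=preprocess)
loader = DataLoader(dataset, batch_size=64, shuffle=False, num_workers=4)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

# Map each ImageNet-A folder (a WordNet ID) to its index among the model's
# 1,000 output classes, using the assumed class-index file.
with open("imagenet_class_index.json") as f:
    class_index = json.load(f)  # e.g. {"0": ["n01440764", "tench"], ...}
wnid_to_idx = {wnid: int(i) for i, (wnid, _) in class_index.items()}
folder_to_imagenet = torch.tensor([wnid_to_idx[w] for w in dataset.classes])

correct, total = 0, 0
with torch.no_grad():
    for images, folder_labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == folder_to_imagenet[folder_labels]).sum().item()
        total += images.size(0)

print(f"ImageNet-A top-1 accuracy: {correct / total:.1%}")
```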
