We introduce natural adversarial examples – real-world, unmodified, naturally occurring examples that cause classifier accuracy to degrade significantly. We curate 7,500 natural adversarial examples and release them as an ImageNet classifier test set that we call IMAGENET-A. This dataset serves as a new way to measure classifier robustness.
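Since IMAGENET-A is used as a drop-in test set, robustness is measured as plain top-1 accuracy over its examples. A minimal sketch of that computation (dataset loading is omitted; the function name and plain-list inputs are illustrative, not from the paper):

```python
# Sketch: top-1 accuracy over (prediction, label) pairs, as one would
# compute when evaluating a classifier on the IMAGENET-A test set.
# Inputs are plain lists of class indices; dataset loading is omitted.

def top1_accuracy(predictions, labels):
    """Return the fraction of examples whose predicted class matches the label."""
    if not labels:
        raise ValueError("labels must be non-empty")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)
```

For example, `top1_accuracy([1, 2, 3], [1, 0, 3])` returns `2/3`; a sharp drop in this number relative to the standard ImageNet validation set is what the dataset is designed to expose.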


Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, Dawn Song
