ImageNet Roulette (Trevor Paglen, 2019) uses a neural network trained on the “people” categories from the ImageNet dataset to classify pictures of people. It’s meant to be a peek into how artificial intelligence systems classify people, and a warning about how quickly AI becomes horrible when the assumptions built into it aren’t continually and exhaustively questioned.
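For readers unfamiliar with how such a classifier behaves, the minimal sketch below (not ImageNet Roulette's own model, which was trained on the "person" categories of the full ImageNet hierarchy) uses a standard pretrained ImageNet classifier from torchvision to make one point concrete: the model can only ever answer with labels that were baked into its training data. The model choice, the helper function, and the image path are illustrative assumptions, not part of the project.

```python
# Minimal sketch, assuming PyTorch and torchvision are installed.
# This is NOT the ImageNet Roulette model; it uses the standard 1000-class
# ImageNet benchmark weights purely to show how a classifier's possible
# outputs are fixed by the labels in its training data.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()          # resize, crop, normalize as the model expects
labels = weights.meta["categories"]        # the label vocabulary fixed at training time

def classify(path: str, top_k: int = 5):
    """Return the model's top-k labels for an image; whatever the picture
    shows, the answer is drawn only from the training-time label set."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    top = torch.topk(probs, top_k)
    return [(labels[i], float(p)) for p, i in zip(top.values, top.indices)]

# Hypothetical usage:
# print(classify("portrait.jpg"))
```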
NOTES AND WARNING:
ImageNet Roulette regularly classifies people in racist, misogynistic, cruel, and otherwise horrible ways. This is because the underlying training data contains those categories (and pictures of people that have been labelled with those categories). We did not create the underlying training data responsible for these classifications. It comes from a popular dataset called ImageNet, which was created at Princeton and Stanford Universities and is a standard benchmark for image classification and object detection.
ImageNet Roulette is meant in part to demonstrate how bad politics propagate through technical systems, often without the creators of those systems even being aware of it.