classification
TPG
2023-07
An article by Linda Kronman that "analyses the cybernetic loop between human and machine classification by examining artworks that depict instances of bias when machine vision is classifying humans …
TPG
2023-04
These photos have been manipulated in a process of selectively re-imagining male-coded parts of the artist's appearance within the images, using a commonly available out-painting text-to-image AI tool.
TPG
2021-01
A journey into the art of face analysis and classification.
TPG
2020-09
Javier Lloret Pardo - 'Annotators View'. Image annotators constitute the hidden labour of AI vision. The current ubiquitous techniques of image classification, segmentation and scene description …
I write this from my small New York apartment in my fourth month of isolation. The pandemic has required each of us to slow down and do less, and I keep thinking of a childhood friend who once told me, “We’re human beings, not human doings”. Even as a teenager, I knew this was an important paradigm shift: it meant that we could rethink how we …
2020-04
This article is an overview of the projects 'Epic Handwashing in a Time of Lost Narratives' and 'A Kitchen of One's Own', weaving a thread between the technical and the conceptual: the projects are linked historically by the writing and arguments put forth by Virginia Woolf, technologically by computational juxtapositions of text and image, as well …
2020-04
Philipp Schmitt's 'Declassifier' uses a computer vision algorithm trained on COCO, an image dataset developed by Microsoft in 2014.
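As a rough illustration of what "trained on COCO" means in practice, here is a minimal sketch (not Schmitt's own code) that loads a COCO-pretrained detector from torchvision and prints every labelled region it finds in a photograph, rather than reducing the image to a single answer; the input filename is a placeholder.

# A minimal sketch, not Philipp Schmitt's implementation: run a COCO-pretrained
# detector over a photograph and list every object category it claims to see.
import torch
from PIL import Image
from torchvision.models import detection
from torchvision.transforms.functional import to_tensor

# Faster R-CNN pretrained on COCO (80 everyday object categories).
weights = detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = detection.fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

image = Image.open("photo.jpg").convert("RGB")  # placeholder filename
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Print every detection above a modest confidence threshold.
for label, score in zip(prediction["labels"], prediction["scores"]):
    if score.item() > 0.5:
        print(f"{weights.meta['categories'][int(label)]}: {score.item():.2f}")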
In September 2019 the ImageNet creator Fei-Fei Li gave a talk at The Photographers' Gallery, walking through the events and key people that led to the creation of visual datasets.
In 2019 The Photographers' Gallery digital programme launched 'Data / Set / Match', a year-long programme that explores new ways to present, visualise and interrogate contemporary image datasets. This introductory essay presents some key concepts and questions that make the computer vision dataset an object of concern for artists, photographers, …
TPG
2019-09
Strike (with) a Pose: Neural networks are easily fooled by strange poses of familiar objects. Despite excellent performance on stationary test sets, deep neural networks (DNNs) can fail to generalize …
An article from the NYT Privacy Project on The Racist History of Facial Recognition. Starting with early scientific facial analysis in the 19th century trying to locate through “pictorial …
We introduce natural adversarial examples – real-world, unmodified, and naturally occurring examples that cause classifier accuracy to significantly degrade. We curate 7,500 natural adversarial …
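The measurement described above boils down to scoring a stock ImageNet classifier on the curated images and watching top-1 accuracy fall. The sketch below is an assumption-laden illustration: it expects a class-per-subdirectory folder whose indices already line up with the model's label indices (the released dataset actually needs a wnid-to-index mapping), and the path is a placeholder.

# Minimal sketch of that measurement: top-1 accuracy of an ImageNet-pretrained
# classifier on a folder of curated "natural adversarial" images.
# Assumes the ImageFolder class indices align with the model's label indices,
# which for the released dataset requires an extra wnid-to-index mapping.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

dataset = datasets.ImageFolder("natural_adversarial_examples/",  # placeholder path
                               transform=weights.transforms())
loader = DataLoader(dataset, batch_size=32)

correct = total = 0
with torch.no_grad():
    for images, targets in loader:
        correct += (model(images).argmax(dim=1) == targets).sum().item()
        total += targets.numel()

print(f"top-1 accuracy: {correct / total:.1%}")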
The reasons biases against women or people of colour appear in technology are complex. They’re often attributed to data sets being incomplete and the fact that the technology is often made by people …
Layers of Abstraction: A Pixel at the Heart of Identity (Shinji Toya and Murad Khan, 2019). This project centres around a critical examination of the limits of categorisation in machine learning …
ImageNet Roulette (Trevor Paglen, 2019) uses a neural network trained on the “people” categories from the ImageNet dataset to classify pictures of people. It’s meant to be a peek into how artificial …
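ImageNet Roulette's own model and the ImageNet "person" synsets it was trained on are not publicly packaged, but the mechanism can be loosely illustrated with a stock ImageNet-1k classifier: whatever the photograph shows, it is forced into one of the taxonomy's fixed categories. A hedged sketch, with a placeholder filename:

# Loose analogy only, not Paglen's model: push a portrait through a stock
# ImageNet-1k classifier and see which of its 1,000 fixed categories it lands in.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

image = Image.open("portrait.jpg").convert("RGB")  # placeholder filename
batch = weights.transforms()(image).unsqueeze(0)

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)[0]

# The five categories the taxonomy forces this image into.
for prob, idx in zip(*probabilities.topk(5)):
    print(f"{weights.meta['categories'][int(idx)]}: {prob.item():.2%}")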
TPG
2016-09
prostheticknowledge: Generating Videos with Scene Dynamics. Proof-of-concept computer science research from Carl Vondrick, Hamed Pirsiavash and Antonio Torralba can generate video content from a …