This article is an overview of the projects 'Epic Hand Washing in the Time of Lost Narratives' and 'A Kitchen of One's Own', weaving a thread between the technical and the conceptual: the projects are linked historically by the writing and arguments of Virginia Woolf, technologically by computational juxtapositions of text and image, and poetically in the viewer’s experience of a speculative remix.
'Epic Hand Washing in the Time of Lost Narratives' by xtine burrough and Sabrina Starnaman is a speculative remix that confronts Epic Kitchens, a dataset of first-person cooking videos, with quotations from literature written during or about prior pandemics such as the bubonic plague and the global influenza pandemic of 1918-19. The project reveals the arbitrary nature of information preservation and highlights the constructed nature of digitised materials. Blurring the lines between art and archive, or information and dataset, this work furthers discourse about the digital dataset as an authority of knowledge curation.
When a computer vision algorithm recognises something in a picture, it soberly frames what it ‘sees’ in confetti-coloured rectangles, digital hues that contrast with the everyday shapes and colours we perceive with the naked eye. Each neatly labelled with a single category, these annotations offer answers but no explanations. To the uninitiated, it seems almost magical, or at least akin to some sort of intelligence.
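As a minimal sketch of the kind of output being described, consider what a COCO-style object detector typically hands back for an image: for each region, a bounding box, a single category label and a confidence score, with nothing attached that explains the judgement. The values and field names below are hypothetical, chosen only to illustrate the shape of such output.

```python
# Hypothetical detections in a COCO-style format: each region gets one box,
# one category, one confidence score -- an answer without an explanation.
detections = [
    {"bbox": (34, 20, 210, 180), "category": "person", "score": 0.97},
    {"bbox": (220, 140, 310, 200), "category": "bowl", "score": 0.61},
]

def annotate(detections):
    """Reduce each detection to the neat caption a viewer sees on screen."""
    return [f'{d["category"]} ({d["score"]:.0%})' for d in detections]

print(annotate(detections))  # ['person (97%)', 'bowl (61%)']
```

Everything about how the model arrived at 'person' or 'bowl' is absent from this structure, which is precisely the opacity the works discussed here take as their subject.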
Philipp Schmitt's 'Declassifier' uses a computer vision algorithm trained on COCO, an image dataset developed by Microsoft in 2014. In the work, photographs from Schmitt’s series 'Tunnel Vision' are processed by the algorithm and overlaid with the training images used to build it in the first place. In doing so, Schmitt exposes the myth of magically intelligent machines: the visual data from which machine learning algorithms learn to make predictions is hardly ever shown, let alone credited.
'The Future Is Here!', the title of Mimi Onuoha’s video project reflecting the human side of crowdsourced image labelling, is spot on. The stories I have been told by crowd workers from across the globe doing this work full-time indeed often have an eerily Gibsonian ring to them. Especially the stories from Venezuela.
I met with Kate Crawford and Trevor Paglen at the press preview of their exhibition 'Training Humans' at Osservatorio Prada in Milan. It was the morning of September 11th, not a neutral day to unthink photography and the power operations of vast populations of images. On the contrary, it was the most apt one to seriously consider Crawford and Paglen’s proposition that "images are no longer spectacle but they are in fact looking back at us, being actors in a process of massive value extraction".
In Heather Dewey-Hagborg’s artwork ‘How do you see me?’, commissioned for the Data / Set / Match programme at The Photographers’ Gallery, the artist explores how machines see us, a question that has quietly threaded through several areas of production and research over the past couple of decades. At the same time, an essential need has emerged to understand the processes and internal mechanisms that are usually hidden from, or mysterious to, the user: to examine who codes, trains and builds these mechanisms, and how this translates into what happens beyond the screen.
In 2019 the digital programme at The Photographers' Gallery launched 'Data / Set / Match', a year-long programme exploring new ways to present, visualise and interrogate contemporary image datasets. This introductory essay presents some key concepts and questions that make the computer vision dataset an object of concern for artists, photographers, thinkers and photographic institutions.