I proposed what would become Lacework in the summer of 2019. In my proposal, I described a cycle of videos curated from MIT's 'Moments in Time' dataset, each slowed down, interpolated, and upscaled immensely into imagined detail, one flowing into another like a river...
I write this from my small New York apartment in my fourth month of isolation. The pandemic has required each of us to slow down and do less, and I keep thinking of a childhood friend who once told me, “We’re human beings, not human doings”. Even as a teenager, I knew this was an important paradigm shift: it meant that we could rethink how we define ourselves beyond endless production and consumption. Allowing oneself to be a human being seemed to resist the gig economy, workerism, the idea of “a calling”, all the ways that society has been structured to fold a person’s work into their core identity. The way busyness became a humblebrag. Human doings.
Lacework is a new work by Everest Pipkin that uses artificial neural networks to reinscribe the videos of MIT’s Moments in Time dataset. Using algorithms that stretch time and add detail to images, Pipkin creates a series of hallucinatory slow-motion vignettes from the videos of everyday actions that form the collection. By manipulating the source videos of the MIT dataset, Lacework presents a river of these moments, captured as if in amber, flowing from one to another in a cascade of gradual, unfolding detail.
'Epic Hand Washing in the Time of Lost Narratives' by xtine burrough and Sabrina Starnaman is a speculative remix that confronts Epic Kitchens, a dataset of first-person cooking videos, with quotations from literature written during or about prior pandemics such as the bubonic plague and the global influenza pandemic of 1918-19. The project reveals the arbitrary nature of information preservation and highlights the constructed nature of digitised materials. Blurring the lines between art and archive, or information and dataset, this work furthers discourse about the digital dataset as an authority of knowledge curation.
This article is an overview of the projects 'Epic Hand Washing in the Time of Lost Narratives' and 'A Kitchen of One's Own', weaving a thread between the technical and the conceptual: the projects are linked historically by the writing and arguments put forth by Virginia Woolf, technologically by computational juxtapositions of text and image, and poetically in the viewer’s experience of a speculative remix.
When a computer vision algorithm recognises something in a picture, it soberly frames what it ‘sees’ in confetti-coloured rectangles, digital hues that contrast with the everyday shapes and colours we see with the naked eye. Each neatly labelled with a single category, these annotations highlight answers but don't give explanations. To the uninitiated, it seems almost magical, or at least akin to some sort of intelligence.
Philipp Schmitt's 'Declassifier' uses a computer vision algorithm trained on COCO, an image dataset developed by Microsoft in 2014. In the work, photographs from Schmitt’s series 'Tunnel Vision' are tested and overlaid with the images that were used to train the algorithm in the first place. By doing so, Schmitt exposes the myth of magically intelligent machines; the visual data from which machine learning algorithms learn to make predictions is hardly ever shown, let alone credited.
'The Future Is Here!', the title of Mimi Onuoha’s video project reflecting the human side of crowdsourced image labelling, is spot on. The stories I have been told by crowd workers from across the globe doing this work full-time indeed often have an eerily Gibsonian ring to them. Especially the stories from Venezuela.
I met with Kate Crawford and Trevor Paglen at the press preview of their exhibition Training Humans in Milan at Osservatorio Prada. It was the morning of September 11th – not a neutral day to unthink photography and the power operations of vast populations of images. On the contrary, it was the most apt one to seriously consider Crawford and Paglen’s proposition that "images are no longer spectacle but they are in fact looking back at us, being actors in a process of massive value extraction".
In Heather Dewey-Hagborg’s artwork ‘How do you see me?’, commissioned for the Data/Set/Match programme at The Photographers’ Gallery, the artist explores how machines see us, a question that has quietly run through several areas of production and research over the past couple of decades. At the same time, an essential need has emerged to understand the processes and internal mechanisms that are usually hidden from, or mysterious to, the user: to comment on those who code, train, and build these mechanisms, and on how this translates into what happens beyond the screen.