From Spectacle to Extraction. And All Over Again.

2019-11-29

I met with Kate Crawford and Trevor Paglen at the press preview of their exhibition Training Humans at Osservatorio Prada in Milan. It was the morning of September 11th – not a neutral day on which to unthink photography and the power operations at work in vast populations of images. On the contrary, it was the most apt one to seriously consider Crawford and Paglen’s proposition that "images are no longer spectacle but they are in fact looking back at us, being actors in a process of massive value extraction" (see minute 08:57).

The show was a highly anticipated one, being the result of a two-year collaborative project that saw Kate Crawford (NYU research professor and co-founder of the AI Now Institute in New York) and Trevor Paglen (internationally renowned, multi-award-winning artist and researcher) engaged in opening the so-called black box of AI. Their different disciplinary orientations converged around the question of what an image is in AI systems, who gets to decide its meaning, and for what purposes. These issues echo old questions that photography criticism has dealt with for decades, but which today need urgent revising, both in their formulations and in their answers, particularly in light of the epistemological challenges that computer vision and artificial intelligence are bringing to the wider infrastructures of meaning. I got ready to train myself to see what was lying inside the black box of AI – inside the white cube of Osservatorio Prada.

At the beginning of the press conference, Crawford and Paglen kicked off with a joke about how any conversation about AI inevitably starts with the CIA and some cruel cat experiments. They were referring to the material on display on the first floor of the exhibition: leaked documents from the facial recognition project financed by the CIA and carried out by Woodrow Wilson Bledsoe in 1963, and footage from the experiments Colin Blakemore undertook on cats in the 1970s while studying the animals’ visual cortex. This material served to put the main issue at stake into historical perspective: how both humans and non-humans see. AI is not a new business. And neither is vision. There is a history behind it, and even a pre-history (or archaeology) to it, as Crawford and Paglen rightly point out, involving centuries of experiments and trials at the crossover between science, technology, and creative practices.

Similarly, this project does not come out of the blue. It emerges out of a specific historical moment – one that we might refer to as 'image capitalism' or 'platform capitalism' or 'computational capitalism' – and in response to an emergent field of practice. This field is populated by a number of artists-researchers-practitioners who are preoccupied, much like Crawford and Paglen, with the social, ethical and political implications of AI and with the materiality of its computer vision algorithms. This is a point that needs to be made, as Crawford and Paglen lament the insufficient critical inquiry into these issues, whilst gracefully taking up the role of 'AI ambassadors' within the so-called institutional art establishment. However, a survey of current creative practices engaged with the algorithms of machine learning was never really on Crawford and Paglen’s agenda, nor would it have made for an interesting exhibition.

Their ambition is somewhat different: it is to rethink the fraught relationship between an image and a label in training sets, as we move from a regime of spectacle to one of extraction, under a new political economy of meaning-making whose lexicon and syntax are being built by the AI industry.

These were some of the issues we discussed on an improvised stage set up on the second floor of the exhibition space at Osservatorio Prada. Crawford and Paglen sat on two poufs we borrowed from the nearby 'instagrammable' installation ImageNet Roulette. There, all of us visiting the exhibition sat in front of two screens with cameras and put our faces and bodies up for scrutiny. In the presence of the algorithmic oracle, we waited, amused, for our ages, genders, and professions to be detected and labelled.
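To make the oracle's mechanism concrete: what an installation like ImageNet Roulette stages is, at bottom, a face detection step followed by a classification step that maps each cropped face onto a category drawn from a training set. Below is a minimal sketch of such a pipeline in Python, assuming a generic `classify` callable stands in for the installation's actual model – an illustration of the technique, not the work's real code:

```python
# A minimal sketch of a detect-then-label pipeline of the kind
# ImageNet Roulette stages. The classifier is a placeholder.
import cv2

# Standard OpenCV Haar cascade for frontal face detection
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def label_faces(frame, classify):
    """Detect faces in a BGR frame and attach a label to each crop.

    `classify` is a hypothetical stand-in for a network trained on the
    'person' categories of a dataset such as ImageNet: crop in, label out.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        crop = frame[y:y + h, x:x + w]
        results.append(((x, y, w, h), classify(crop)))
    return results
```

Pointed at a webcam feed, a loop like this is all it takes for a face to become a bounding box, and a bounding box to become a label.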

But was this a game or a serious game, to put it with Harun Farocki? Perhaps it was spectacle, desperately attempting to smuggle itself back into the vicious cycle of algorithmic extraction. After all, the two – spectacle and extraction – seem intimately related to me, as Art meets AI at the interface of the networked image.

With the duo set against a backdrop of flickering screens and a wallpaper of images drawn from one of the datasets on display – the Selfie dataset – we began our conversation. I was interested in hearing about the genealogy of their collaborative project and the research questions that animated this joint enterprise.

The conversation flowed as Crawford and Paglen unpacked the title of the show and the concept of “predator vision of AI” – one that is key to their proposition of AI as a value-extracting industry that is making fundamental interventions into the very fabric of contemporary visual culture.

They then moved on to explain what is really at stake when attempting to read and critically interpret datasets, underlining the racial and gender biases inscribed in the process of labelling and classifying images into discrete categories. They talked about the subtle gradient between description and judgement, and about the power operations at play as meaning gets codified into images – images which are themselves, after all, computational code.
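That last point is worth spelling out. To a machine, an image is nothing but an array of numbers, and its 'meaning' is whatever string the training set's label table attaches to it. The toy sketch below makes the point with invented categories and a random stand-in for a trained model; ImageNet's person branch inherited thousands of such terms, descriptive and judgemental alike, from WordNet:

```python
# Toy illustration: to the machine, an image is a grid of numbers, and
# its 'meaning' is a string looked up in the dataset's label table.
import numpy as np

# An 'image' is just a 224x224 grid of RGB values
image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Hypothetical person categories, standing in for the WordNet-derived
# terms that slide from description into judgement
labels = {0: "ballplayer", 1: "leader", 2: "loser"}

def classify(img: np.ndarray) -> str:
    # A random score vector stands in for a trained network's output
    scores = np.random.rand(len(labels))
    # The 'judgement' is nothing more than an argmax over scores
    return labels[int(scores.argmax())]

print(classify(image))
```

Nothing in the arithmetic distinguishes "ballplayer" from "loser"; the difference is inscribed entirely in the label table, which is to say, by the people and institutions who built it.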

I enquired about the importance of ImageNet within the historical trajectory of training datasets that the exhibition traces. Crawford and Paglen specifically touched upon the question of human labour and the use of Amazon Mechanical Turk workers: how this emerged as a defining trait of ImageNet’s monumental effort to provide the most comprehensive and diverse coverage of the image world.

We then moved on to one of the most controversial aspects of the project: the ethical issues involved in putting this kind of material on display without people’s prior awareness or consent. In other words, Crawford and Paglen’s choice, or “bet”, as they describe it, to reproduce at the level of the exhibition the same intrusion into people’s lives and privacy that AI performs when trained algorithms scrape images from social media accounts and online spaces. Their answer was well prepped: they pointed out that there is a structure in place by which visitors to the show can have their images removed, if they so wish, and pondered why we, as humans, should see any differently than machines do.

Yet more questions arose in my mind: As these images are put on display in an art gallery, don’t they revert to the status of spectacle? How can we critically think about circulation and its relation to the new political economy of meaning-making? What networks of power are involved in this very process of circulation, as images travel from computer labs to social media and art galleries and back into tech labs? Isn’t the exhibition ultimately replicating the same mechanisms Crawford and Paglen are criticising? These questions can be addressed to all art projects grappling with the politics of dataset representation, including the ones on view this year on the Media Wall of The Photographers’ Gallery as part of the programme Data / Set / Match.

My final question concerned the politics of archiving and the issue of training datasets vanishing from the Internet. It was meant to provoke a reflection on the current state of affairs, characterised as it is by a race to 'clean up' the Internet of what is now starting to be seen as highly controversial material. Training sets are vanishing overnight, risking the erasure of an important piece of AI history that, in spite of its errors and mistakes, could provide a roadmap for future developments in the field. Crawford and Paglen’s view on the topic is a considered one, as it is something the two discussed at length while writing the article 'Excavating AI: The Politics of Images in Machine Learning Training Sets', which was published a few days after this video interview.

Both the text and the project have attracted a healthy amount of critical response, generally favourable. However, a number of computer scientists and art critics have expressed concerns, mostly on Twitter, about Crawford and Paglen’s stance on the importance of preserving training sets, about the confusions that can emerge when datasets with very different histories and logics are exhibited together, and about Paglen’s selection and display choices on the occasion of his current commission at the Barbican Curve in London. This white noise is somehow welcome, considering the urgency of the issues at stake, which call into question wider dynamics of data governance, copyright, privacy and surveillance. In a sense, it acts as a reminder that we – as users, spectators, researchers, art practitioners and citizens who live on the lucky side of the 'digital divide' – need to maintain a degree of critical alertness when it comes to assessing the politics of representation in the algorithmic world. Primarily, because we are both the objects and the subjects of this cycle of spectacle-extraction-spectacle.

An exhibition like Training Humans is important not so much for the individual images it puts on display – which the neurons in our visual cortex process as easily as they forget – as for the critical debate it stages around the wider implications of these seemingly innocuous practices of image making and image sharing. This is a problem that does not call for a binary conversation, nor for one conducted by a 'few key players' only. Rather, it needs to be a much more nuanced and collective one. And that is why we ought to use our human intelligence, together with that of our machines, to pay attention to the granular as well as the systemic, to the materiality of images as well as their invisible networks of power and circulation. In other words, we need to keep questioning what we see, but also what we do not see; what is being said and what is not being said, by whom and in which context. After all, vision, as the founder of ImageNet, Fei-Fei Li, reminds us, “begins with the eyes but truly starts with the brain”.

*The interview can be watched as a video playlist here

Gaia Tedone is a curator and researcher with an expansive interest in the technologies of image formation and online curatorial practices.

Suggested Citation:

Tedone, G. (2019) 'From Spectacle to Extraction. And All Over Again.', The Photographers’ Gallery: Unthinking Photography. Available at: https://unthinking.photography/articles/from-spectacle-to-extraction-and-all-over-again