I’m looking at you, looking at me

November 2019

Doreen A. Ríos is an independent curator, lecturer and researcher specialised in digital culture.


We see a car approaching, carrying the editor who is making a compilation of light swallowing time. A meteor, a different sunset. The car turns on to the driveway of a home. The car turns on a machine to chew the dark. We see an editor. (Rapoport 2017)

© Kate Elliott

How do machines see us? This question has been quietly threading through several areas of production and research over the past couple of decades. From James Bridle’s Tumblr blog the New Aesthetic, where he built a collection of images that, in his words, “makes fun, I mean critiques, the new ways of seeing the world, an echo of the society, technology, politics and people that co-produce them”, to the vulnerability of technology as seen by Matthew Plummer-Fernandez in another Tumblr blog, Algopop, where he collects examples of algorithms hilariously failing to deliver what is expected of them, we can see an essential need to understand the processes led by black boxes. By ‘black boxes’ here I mean “a usually complicated electronic device whose internal mechanism is usually hidden from or mysterious to the user” (Merriam-Webster 2019); with this, I am pointing to those who code, train and build these mechanisms, and to how that work translates into what happens outside of the screen.

Although these collections don’t directly answer the question of how machines see us, they certainly reveal the asymmetries and lack of representation at play by dematerialising, mainly through images, the world as seen by machines.

In Heather Dewey-Hagborg’s artwork ‘How do you see me?’, commissioned for the Data/Set/Match programme at The Photographers’ Gallery, the artist responds to several questions surrounding how machines see us. Dewey-Hagborg approaches this subject not only by understanding and showing how machine learning systems see, but also by tricking such systems through adversarial processes, leading them to recognise patterns that seem rather abstract to the human eye but that the systems read as her face.

In this piece, the artist seeks to get closer to the one 'behind the camera', or perhaps more accurately, the one(s) coding the system. She thereby establishes a point of contact where the interaction can loop forever: I’m looking at you, looking at me, looking at you, looking at me… (∞). This reciprocal action works as an interesting extension of the feedback loop we encounter in screen-based exchanges, but also of the structures through which many current AI systems are trained by turkers, who in turn rely on other AI interfaces to assist with the task. The human–machine collaboration is therefore, always, a constant loop: from the gathering of data, to its interpretation, all the way to its algorithmic implementation.

It is interesting to consider the motivation behind the development of this artwork. Dewey-Hagborg starts by stating: “We live in a world in which we are constantly looked at, studied, analysed. Cameras are everywhere. These systems know a tremendous amount about me - but what do I know about them?”. This is probably the key question behind how machines see us, in a world where the incursion of facial recognition systems and AI has been coupled with an asymmetric flow of information and an ever more elusive understanding of how they work. In other words, it is not enough to point at, or even to open, the black box; it is also important to acknowledge who created it, what it is used for, what the goals behind it are and, last but not least, to share its contents.

‘How do you see me?’ is divided into two phases, facial detection and the recognition of Heather’s face, each one reflecting on questions such as: what does a face mean to a machine learning system; what elements make up a face; how is this shaped by the forces behind it; and what does all of this mean for the contemporary economic and socio-political climate? Approaching the first phase, we could start by saying that for us humans, as Daniel Rubinstein suggested at the ‘What Does the Dataset Want?’ event, these questions call for a philosophical approach, since a face is a social construct and what it communicates depends on several cultural and socio-political factors. For a machine, however, a face is translated into a series of numbers, which ends up vastly oversimplifying the diversity of what a face can look like. In the following images we can see what a face detection system can recognise as a face; details that the artist used for the first phase of this project.
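To make that reduction concrete, the following sketch (a deliberately toy illustration, not any system used in the artwork) collapses a face into a handful of numbers and then judges two faces to be 'the same' purely by the distance between those numbers:

```python
import numpy as np

def embed_face(pixels: np.ndarray, dim: int = 8) -> np.ndarray:
    """Collapse a face image into a short vector of numbers (a toy 'embedding').

    Real recognition systems learn this mapping with deep networks; here we
    simply average horizontal bands of pixels to show the principle: for the
    machine, a face becomes nothing more than a list of numbers.
    """
    h = pixels.shape[0] // dim
    return np.array([pixels[i * h:(i + 1) * h].mean() for i in range(dim)])

def same_face(a: np.ndarray, b: np.ndarray, threshold: float = 10.0) -> bool:
    """Two faces 'match' when their number-vectors are close enough."""
    return bool(np.linalg.norm(embed_face(a) - embed_face(b)) < threshold)

rng = np.random.default_rng(0)
face = np.full((64, 64), 120.0)                     # a flat stand-in "face"
variant = face + rng.normal(0, 2, size=(64, 64))    # the same face, slightly varied
other = np.full((64, 64), 200.0)                    # a very different image

print(same_face(face, variant))   # True: close together in number-space
print(same_face(face, other))     # False: far apart in number-space
```

Everything a face socially communicates has vanished here; all the system retains is proximity between vectors, which is precisely the simplification the artwork interrogates.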

In the second phase of the project the black box of the facial recognition system is officially opened. The artist, however, shares its contents by deceiving the system, and those behind it, making it see what she wants it to see. What becomes important about this process has to do not only with becoming the one who sets the rules, but also with showing that representing reality is not a necessary concern in the development of such facial recognition systems.
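The deception at work here can be miniaturised in code. The sketch below (an invented toy, not the artist's actual method) fools a linear 'face' classifier by nudging every pixel a tiny amount in the direction that most raises its score, a simplified cousin of the fast gradient sign method used in adversarial machine learning:

```python
import numpy as np

# A toy linear "face" classifier: score = w.x, "face" when the score is positive.
# The weights are invented for illustration; a real detector learns millions.
rng = np.random.default_rng(42)
w = rng.normal(size=256)            # weights for a flattened 16x16 image

def is_face(x: np.ndarray) -> bool:
    return float(w @ x) > 0

# An input the classifier firmly rejects as "not a face".
x = -0.1 * w / np.linalg.norm(w)
print(is_face(x))                   # False

# Adversarial step: move every pixel slightly in the direction that most
# increases the score. For a linear model the gradient is w, so the step
# is simply sign(w), scaled by a tiny per-pixel budget eps.
eps = 0.05
x_adv = x + eps * np.sign(w)

print(is_face(x_adv))               # True: the system now "sees" a face
```

The perturbation is bounded by eps per pixel, barely a change to a human eye, yet the classifier's verdict flips entirely; what it recognises as a face need not resemble one.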

What we witness in these figures, displayed on a loop on the Media Wall of The Photographers’ Gallery, is a dance of unknown coded faces as well as a revelation of their materiality: a materiality that does not necessarily relate to objects or tangible matter, but one that has to do with processes, data and virtual interfaces. This resonates with Christiane Paul’s definition of neomateriality, which can be understood as “a concept used to describe an objecthood that incorporates networked digital technologies, and embeds, processes, and reflects back the data of humans and the environment, or reveals its own coded materiality and the way in which digital processes see our world” (Paul 2015). Neomateriality becomes incredibly important, not only in Dewey-Hagborg’s production but also in her processes and in her active questioning of the way in which certain areas of society seem comfortable delegating sensitive decisions to clearly non-neutral algorithms.

© Tim Bowditch

It seems useful to bring the concept of the proxy into the production and development of ‘How do you see me?’, since what the artist proposes is to become the human bridge that rewires the algorithmic content set within a facial recognition system and hacks it. Revisiting the etymology of the proxy, we find that it refers to the authority or power to act for another; the term has since been appropriated by digital lingo, which makes sense of it within a context of data exchange in which those behind it have been forgotten. As the Research Centre for Proxy Politics pointed out at their 2017 symposium in Berlin, The Proxy and Its Politics – On evasive objects in a networked age:
...proxies are now emblematic of a post-democratic political age, one increasingly populated by bot militias, puppet states, and communication relays. Thus, the proxy works as a dialectical figure that is woven into the fabric of networks, where action and stance seem to be masked, calculated and remote-controlled. The proxy thrives within a habitat defined by sameness, characterised by constant monitoring of human and non-human actors.

Heather Dewey-Hagborg, therefore, actively responds to this preconditioned homogeneity by mediating as the in-betweener and hijacking the role of the proxy itself. 'How do you see me?' makes visible ways to disrupt and deceive an algorithmic process, and thus becomes a translator between the machinic eye of facial recognition systems and the politics around the datasets that feed them. Furthermore, translating the hidden procedures behind facial recognition software can empower others to understand the vulnerability of such processes, and even to become new nodes in this web of recodification.