Not spam: a conversation with Shelby Shaw
Flash Fictions on Alternative Networks was an email publication featuring 11 short image-text collaborations between humans and automation software. The stories were selected through an open call and included automated poetry pieces, networked explorations of visual memory, and space exploration tales, among other narratives.
The publication was launched in August 2021 using Mailchimp's "Automated Mailings" feature, and more than 250 people subscribed to receive it over the first six weeks. On 19 September, The Photographers’ Gallery digital programme was notified via email that Omnivore, Mailchimp’s content moderation algorithm, had categorised the email publication as “in conflict with our Acceptable Use Policy”. The conflict seemed to originate from Anastasia, a work of fiction by Shelby Shaw exploring how spam bots exploit gender tropes, which can be visited here. Over the following weeks, Sam Mercer, digital producer at the Gallery, engaged in a conversation with Mailchimp’s human and automated customer service agents in an unsuccessful attempt to overturn Omnivore's decision. The exchange can be read here. The following conversation between Sam Mercer and Shelby Shaw reflects on the flaws of the automated content management black box.
What did you think about the customer support interaction with six or seven Mailchimp “agents”?
This email support thread reads like the chapters of an epistolary novel. I think it's important for someone reading the thread to know that it was a real conversation you had with multiple agents, as it can sound like it's mimicking a fake conversation with customer service. It's both ironic and funny, which makes a great parallel to what the work was about in the first place: a representation of a fake conversation sent through email. It's very much “life imitating art imitating life”. Except that there are consequences here.
Every time there's a new agent from Mailchimp, they never really say so-and-so passed your email on to me or I'll be working with you now. They just pick up where the other person left off, which raises the question of whether or not there is a real person on the other side of the thread. This also happens when you get into an online customer support chat and an agent says, Hi, I'm Bob and I'm your customer service rep today, where you can find yourself wondering if Bob is a real human being. A lot of companies make an effort to make it clear to customers that their agents are human. And after the chat they'll ask for your feedback with questions like: how much did you feel that the customer service rep cared about you?
When you move from chatting with online bots—where you get the sense that companies may mimic the "authenticity" of human discussions—into email conversations, dealing with online customer service can become a mental spiral. As soon as an agent signs off with a specific name, this heavy issue emerges for me, where I don't know if I should think of them as human or nonhuman. So when I respond to a customer service email, to the name that was signed off, I always feel humiliated, because I'm acting as if Bob really wrote to me while Bob might be a bot or just not Bob at all. Maybe Bob has become Bill, who then becomes Doug, who becomes Donna… depending on whoever picked up the shift after Bob. There's a weird passivity within your Mailchimp thread. There's Evelyn, there's Dee, there's Ralph, there's Dominic, all these people who you believe are human, writing back to you.
Reading the thread of your communication with Mailchimp, I understand that you've been dealing with them in a similar way to my piece, which makes our conversation here a bit metanarrative.
It makes me think about the history of chatbots and especially Joseph Weizenbaum's ELIZA, a computer programme that acted in the manner of a psychotherapist. It was impressive how people who knew that ELIZA was a bot responding from a script still felt like they could talk to it. There is a famous anecdote about Weizenbaum's own assistant, who asked for private time with ELIZA to have a therapy session with what she knew was a bot but which she still found helpful. And the wild thing about ELIZA is that the programme was ultimately super simple: it just picked up on certain syntactic parts of what people typed in, and then parroted them back in a very formulaic way that mimicked human language. So if someone said I'm sad, the programme would respond, Why do you feel sad? and then Tell me more. In the end ELIZA wasn't saying anything but just echoing specific keywords in a way that would prompt someone to reply. Your perception of reality is challenged in this communication.
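The keyword-and-template trick described above can be sketched in a few lines. This is only an illustrative toy, not Weizenbaum's original DOCTOR script; the rules and responses here are invented for the example.

```python
import re

# Toy ELIZA-style rules: a pattern to spot a keyword fragment,
# and a template that parrots the fragment back.
# (Invented examples, not Weizenbaum's actual script.)
RULES = [
    (re.compile(r"\bi(?:'m| am) (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     "Tell me more about feeling {0}."),
]

def respond(utterance: str) -> str:
    """Echo a matched fragment back inside a canned template."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(". "))
    # Default prompt when nothing matches, to keep the person talking.
    return "Please go on."

print(respond("I'm sad"))   # Why do you feel sad?
print(respond("Nice day"))  # Please go on.
```

The programme understands nothing; it only reflects the user's own words back, which is exactly why the exchange can still feel like a conversation.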
Perhaps accidentally, I gave them [in email 9] a link to your website where you refer to the work as “mimicking spam”, which was something that they then picked up on. I wondered if you had any thoughts about Mailchimp's review of your work?
It was amusing to think that there was a Mailchimp representative who looked at my website and tried to make sense of an artwork that was realised through their service. The weird thing about their visit to the website was that they quoted directly from it during your conversation.
“AI collaborative variations on stock photos of a non-existent woman, delivery via six ‘spam’ emails.”
It's interesting how my own description of the work was essentially used against it, despite my having written it as part of the project's context. The customer service agent seized on the word spam in my description as a keyword for the project, even though it appeared in quotation marks, and used it as proof that the work is spam content and therefore cannot be supported.
After that instance, I went into a bit of detail about the work and their response repeated what you’ve just said:
“We certainly understand where you are coming from here and that this is indeed not spam. Even personally, I can say that I see the goal behind this art form and what it communicates stereotypical ideas of spam and deeper implications.” [Email 12]
Mailchimp's customer service finally recognises what you've been explaining to them; they then suspend their bot, but the bot is too powerful and overrides the human commands, and apparently nobody at Mailchimp is able to override it back. That's a problem.
For me, this shows the difficulty of automated artificial intelligence systems making decisions that humans can quite clearly see are incorrect. Do you have any ideas about how you might have changed the work to make it pass through the automated Omnivore bot?
While I often work with digital and analogue media in my art, and my research explores the affect of technology, I’m simultaneously very distrustful of technology. If everyone becomes dependent on automated technology then the ability to just be human will be gradually lost. I sometimes wish we were living in a perpetual late nineties, pre-"smart" devices…
It's hard to say how I might've changed Anastasia to evade Omnivore. One of the reasons I didn’t change the work to fit Mailchimp's scheme is that we don't know how. At the very end of your conversation thread, Mailchimp tells you that the work will be classified as spam based on certain trigger words that their AI bot, Omnivore, picks up on, which will then block it. You ask whether they can let us know which words trigger the bot, then someone totally new writes back to you to say “we do not share publicly the triggers for Omnivore as they are constantly changing to keep up with industry trends and patterns and disclosing them could allow that information to disseminate to untoward actors. Your understanding is appreciated.” [Email 17] But what is changing in the process of keeping up with industry trends and patterns of email?
Most of us have probably received an email at some point that we weren't sure was spam or not. When writing Anastasia's emails, it was an interesting challenge to write clearly and yet make it obvious that something was off. Each email is also very brief. So it was quite difficult for me to create, because writing is my primary medium. To write spam, I had to pretend to be a bot pretending to be a person. I had to be sensitive to the line between legible and illegible, understandable and not understandable language in terms of grammatical rules, syntax, and tone.
I bring this up because the Mailchimp agents told us that the project was flagged as spam due to content, but they can't tell us the trigger words they found. I, however, didn't use any words that are out of the ordinary. I have typos or word combinations that don't make proper sentences, but I don't have any swear words or pornographic content, for example. I think it would be fishy for Mailchimp to admit that the reason they flagged Anastasia as spam relates to a syntactical or grammatical issue, because that raises questions about how proficient Mailchimp's users must be in the English language in order to send emails that don't get blocked as spam.
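The dynamic described in this exchange can be made concrete with a toy sketch. Omnivore's actual triggers are secret and undoubtedly far more sophisticated, so the word list and function below are entirely hypothetical; the point is only to show how a naive keyword match cannot distinguish a message that *is* spam from a description that merely *mentions* spam.

```python
# Hypothetical trigger-word filter. The words here are invented
# examples; Mailchimp does not disclose Omnivore's real triggers.
TRIGGER_WORDS = {"spam", "winner", "act now"}

def is_flagged(text: str) -> bool:
    """Flag a message if any trigger word appears anywhere in it."""
    lowered = text.lower()
    return any(word in lowered for word in TRIGGER_WORDS)

# A self-referential artwork description that only quotes the word
# "spam" still trips the filter, with no sense of context or intent.
print(is_flagged('delivery via six "spam" emails'))  # True
print(is_flagged("a story about visual memory"))     # False
```

A filter like this has no notion of quotation, irony, or artistic framing, which mirrors how the project's own contextual description was read as evidence against it.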
Suggested Citation: Mercer, S. & Shaw, S. (2022) 'Not spam: a conversation with Shelby Shaw', The Photographers’ Gallery: Unthinking Photography. Available at: https://unthinking.photography/articles/not-spam-a-conversation-with-shelby-shaw