With two upcoming exhibitions that center on AI, the multimedia artist is drawing attention to a hidden world of modern phrenology, racist categorizations and the surreality of facial recognition.
On a warm early-August afternoon in his studio in the Bedford-Stuyvesant neighborhood of Brooklyn, the artist Trevor Paglen is wearing a white T-shirt, his head shaved, a concrete wall behind him. He is discussing, via Zoom, the ways in which the power of Silicon Valley might be less assured than many would think. “If you looked at early-20th-century railroad barons, you’d have been like, ‘These guys are unstoppable,’” he says. “Or if you’d gone back to the 14th century, with kings, everybody would have said, ‘Oh, this is just how it is, how God decided it.’ People think things are inevitable until they’re not.”
In his work, Paglen, 45, inspects biases, agendas and mistruths. He has photographed CIA black sites, learned how to scuba dive in order to see the physical cables that permit invisible data transfers, worked with filmmaker Laura Poitras on her documentary about whistleblower Edward Snowden and launched the first satellite sculpture into space. He also surfs in Hawaii, explores the deserts of the southwest United States and maps the cosmos. Though his art practice has taken countless forms, he has consistently worked to reveal the hidden power dynamics that, even to those who are aware of them, can feel infinite, inescapable and, at times, inevitable.
Today, that means turning his attention toward AI, where classification systems can perpetuate the apparatuses that allow for subjugation, he says. From surveillance to predictive policing, AI can reinforce the hierarchies that movements for equality like Black Lives Matter are protesting.
With upcoming exhibitions at Pace Gallery in London and at the Carnegie Museum of Art in Pittsburgh, Paglen investigates the shadowy history of AI, marrying his cerebral concepts with approachable art. In Bloom, which opens September 10 at Pace, two sculptures demonstrate the discriminatory history of facial recognition technologies. One, called The Standard Head, is based on a supposedly neutral face programmed into an early computer by Woody Bledsoe, a relatively unknown mathematician, from a composite of seven faces of different white men in the early 1960s. Bledsoe’s project, which likely received funding from the CIA, attempted to recognize images of faces based on their points of difference from this ur-face. Another artwork, The Model (Personality), is a bronze-plated phrenology skull on which Paglen inscribed the psychological categories used by some police forces to identify those purportedly more likely to commit crimes.
Growing up on Air Force bases, where his father worked as an ophthalmologist and his mother was an Episcopal priest, Paglen moved between Maryland (where he was born), Texas and California. When he was 12, his family relocated to a base in Wiesbaden, Germany. He struggled with the language and with fitting in. “It was a very radical kind of undoing of what your assumptions were about how life works and how you imagine yourself in the world,” Paglen says of his years in Germany. He went on to get a PhD in geography from the University of California, Berkeley, and an MFA from the School of the Art Institute of Chicago.
Today, he vacillates between a laid-back attitude and an intense focus on questions of global historical importance. “Trevor is that intriguing combination of someone who is both a quintessential California surfer and a fierce intellectual who is passionately committed to justice,” says Kate Crawford, a distinguished research professor at NYU and the co-founder of the university’s AI Now Institute and Paglen’s longtime friend and collaborator. “What makes him very unusual is that he has this absolutely uncompromising side in terms of his commitment to his work; at the same time, he is also really funny and chill and prepared to see the amusing side of where we are in the world, even though we’re in a very dark place right now.”
One of the central issues Paglen points out about AI is that it often lacks crucial nuance. An image of a person, when run through most AI algorithms, is given discrete, unwavering categories. (Is he trustworthy or untrustworthy?) Self-driving cars boil down complex decisions to this-or-that utilitarianism. (In an accident, is it better to kill two elderly people than one younger person?) AI doesn’t process reality with the same breadth that humans do.
In Bloom, there are large-scale photographs of flowers, which are colored and edited by an algorithm that’s been trained—via machine learning—in part by viewing other images of flowers. The images therefore appear almost like original photographs but have been tinged by AI, leading one to wonder where reality ends and a computer’s vision begins. In Opposing Geometries, which opens Friday at the Carnegie, Paglen similarly created composite photographs of historical figures, including gauzy portraits of the philosopher Simone de Beauvoir and the writer Samuel Beckett, by mixing images tagged as them by facial recognition programs. The result is quasi-realistic depictions with an abstracted blur, as in a Gerhard Richter painting. There are also photographs of famous landscapes, such as Yosemite and the Black Canyon of the Gunnison National Park, which Paglen photographed with an analog camera, digitized and then overlaid with lines and circles to demonstrate how artificial intelligence might attempt to “read” the image.
The result of all of these works is an unsettling mimesis. AI can replicate and, in certain cases like predictive policing or surveillance, determine the course of our lives. But as Paglen’s eerie photographs show, AI struggles to understand our precise reality. “It’s about the fact that algorithms at their heart are made by people who, like every human being, are prone to mistakes and preconceptions and biases,” says Dan Leers, curator of photography at the Carnegie and the curator of Opposing Geometries. “They need to fit the world into very neat and fairly rigid categories that don’t actually apply to the organic and chaotic nature of life.”
In January 2019, on a brisk morning in the Kreuzberg neighborhood of Berlin, Paglen sat in his art studio there working on a project and watched in shock as files he was accessing on a server maintained by Stanford University began to mysteriously disappear.
Paglen and Crawford had been working on a project using a dataset called ImageNet, created years earlier at Stanford and Princeton, which was composed of millions of photos pulled from the Internet, including about 1.2 million images of people. The images were being used to improve “computer vision,” a field of AI that trains machines to recognize and classify objects, including faces and the emotions and psychological characteristics therein. Using this training data from ImageNet, Paglen and Crawford were creating their own publicly available program called ImageNet Roulette, where anyone could upload a photograph of themselves and in seconds see how the AI algorithm would categorize them.
When ImageNet Roulette was released last September, many of the results were racist and sexist, with images of women and nonwhite people being described by the AI algorithm with negative words like “wrongdoer” or “offender” more frequently than those of white men. On that day in January, Paglen says, it seemed someone wasn’t happy with him using the data to make ImageNet Roulette, which would eventually go viral and make the often discriminatory and spurious nature of AI categorization public.
“It was a really intense moment to be watching this thing that you had been critiquing being pulled off the Internet in real time,” Paglen says. “Apparently it was super-urgent that it needed to be deleted because somebody was sitting in California at three in the morning deleting all this shit off of the servers and this website.” Paglen assumes an administrator at Stanford had been “given instructions to take it offline.” A representative for Stanford could not be reached for comment.
One critique of Paglen’s work on AI is that he omits its potential positive benefits. “When the narrative turns to ‘AI is necessarily evil; AI has to use these classification systems’—that’s where I start objecting,” says Christopher Manning, director of the Stanford Artificial Intelligence Laboratory. “AI is also helping people and making their lives freer and better, and we’re going to see more of that as time goes by.” But, Manning adds of Paglen’s work on AI, “I honestly do think he is right: People were naive and neglecting these issues for a long time.”
Another criticism is that Paglen’s art itself could be said to have been co-opted by these systems. One of the paradoxes of his market is that, as his work becomes better-known and more desirable, many of the people who can still afford to collect it might also be the kinds of affluent technological and entrepreneurial overlords in the crosshairs of his critique. Marc Glimcher, CEO of Pace, however, believes Paglen’s art is so socially influential that it exists largely outside of market considerations.
“I’m totally into his market and into having people figure out how to collect Trevor Paglen’s work,” says Glimcher, whose gallery signed Paglen in March. “But I also think it’s important to point out that the market has had the spotlight long enough in the art world. It’s so marginally interesting, but it has been the full story for so long that it’s nice to have an artist where that’s not the story.”
Amid the coronavirus pandemic, technology, particularly AI, is only becoming more central to our lives as we’re compelled to socialize via the Internet more than ever before. Paglen says it’s Sisyphean to try to escape AI entirely, because it’s so intrinsic to daily technology, though he does take some precautions. He uses the more private web browser Tor when he’s searching health symptoms, and the encrypted messaging service Signal for more sensitive conversations. And he doesn’t upload photographs of his family to the Internet. “I do not actually have the choice to not use a smartphone,” he says. “I do not actually have the choice to not use Google products or Amazon products. I mean that in a literal way. I cannot do my job. I cannot make a living. I cannot exist in the world without being a part of these systems.”
Late one August afternoon, after he’s returned to California and is driving through an area of Marin decimated by forest fires, he wonders whether art can still effect change, whether the march of AI is the foregone conclusion it’s often claimed to be.
“Art helps change common sense,” he says. “You cannot tell me that artists like Gran Fury and Gregg Bordowitz did not change the way that we think about gender and sexuality. You can’t tell me artists like Martha Rosler and Jenny Holzer didn’t change images of women. Art can be weird and [can] be questionable: It can pose conundrums that are useful to think through—even though they don’t have answers.”