That even the disfigured corpse of a child was not sufficient to move the white gaze from its habitual cold calculation is evident daily and in a myriad of ways, not least the fact that this painting exists at all. In brief: the painting should not be acceptable to anyone who cares or pretends to care about Black people because it is not acceptable for a white person to transmute Black suffering into profit and fun, though the practice has been normalized for a long time … Through his mother’s courage, Till was made available to Black people as an inspiration and warning. Non-Black people must accept that they will never embody and cannot understand this gesture: the evidence of their collective lack of understanding is that Black people go on dying at the hands of white supremacists.
It was a point well made, but lost on the commentators, and the Whitney, who jumped to the defense of Schutz (and white free speech). It should be well established by now that the mere reproduction of the spectacle of black suffering does not challenge the system under which this suffering is normalized; cops with body-cams still kill black people with impunity. Public outrage also followed (white) poet Kenneth Goldsmith’s edited recital of Michael Brown’s autopsy report and an art exhibit depicting Brown’s corpse with a mannequin under a sheet in a Chicago gallery. Anger arose precisely because Brown’s body, like Till’s in Schutz’s painting, was treated as a pure commodity alienated from Brown-the-person. The only thing perhaps more offensive than those artistic offerings is the truth they speak to—Brown’s body was already a media spectacle and already dehumanized by white supremacy.
The difference between the artists’ use of this horrible truth and the response of the Ferguson protesters and Black Lives Matter activists following Brown’s death, or Mamie Till’s act of resistance many decades before, was that the art doubled down on the spectacle-as-commodity. In contrast, the activism engaged in what NYU professor of media culture and communication Nicholas Mirzoeff calls “persistent looking,” an act of defiance against historically racist technologies of visuality. Mirzoeff highlights, for example, the repetition of the chants “Hands Up, Don’t Shoot!” and “I can’t breathe!” as powerful, repeated refusals to simply move beyond the deaths of Brown and Eric Garner. These protest rituals are calls to stay in the moment. “Die-in” demonstrations and the repetition of deceased people’s names—tactics with their origins in AIDS crisis protest—similarly refuse to let us move along. In The Appearance of Black Lives Matter (2017), Mirzoeff wrote of the “Hands Up, Don’t Shoot!” chant and physical action:
It concentrates our attention on the vital moment (in the sense of living as well as essential) before the definitive violence. Those witnessing the action feel in that repeated present their choices for the future. The action prevents the media from its usual call for closure, healing, and moving on. Protesters choose to remain in that moment that is not singular, but has already been repeated. For Michael Brown, there was no choice. But when protesters reenact, they are making choices.
If Mirzoeff’s appreciation of this chant seems overblown, it’s worth recalling the speed with which the widely circulated, intolerable image of Alan Kurdi fell from the forefront of the public conscience, and with it vanished the public’s concern for refugees. In the first half of 2018, nearly three years after Alan’s death, 1,130 people perished trying to cross the sea from North Africa to Europe. The ability of an image to produce outrage is one thing; to sustain it is another in a media environment in which 500 million tweets and 700 million Snapchats are posted every day. Turning affective outrage into effective resistance is a greater challenge still. Mirzoeff himself noted in a 2016 interview, “One can see that a single image, a single iteration, however powerful that image is, until it is taken up as part of a collective project, can’t sustain the change we want to see longer than for a short period of time.” As such, he highlights the strategic value of the “persistent looking” evoked by direct actions in which activists place their bodies in spaces, streets and intersections, repeating the demand, like Mamie Till’s, not to look away.
Such direct action is not only an invocation of public grief for those who have been killed; it is an insistence on the presupposition that a life mattered and was grievable before it needed grieving.
Being Numerous
One photo from the sometime-halcyon days of Occupy Wall Street has come to haunt me. The image, which was used as the cover for the second issue of Tidal, Occupy’s theory journal, at first glance seems to capture a trenchant insurrectionary tableau. A massive mob of protesters appears on the cusp of breaking down a fence, held up by a measly line of riot cops defending the emptiness of Duarte Square, a drab expanse of concrete in downtown Manhattan. Look closer, though, and a different scene comes into focus: no more than a scattered handful of protesters are actually pushing against the fence. The rest of the crowd, pressed tightly against each other, hold smartphones aloft, recording each other recording each other for the (assumed) viewers at home. The fence of Duarte Square was barely breached that December day in 2011.
Two years after that picture was taken, whistle-blower Edward Snowden’s National Security Agency leaks asserted totalized surveillance as an undeniable fact of the American now. Years later still, it’s hard to remember that it was ever a revelation that we are data bundles in government dragnets. That Occupy photo—in which a desire for insurrectionary action was paired with advanced technocapital’s surveillance-control apparatus—telegraphed a fraught dynamic that in the years since has become impossible to ignore, but all too easy to forget.
The photo captured the near knee-jerk proclivity many participants in mass protests have developed to recount every action live over social media, with the idea that this was inherently bold and radical: taking the narrative of protest into our own hands, our own broadcast devices, refusing reliance on traditional media institutions. Regardless of where you stand on the question of whether social media platforms have helped, hindered, or otherwise shaped protest movements (or all of the above), the Tidal image took on a different valence in the years following the Snowden revelations. For the smartphones in that photograph were not only a hindrance to the crowd’s purported effort to swarm Duarte Square; they were surveillance devices.
This much became undeniable by 2014 (at the latest): The devices and platforms we rely upon—to communicate, gather information and build solidarity—offer us up as ripe for constant surveillance. The surveillance state could not be upheld without its readily trackable denizens. To sidestep our tacit complicity in this would be to fail to recognize how deep our participation in our own surveillability runs—it’s how we live.
Reflecting on that time, I can’t remember what it was like for those thoughts to feel like something new and in need of saying. Surveillance is a condition of social life, given the ubiquity of social media; it seems almost quaint (and this is no good thing) that just a few years ago we were shocked to learn the extent of our mass surveillance state. This is, of course, how we live. But the NSA leaks were, at the time, a revelation. They shed light on a fearsome nexus between the government, communications and tech giants. And beyond this, they offered a lesson in the challenges of fighting a system of control in which we are complicit.
There was a certain folly, but also a commendable optimism, in the immediate, outraged responses to Snowden’s leaks. Journalists and activists sought an object, a vessel, a villain. Who is to blame? Where are the bad guys? How do we fight back? There were obvious culprits in need of censure: whether it was then–director of national intelligence James Clapper, then–NSA director Keith Alexander, Google, AT&T, or the PRISM data collection and surveillance program, we looked to blame someone or something we could isolate and locate. Politicians’ and activists’ efforts centered on top-down NSA reform and demands for tech giants to be more transparent. As such, they missed the nuance and gravity of what was at stake.
It was a prime moment for bipartisan pantomime. Democrat and Republican lawmakers came together in performative outrage to demand an end to the NSA’s bulk collection of Americans’ communications data; signed into law in June 2015, the preposterously named USA Freedom Act gained traction primarily on this point. It proposed some limits on bulk data collection of Americans’ communications, but it also restored some of the worst provisions of George W. Bush’s 2001 Patriot Act. The Obama White House assembled advisory committees who duly issued lengthy reports and promised more reviews to come. Perhaps worst of all, scrambling for position as the “good guys,” tech leviathans, including Google and Facebook, pushed publicly for greater transparency. It seems laughable now, after Facebook’s opaque policies and products may have helped sway an election; nevertheless, every week brings a new promise of “transparency” from the great clouds of Silicon Valley.
The fight for bold executive and legislative reform of state surveillance came to little. For months during 2013 and 2014, we talked about the NSA. And then we didn’t. Government agencies are still using programs like PRISM, launched in 2007, which authorizes the NSA, in concert with pretty much every major Silicon Valley company, to demand vast reserves of stored data, including access to our private communications without warrants. None of the programs revealed in Snowden’s trove, which incurred such public outrage at the time, has really stopped. The corporate-government surveillance nexus is going nowhere; the best these reform efforts had to offer was a surveillance state with mildly different contours.
By focusing on legible seats of power, activist groups and outraged political players largely sidestepped the question of how surveilled subjects uphold—cannot but uphold—their position as surveilled.
A lot of discussions about government surveillance were framed counterfactually: whether we would have consented to our current level of mass surveillance had we known what we were signing up for as digital denizens. In 2014, James Clapper admitted that the NSA should have been more open with the public about the ubiquitous hoarding of their communications. But he doubled down: “If the program had been publicly introduced in the wake of the 9/11 attacks, most Americans would probably have supported it,” he said. Clapper couldn’t help but resort to a perverse conditional logic in which the public would have consented to what they could not, in fact, consent to. His post hoc assertion that the public would have agreed to mass government surveillance, had they been given advance warning, is untestable—we can’t go back to that moment. As Ben Wizner, legal adviser to Snowden and the director of the American Civil Liberties Union’s Speech, Privacy and Technology Project, commented in response to Clapper, “Whether we would have consented to that at the time will never be known.” We have not consented to our own constant surveillance, even if the way we live has produced it.
Since 2014, conversations about surveillance through techno-capitalism have shifted away from a focus on unconstitutional government spy programs, and toward questions about how, and to what corporate and political ends, tech giants extract and use our data. Or, more precisely, how these corporations use (and produce) us qua data; the data, after all, is not ours. This discursive shift makes sense; to focus primarily on the NSA incorrectly frames contemporary surveillance as a problem of unwanted, oppressive government scrutiny. This is, no doubt, a problem—as anyone summarily placed on a no-fly list would attest. But the government programs Snowden revealed operate over a terrain where mass (and mutual) surveillance is already the norm, the baseline, of social participation: the offering-up of ourselves, as surveillable subjects, through almost every online interaction, all organized by a tiny number of vastly powerful corporations.
Writer and theorist Rob Horning summarized the problem well in a 2016 note published in the New Inquiry:
Being watched qualifies us for the more specific forms of recognition that build our reputation and establish our economic viability. But the attention we experience as support and opportunity is also the data that sustains surveillance systems. We become complicit in surveillance’s productivity, tracking ourselves and others, recognizing each other within spaces of capture. We want to be seen and want to control how we are seen, but we accept that one can come only at the expense of the other.
The extent to which we truly “want” and “accept” these conditions is moot. While we are unquestionably active participants in upholding a state of surveillance, to suggest that we are therefore consenting would be to overstate our choice in the matter. We are not all inherently reliant, as a point of economic necessity, on surveillance-enabling devices and interfaces (although many workers, like Uber drivers and TaskRabbit cleaners, are). However, participation in surveillance systems is inescapable for those who abide by the social and economic spirit of the now, because these Silicon Valley–owned networks and interfaces have become the stage on which the social, the intimate and the commercial—even the political and the revolutionary—are enacted.
Paul Virilio, one of the most prescient thinkers of how technology (re)orders the world, talked about accidents. Each technology, from the moment of its invention, carries its own accident (its potential for accident)—it introduces a type of accident, a scale of disaster, into the world that had not existed before. The invention of a technology contains the invention of that technology’s severest accident, as well as its minor potential downsides and failings. As Virilio put it, “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution … Every technology carries its own negativity, which is invented at the same time as technical progress.” The fact that we presume innovative advancement without damage is hubris worthy of Icarus. As Virilio saw it, technology containing its own accident is “so obvious that being obliged to repeat it shows the extent to which we are alienated by the propaganda of progress.”
Virilio’s framework rejects a dim binary in which we must deem our current and developing technologies “good” or “bad.” Rather, it demands a constant ethical calculus: we must ask of a technological possibility what potential accidents it contains, and whether they are tolerable. It’s not always knowable, but it is always askable. And better asked sooner, rather than later. As Virilio pointed out in 1995, high-speed trains were made possible because older rail technologies had produced a form of traffic control that allowed trains to go faster and faster without risking disastrous collisions. The accident had been considered. But, “there is no traffic control system for today’s information technology,” he wrote then, and again, more recently: “We still don’t know what a virtual accident looks like.” But he knew this accident would be “integral and globally constituted”; such is the totality of online networks’ enmeshment with everyday life, all at instantaneous speed. “We are pressed, pressed on each other / We will be told at once / Of anything that happens”—so the poet George Oppen foresaw in his great 1968 work Of Being Numerous.
For its developers, advertisers and the government, surveillance is not an accident of social media. For many users—those seeking only a network of communication and recognition, plus a few well-targeted ads—surveillance here is a most tolerable accident of networked technologies. However, for some of us (and it should be more of us—I know I should feel this way more often, when instead I click on the recommended ad), it is an intolerable accident that we wish we could have foreseen. When I joined Facebook as an undergraduate student in order to keep in touch with school friends, I didn’t even think about it. And here, again, consent is moot: there is a small, elite regime of expertise when it comes to the techno-scientific knowledge necessary to analyze, before it is too late, the accident toward which we, as users, are hurtling. But of the Silicon Valley scions, it is fair to say: you knew, or should have known.
Enough guilt-ridden former Facebook and Apple programmers have come forward with tepid mea culpas about designing features aimed at triggering addictive behaviors and user reliance. Like a casino manager admitting that they keep out natural light and serve free coffee all night for a reason. And, as Horning also noted, there is a pleasure in giving over to algorithmically organized experiences, through feeds and curated ads and suggestions, which cater to and deliberate (for us) our desires: “Such platforms teach users helplessness. Staging information overload deliberately helps with the lessons. The point is to make the surrender pleasurable.”
Our engagement with the devices of the surveillance state goes deeper than the technological tools we use—indeed these are not simply tools, but apparatuses. In “What Is an Apparatus?,” Italian philosopher Giorgio Agamben argues that “ever since Homo sapiens first appeared, there have been apparatuses, but we could say that today there is not even a single instant in which the life of individuals is not modeled, contaminated, or controlled by some apparatus.” For Agamben, an apparatus is not simply a technological device, but “literally anything that has in some way the capacity to capture, orient, determine, intercept, model, control, or secure the gestures, behaviors, opinions, or discourses of living beings.” As such, a language is an apparatus as much as an iPhone. He wrote of his “implacable hatred” for cell phones and his desire to destroy them all and punish their users. But then he noted that this is not the right solution.
The “apparatus” cannot simply be isolated in the device or the interface—say, the smartphone or the website—because apparatuses are shaped by, and shape, the subjects that use them. Destroying the apparatus would entail destroying, in some ways, the subjects who create and are in turn created by it. There’s no denying that the apparatuses by which we have become surveillable subjects are also systems through which we have become our current selves, tout court, through social media and trackable online communication—working, dating, shopping, networking, archived, ephemeral and legible selves and (crucially) communities. A mass Luddite movement to smash all smartphones, laptops, GPS devices, and so on would ignore the fact that it is no mere accident of history that millions of us have chosen, albeit via an overdetermined “choice,” to live with and through these devices.