This talk was given on February 5, 2020, at Columbia University’s Italian Academy for Advanced Studies in America, during the Academy’s annual symposium marking Holocaust Remembrance Day. The event was titled “Misinformation, Media Manipulation, and Antisemitism.” Many of the ideas I discussed were developed with my colleagues Jameel Jaffer, Ramya Krishnan, and Carrie DeCell, and the talk drew on a lecture given by Jameel on October 18, 2018.

 

Good evening. Thank you so much to Barbara for inviting me. It’s a privilege to be with you tonight to mark Holocaust Remembrance Day, and to discuss the sobering topic of antisemitism in the digital age.

Let me start with a disclaimer. I am an observer of the continued spread of antisemitism in our society and in our world, but I am not an expert in it (and for that reason, I am grateful for the earlier presentations and have learned from them). My specialty instead is free speech, and the particularly robust protection of free speech afforded by the First Amendment to the United States Constitution.

The free-speech tradition in the United States has a long and complicated relationship with antisemitism and other forms of hate. The most famous episode in that drama is probably familiar to you—the case of the neo-Nazi marchers in Skokie, Illinois. Even if you’re not familiar with it, the case still marks a major fault line in public sentiment toward free speech today. In 1977 and 1978, Frank Collin, the neo-Nazi leader of the National Socialist Party of America, tried to organize a march of his fellow neo-Nazis in the Chicago suburb of Skokie. Skokie had (and still has today) a very large Jewish population, and in the 1970s it was home, in particular, to many survivors of the Holocaust. You can imagine how that community felt about the neo-Nazis’ planned march.

The Village of Skokie responded by enacting legislation designed to shut the march down, including a bill outlawing the dissemination of speech inciting racial or religious hatred. In the lawsuits that followed, the American Civil Liberties Union (where I used to work) represented Frank Collin in challenging Skokie’s new ordinances and defending Collin’s right to march. The courts ultimately sided with Collin, holding that he had a constitutional right to lead a non-violent march of neo-Nazis through the streets of Skokie.

The planned march in Skokie never actually took place, but the case has come to symbolize the American approach toward hateful speech. With very few and very narrow exceptions, the First Amendment to our Constitution has been interpreted to protect hateful speech. There are many reasons that the Supreme Court and free speech advocates have offered in defense of the American approach. Among them are (1) the concern that a government empowered to silence hateful speech would turn its censors on traditionally marginalized voices, (2) the theory that hateful views are better countered through exposure and engagement than suppression, (3) the argument that the expression of hateful ideologies gives us important insights into the adherents of those ideologies, and (4) the hope that modeling tolerance of hateful individuals will, in time and perhaps across generations, desensitize us to their views and defang their assaults.

Much could be said, of course, in support of or in opposition to these defenses of the American free-speech tradition. There are thoughtful critiques of our capacious right of expression that highlight its substantial costs, and thoughtful justifications of it that are clear-eyed about those costs. And there are of course many democracies that outlaw hate speech but that most of us would still consider to be fundamentally democratic. But tonight, I want to steer clear of that debate—over the necessity or wisdom in a democracy of allowing hateful speech.

The important observation for my purposes is that, for decades, we’ve lived with the robust protection of free speech and, over those decades, developed strategies for how—in a society that allows hateful speech—to do our best to contain it. We respond to hateful demonstrations with powerful counter-demonstrations. We meet public messages of hate with near-universal condemnation. Sociologists and other researchers study the effectiveness of these and other tactics. And we iterate, improving over time, or so we hope, our collective effort to marginalize or eradicate hateful views.

This is not a raging success story, and I don’t mean to pretend otherwise. Hate persists in our society, as it does in every other. It demeans its targets, and implicates us all, often driving unique perspectives and voices out of the public conversation. And in this particular moment in our country’s history, hate has regained very vocal propagators in positions of great power.

But, at least up until the digital age, we seemed to understand the basic contours of our system of speech and counter-speech, of protest and counter-protest, of hate and condemnation. And we understood those contours in part because speech in the pre-digital age—and the rules that governed its delivery and spread—were more or less visible. We could study public discourse and reason about the influences on it. We had a general sense of the distribution of major publications, of major radio and television transmissions, and of major advertising campaigns. To say what is obvious, the pre-digital speech environment was decidedly more analog, and so the influences on what we could say and what would be heard seemed decidedly less obscure.

This account glosses over some complexities and nuance, to be sure, and I expect that experts in media and communications would point out flaws in some of my premises. I’m sure all would agree, though, that whatever we thought we understood about public discourse in the pre-digital age was upended entirely by social media.

What’s different? Why do we have such a hard time understanding public discourse and its pathologies today, in the digital age? The answer, at least in part, is that more and more, our exercise of free speech takes place within black boxes controlled by social media companies.

Let’s begin with the obvious point that social media is now one of the most important venues—if not the most important venue—for public discourse today. Take Facebook. Two billion people around the world and 200 million in the United States use Facebook to get their news, debate policy, join movements, and connect with friends and family. Facebook’s platform is the means by which human relationships are formed and maintained, by which we attempt to learn from or educate others about ourselves and the world. Facebook is not just a place to exchange media in a social manner; Facebook is an entirely new social medium. It is the literal substrate of many of our social interactions.

But Facebook and other social media services are not neutral substrates for our speech. They are not just platforms that carry our messages to their intended recipients, agnostic as to the speakers, the listeners, and the messages. They concern themselves with all three.

Let’s stick with Facebook as our example. Facebook moderates the speech on its platform in at least three ways. First, it has rules about who can speak (or listen) on the platform. They’re fairly permissive rules—just about anyone can sign up for a Facebook account—but if you violate them, Facebook might kick you off. This is what happened last year to the extremist provocateurs Milo Yiannopoulos and Alex Jones, among others.

Second, Facebook decides what kind of speech is permitted on its platform. It has community standards forbidding hate speech, bullying, harassment, and the like. If Facebook detects this speech on its site, it will generally delete or suppress the offending posts. As of about a year ago, some 15,000 content moderators worked to implement Facebook’s community standards, manually analyzing post after post flagged for review. (Once it is up and running, Facebook’s new Oversight Board will rule on appeals and referrals from these kinds of moderation decisions.)

Finally, Facebook structures the speech that remains on its platform. It decides whether and how its users can interact, and it decides whether and when to amplify or suppress each form of speech posted on the platform. This is the most significant control Facebook exercises over the public discourse that it hosts, and it’s worth reflecting on just how significant it is.

In the physical world—in this room for instance—the rules that determine how we can interact and whether we’ll hear each other are, for the most part, the laws of physics. If I speak quietly, you won’t hear me. If someone in the seat next to you screams, you’ll hear them and not me. You allocate your attention by, for example, physically showing up here and not somewhere else. There’s little mystery in how these rules work.

On the social media platforms, though, it is generally the companies that decide who hears what. In a 2018 interview, the director of analytics for Facebook’s News Feed estimated that, every day, the average user’s News Feed is populated with about 2,000 stories. But the average user doesn’t scroll through all of those 2,000 stories—they scroll through only about 200. That’s 10%. And the essential point is this: Facebook decides which 10% of the stories in your feed you see, because Facebook decides the order in which they appear.

Now, imagine that I delivered this presentation, not by talking into this microphone, but by typing into Facebook. Whether you ever saw my posts in your News Feed would be up to Facebook to decide. The laws of physics would not govern the space between us. Facebook would. Which is why earlier I referred to Facebook as not just a neutral carrier of our speech, but as an entirely new medium for social interaction—a social medium—with rules and design decisions that shape our conversations in the digital world just as the laws of physics shape them in the physical world.

This form of control over public discourse—amplification and suppression—is the most potent that Facebook and other social media companies wield, but it is also the least well understood.

Facebook decides which information you see, and the order in which you see it, by relying on an array of ever-changing algorithms. These are the black boxes of the social media platforms. The algorithms are opaque—even to Facebook—because they rely on a form of black-box computation called “machine learning,” in which the algorithms train themselves to achieve goals set by their human programmers. In the case of Facebook, the machines are generally programmed to maximize user engagement—to show you whatever will make you click another time, stay a little longer, and come back for more.
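
To make that concrete, here is a minimal, purely hypothetical sketch of engagement-driven ranking, written in Python. It is not Facebook’s actual system, whose details are not public; the Post records, the scores, and the 200-of-2,000 scroll budget are stand-ins drawn from the figures above. The point is only that whatever the scoring model learns, the sort order it produces determines which small slice of candidate stories a user ever sees.

```python
# A minimal, hypothetical sketch of engagement-driven feed ranking (not Facebook's
# actual system). A model's predicted engagement score orders the candidate stories,
# and a user who scrolls through only part of the feed sees just the top of that order.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Post:
    post_id: int
    predicted_engagement: float  # stand-in for a model's estimate of click/comment likelihood


def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts so the ones predicted to be most engaging appear first."""
    return sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)


def visible_posts(ranked: list[Post], scroll_budget: int = 200) -> list[Post]:
    """Of roughly 2,000 candidate stories, a user who scrolls past ~200 sees only this slice."""
    return ranked[:scroll_budget]


if __name__ == "__main__":
    # Fabricated candidates with arbitrary scores, just to exercise the ranking.
    candidates = [Post(i, predicted_engagement=(i * 37 % 100) / 100) for i in range(2000)]
    seen = visible_posts(rank_feed(candidates))
    share = len(seen) / len(candidates)
    print(f"{len(seen)} of {len(candidates)} candidates shown ({share:.0%}), chosen by the ranking alone.")
```

Swap in a different scoring function and a different ten percent surfaces; that, in miniature, is the control being described.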

This kind of machine learning can produce unintended effects. In the past three years, many have wondered whether Facebook’s algorithms have deepened political divisions and promoted the spread of hate, misinformation, and propaganda. Do Facebook’s algorithms show some ads to progressives and others to conservatives? Do they prioritize salacious conspiracy theories above genuine reporting? In trying to maximize user engagement, do Facebook’s algorithms maximize user outrage?

The answers to these questions and related ones are indispensable to understanding the ways in which the social media companies are shaping public discourse and facilitating the spread of harmful speech—including antisemitic hate. The answers are especially vital if we hope to disrupt the cycle of hate and division and propaganda that Facebook’s algorithms appear to fuel.

Unfortunately, we are far from answering these questions.

Not even the social media companies have the answers we need. Part of the difficulty is that the machine-learning algorithms the companies rely on to decide which news stories or posts you’ll see, and which you won’t, are exceedingly difficult to study directly. These algorithms work by processing training data to develop internal models on which to base future predictions. Those internal models are obscure, though, and their effects on public discourse can’t readily be discerned just by looking at them.

The more prevalent method of probing machine-learning processes is to study inputs and outputs. If we know, for example, how Facebook sorts the News Feeds of two individuals who are similar in most relevant respects (with the exception of, say, their races or religions or political commitments), then we might gain a better understanding of what’s going on in the algorithmic black boxes, even if we can’t actually see inside.

To its credit, Facebook at one point committed to enabling research of this type through a partnership with the Social Science Research Council, but that partnership appears to have dissolved after Facebook failed, more than eighteen months into the project, to deliver the data it had promised to researchers.

Even before its dissolution, however, Facebook’s partnership with the Social Science Research Council was at best a partial solution. It focused narrowly, at least in its initial scope, on research related to election integrity. It accepted requests for access to data by researchers, but not by journalists. And it contemplated only credentialed research, rather than fully independent investigations of the platform.

There are many journalists and researchers—hundreds, actually—who independently investigate the social media platforms and their often invisible influence on public discourse. But their work is hobbled by the terms of service of the major platforms, which prohibit the sort of digital investigations needed to study the platforms at scale.

Most significantly, all of the major social media platforms prohibit the collection of public data from their sites using automated means. For instance, if researchers wanted to study the role of public officials in spreading hate through their social media channels, a reasonable first step would be to collect the public postings and advertisements of those officials from their social media accounts. They could collect that data manually, but the companies’ terms of service would bar them from collecting it automatically, using computer code. This prohibition effectively prevents that sort of research, because it is impossible to study trends, patterns, and information flows without collecting information at scale, and it’s practically impossible to collect information at scale without collecting it digitally.
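
For illustration only, here is a rough sketch of what collecting that data digitally might look like. The URLs and the “.post-text” selector are invented placeholders rather than any real platform’s markup, and the example is not an endorsement of violating a site’s terms; it simply shows that a short script can gather public posts at a scale no manual effort could match.

```python
# Illustrative sketch of automated collection of public posts. The URLs and the
# ".post-text" selector are hypothetical placeholders, not a real platform's markup.
from __future__ import annotations

import requests
from bs4 import BeautifulSoup

# Hypothetical public pages listing an official's posts.
PUBLIC_PAGES = [
    "https://example.com/officials/official-1/posts",
    "https://example.com/officials/official-2/posts",
]


def collect_public_posts(urls: list[str]) -> list[str]:
    """Fetch each public page and pull out the text of every post it displays."""
    posts: list[str] = []
    for url in urls:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        posts.extend(tag.get_text(strip=True) for tag in soup.select(".post-text"))
    return posts


if __name__ == "__main__":
    collected = collect_public_posts(PUBLIC_PAGES)
    print(f"Collected {len(collected)} public posts for analysis.")
```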

Some social media platforms, including Facebook, also prohibit the creation of temporary research accounts. These kinds of accounts would allow researchers to study Facebook’s machine-learning amplification and suppression of speech, by crafting controlled inputs and then studying the outputs. A researcher could, for example, create two accounts with profiles identical in every way except their apparent race, and then study how that isolated difference affects Facebook’s sorting of the accounts’ news feeds, or any of Facebook’s other machine-learning–driven recommendations.
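
As a hypothetical sketch of that kind of controlled comparison, suppose researchers could lawfully create such matched test accounts and record the ordered feeds each one is shown. The fetch_ranked_feed function below is a placeholder for that data-gathering step, which Facebook’s terms currently foreclose; the audit itself is then just a comparison of outputs, holding everything constant except one profile attribute and measuring how much the two feeds diverge.

```python
# Hypothetical sketch of a paired-account audit. fetch_ranked_feed() stands in for
# however a researcher would record what the platform shows each matched test account;
# the audit logic itself is just a comparison of the two recorded feeds.
from __future__ import annotations


def feed_overlap(feed_a: list[str], feed_b: list[str], top_n: int = 50) -> float:
    """Share of the top-N items shown to account A that were also shown to account B."""
    top_a, top_b = set(feed_a[:top_n]), set(feed_b[:top_n])
    return len(top_a & top_b) / max(len(top_a), 1)


def run_audit(fetch_ranked_feed, profile_a: dict, profile_b: dict) -> float:
    """The two profiles are identical except for the single attribute under study."""
    feed_a = fetch_ranked_feed(profile_a)  # ordered list of item IDs shown to account A
    feed_b = fetch_ranked_feed(profile_b)  # ordered list of item IDs shown to account B
    return feed_overlap(feed_a, feed_b)


if __name__ == "__main__":
    def fake_fetch(profile: dict) -> list[str]:
        # Stand-in data; in a real audit this would come from the matched accounts' feeds.
        start = 0 if profile["group"] == "A" else 25
        return [f"post-{i}" for i in range(start, start + 100)]

    overlap = run_audit(fake_fetch, {"group": "A"}, {"group": "B"})
    print(f"Top-50 overlap between the two matched accounts: {overlap:.0%}")
```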

These prohibitions in the social media companies’ terms of service suppress research and journalism crucial to understanding the ways in which the companies are shaping society. Journalists and researchers who violate these terms risk legal liability. Most directly, they risk being sued for breach of contract. But they also risk civil and even criminal liability under the Computer Fraud and Abuse Act. That law was originally enacted in 1984 to make malicious hacking illegal, but it has been interpreted by the Justice Department, by some of the social media companies, and by some courts to prohibit violations of a website’s terms of service. On this interpretation, it would be a crime for journalists and researchers to study the social media platforms using the basic tools of digital investigation. Journalists and researchers are understandably chilled by that possibility. Some have forgone or curtailed investigations of social media platforms for fear of liability. And some have been told by the companies to discontinue digital investigations that the companies said violated their terms.

It’s important to recognize that the social media companies may have very good reasons for banning the use of these sorts of investigative tools generally. Companies that make it easy for millions or billions of people to share sensitive information online should do what they can to prevent other companies or malicious actors from scooping up that information for their own, private purposes. And at a time when foreign governments are attempting to distort our political discourse by sowing distrust online, it’s understandable that Facebook wants to ensure that its users can trust that the other users they interact with are real people.

But because the social media companies’ policies do not distinguish between nefarious uses of these tools and salutary ones, the policies apply equally to foreign state meddling in our elections, and to public-interest research designed to expose that foreign state meddling.

The reality is that it’s possible to distinguish between these good and bad uses of the basic tools of digital investigation. We at the Knight First Amendment Institute proposed that Facebook do just that, by amending its terms of service to create a “safe harbor” for journalism and research that would serve the public interest, even if it violated Facebook’s restrictions on automated collection and research accounts. More than two hundred researchers from around the world endorsed the proposal, noting: “There is extraordinary public interest in understanding the influence of social media platforms on society. Yet Facebook’s prohibition on the use of basic digital tools obstructs much of the work that could be done to understand that influence.”

Unfortunately, Facebook has rejected our proposal, choosing to continue to disallow journalism and research that would benefit the public.

And so this is the current state of affairs: the social media companies have enabled new and rich forms of social interaction, but they are also shaping those interactions in ways we do not understand, with effects on society that we grasp only in part. Their algorithmic amplification and suppression of speech appears to be deepening our political divisions, fueling hatred, and spreading antisocial ideologies. The evidence of this is still incomplete, however, as are the explanations of precisely how and why it may be occurring—and what changes the companies or Congress could make to combat it.

These obstacles to understanding are also obstacles to action. When it became clear in the 1970s that neo-Nazis had a constitutional right to march in the streets of Skokie, the resistance to their hateful message mobilized, and the rules of engagement between demonstrators and counter-demonstrators were clear. It was a contest of ideas playing out on familiar terrain. Today, as misinformation and hate flourish online, the terrain and the rules that apply to it are, at best, unfamiliar and, at worst, tilted in favor of the sensational and the divisive.

If we’re serious about addressing the pathologies of our new social medium, we need to better understand the rules that govern it.