I was delighted to be asked by the Nathanson Center and Or’ Emet Fund at York University’s Osgoode Hall Law School to deliver the 2018 Or’ Emet Lecture. In the lecture I delivered a couple of weeks ago, I focused on questions arising from the privatization of the public square. In particular, I asked whether the law should afford special protection to journalism and research focused on the social media platforms—a question that my Knight Institute colleagues and I have spent a lot of time thinking about over the last few months. Many of the ideas I discussed in the lecture were developed with Alex Abdo, Carrie DeCell, and Ramya Krishnan—and Ramya worked with me on the lecture itself. If readers have reactions to the lecture, I’d love to hear them. I’m at jameel.jaffer@knightcolumbia.org.

Or’ Emet Lecture
Osgoode Hall Law School
Oct. 18, 2018

A few months ago, the Guardian published a remarkable story revealing that a Cambridge University researcher had harvested as many as 50 million Facebook profiles for Cambridge Analytica, a data analytics firm headed at the time by Steve Bannon, one of Donald Trump’s key advisors. The researcher, Aleksandr Kogan, collected the profiles with an app called “thisisyourdigitallife.” Through the app, he paid Facebook users small amounts of money to take personality tests and to consent to the collection of their data for academic purposes. Kogan then turned the profiles over to Cambridge Analytica, which used them, the Guardian said, to “predict and influence American voters’ choices at the ballot box.”

The Guardian’s story relied in significant part on the account of Christopher Wylie, a Canadian researcher who had helped establish Cambridge Analytica, and who later worked with Kogan on the Facebook project. Wylie told the Guardian, “We exploited Facebook to harvest millions of people’s profiles [and] built models to exploit what we knew about them and target their inner demons.” “That was [what Cambridge Analytica] was built on,” he said.

Most of you probably remember the Guardian’s story. You may not be familiar, though, with what happened the day before it was published. As the Guardian’s editors were readying their story for print, their lawyers received a letter from Facebook. The letter threatened a lawsuit if the Guardian went ahead with the story. Facebook knew the story would provoke disbelief and outrage, and perhaps even a regulatory response, so it tried to quash the story with the threat of a lawsuit.

There’s nothing unusual, unfortunately, about powerful actors threatening litigation to preempt unflattering news coverage. Intimidating missives of the kind that Facebook’s lawyers sent to the Guardian are a staple of the media sphere in the United Kingdom and in the United States—and here in Canada, too, I’m sure. But Facebook isn’t just any powerful actor. It’s one of the largest corporations in the world. It has more than 2 billion users, almost 200 million in the United States. And through its human and algorithmic decision-making, it has immense influence on how its users engage with one another and with the communities around them.

It’s no accident that stories about political polarization, filter bubbles, election integrity, the spread of disinformation online, voter suppression in Brazil, mob violence in Sri Lanka, and even ethnic cleansing in Myanmar have Facebook as their common denominator. Facebook has a powerful, if often invisible, influence on what we would once have called the public square. And through that influence, Facebook affects societies all over the world.

What are the mechanisms of this influence? In a new article, the legal scholar Kate Klonick argues that the social media platforms should be thought of as “systems of governance,” because they’re now the principal regulators of speech that takes place online. Through their control of the new public square, the platforms are exercising power we ordinarily associate with state actors.

One facet of this power—the facet that Klonick explores—is sometimes called “content moderation,” by which we usually mean the determination of which speech should be permitted on these privately controlled platforms, and which shouldn’t be. Facebook, for example, routinely removes user content that shows graphic violence or nudity, as well as hate speech (as Facebook defines it), and speech glorifying terrorism. The other major platforms have similar policies. There’s an ongoing debate about the ways in which the companies are exercising this power—whether they’re taking down too much speech, or too little, or perhaps even both.

In an essay published over the summer by the Hoover Institution, Daphne Keller observes that people from across the political spectrum are convinced that the platforms are silencing speech for the wrong reasons. This debate is important because increasingly it’s the platforms, rather than governments, that delineate the outer limits of public discourse—and also because the platforms’ power to censor isn’t subject to constitutional or regulatory restraint.

But content moderation, at least in the narrow sense of that phrase, hardly begins to describe the platforms’ influence. The social media companies’ more fundamental power over public discourse is reflected not principally in their decisions about which speech to exclude from their platforms, but in their decisions about how to organize and structure the speech that remains. They dictate which kinds of interactions are possible and which aren’t, which kinds of speech will be made more prominent and which will be suppressed, which communities will be facilitated and which will be foreclosed. If the new public square were an ocean, Facebook would control not only which fish got to swim in it but also the temperature and salinity of the water, the force and direction of the currents, and the ebb and flow of the tides.

But despite the singular role that the platforms now play, our collective understanding of them is limited, and we sometimes struggle even to describe what they are. When is a platform a publisher? When is it a common carrier? For the past year, the Knight Institute has been litigating a First Amendment challenge to President Trump’s practice of blocking critics from his Twitter account. The case turns on the question of whether the President’s account, @realDonaldTrump, is a public forum. The litigation has been a competition of analogies. Is the president’s notorious Twitter account like a town hall or a park, or is it more like a radio station or telegraph? Heather Whitney, a philosopher and legal theorist, has written a fascinating paper about whether the social media companies are properly thought of as editors. Her paper is further evidence that we’re still trying to figure out what the platforms are, and which legal labels to attach to them.

The U.S. Supreme Court recently decided an important case involving the First Amendment right to access social media. The decision was unanimous, but Justice Kennedy, who wrote the Court’s opinion, and Justice Alito, who concurred in it, disagreed about whether the platforms in their entirety should be characterized as public forums. At some level, both Justices seemed to understand that the conceptual vocabulary available to them was inadequate. Justice Kennedy wrote: “While we now may be coming to the realization that the Cyber Age is a revolution of historic proportions, we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be.” He continued: “The forces and directions of the Internet are so new, so protean, and so far-reaching that courts must be conscious that what they say today might be obsolete tomorrow.”

If our collective understanding of the platforms is limited, it’s in large part because the social media companies have guarded so jealously the information that would help us understand them. Over the last few years, they’ve begun to share information about instances in which governments compel them to remove user-generated content, or compel them to turn over users’ sensitive data. They’ve begun to share information about the limits they themselves impose on the kinds of content that users can post on their platforms. Just a few months ago, Facebook released the internal guidelines it uses to enforce its Community Standards—a release that marked what the Electronic Frontier Foundation described as a “sea change” in Facebook’s reporting practices. These disclosures are commendable, but they’re also overdue and incomplete, as EFF also observed. Perhaps under new pressure from the public, and from regulators around the world, the companies can be compelled to reveal more.

Let’s also recognize, though, that one reason the companies aren’t more transparent is that they themselves don’t fully understand the decisions they’re making. Sometimes this is by choice. The Guardian journalists who broke the Cambridge Analytica story spoke to an engineer who had once been in charge of policing third-party app developers at Facebook. The engineer said he’d tried to warn Facebook about the growing black market for user data. Facebook wasn’t interested in hearing about it, he said. He told the Guardian: “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening. They felt that it was better not to know.”

It would be a mistake to think that public ignorance in this context is simply a result of the companies’ refusal to release information. The information we most need, the companies don’t have. The companies are engaged in a massive social experiment with no precedent in human history. They rely increasingly on continuously evolving, machine-generated algorithms whose complexity defies understanding. They don’t understand their own platforms, and still less do they understand their platforms’ broader social and political implications.

When Facebook was asked about the reach of disinformation in the months preceding the 2016 presidential election, it first said that 10 million people had seen political ads posted by accounts linked to the Russian Internet Research Agency. Jonathan Albright, a researcher at Columbia’s Journalism School, then reported that six Russia-linked pages alone had reached 19 million people. Earlier this year, Facebook conceded that content posted by Russia-linked accounts had reached as many as 126 million people. If Mark Zuckerberg’s recent congressional testimony showed anything, it’s that Facebook is a black box even to Facebook. Zuckerberg himself has sometimes acknowledged this. In a recent interview, he conceded, “It’s tough to be transparent when we don’t first have a full understanding of … the state of some of the systems.”

Against this background, we should recognize that independent journalism and research about the social media platforms is extraordinarily valuable, and that obstacles to this kind of journalism and research are especially problematic. To the extent these obstacles impede us from understanding the forces that invisibly shape public discourse, they’re best thought of as obstacles to self-government. We should view them, I think, in much the same way we’d view laws preventing us from reading federal statutes, or attending congressional hearings, or accessing judicial opinions. They impede us from understanding the forces that govern us.
 

Last-ditch legal efforts to block stories before they go to print—like the letter that Facebook’s lawyers sent to the Guardian—aren’t by any means the only obstacles worth worrying about in this context. Perhaps the most significant impediments to journalism and research about the platforms arise from the companies’ terms of service, which bar journalists and researchers from using digital tools that are indispensable to the study of the platforms at scale.

Most significantly, all of the major social media companies bar users from collecting information from their platforms through automated means. Journalists and researchers can collect this information manually, but most of the platforms forbid them from collecting it using computer code. The prohibition is significant because it’s impossible to study trends, patterns, and information flows without collecting information at scale, and it’s practically impossible to collect information at scale without collecting it digitally. The effect of the prohibition is to make some of the most important journalism and research off limits. Some platforms, including Facebook, also prohibit the use of temporary research accounts—the kinds of accounts that could allow journalists and researchers to probe the platforms’ algorithms, and to better study issues of discrimination and disinformation. This prohibition, too, prevents journalists and researchers from doing the work we need them to do.

These impediments are substantial in themselves, but, in the United States, they’re made more so by a federal statute called the Computer Fraud and Abuse Act. The U.S. Justice Department understands that statute to impose civil and criminal penalties on those who violate the platforms’ terms of service. On the Justice Department’s reading, the law makes it a crime for a journalist or researcher to study the platforms using basic digital tools. The very existence of the law discourages journalists and researchers from undertaking projects that are manifestly in the public interest—projects focused on, for example, the spread of disinformation and junk news, political polarization, and unlawful discrimination in advertising. Anyone who undertakes these projects does so under the threat of legal action by the Justice Department and the platforms. When journalists and researchers do undertake these projects, they often modify their investigations to avoid violating terms of service, even if doing so makes their work less valuable to the public. In some cases, the fear of liability leads them to abandon projects altogether.

I want to acknowledge right away that the social media companies may have good reasons for generally prohibiting the use of these digital tools on their platforms. Facebook’s prohibition against automated collection is presumably meant in part to impede the ability of commercial competitors, data aggregators, and others to collect, use, and misuse the data that Facebook’s users post publicly. Facebook’s prohibition against the use of fake accounts reflects, in part, an effort to ensure that users can feel confident that other users they interact with are real people. Intentionally or not, though, Facebook is also impeding journalists’ and researchers’ ability to study, understand, and report about the platform.

It’s difficult to study a digital machine like Facebook without the use of digital tools. It’s like trying to study the ocean without leaving the shore.
 

I want to return to a point I made earlier—that we should think of independent journalism and research about the platforms as especially valuable, and that we should think of obstacles to these things as especially problematic.

Half a century ago the U.S. Supreme Court decided New York Times v. Sullivan, the landmark case that established crucial First Amendment protections that American publishers rely on even today. The case, as some of you may know, arose out of an advertisement in the New York Times that solicited contributions for “The Committee to Defend Martin Luther King and the Struggle for Freedom in the South.” The ad accused certain public officials in the American South of harassing civil rights activists and using violence to suppress peaceful protests. L.B. Sullivan, the public safety commissioner of Montgomery, Alabama, filed a libel suit against four of the clergymen who had signed the ad, and against the New York Times, which had published it. The case was tried in Alabama and the jury awarded $500,000 in damages, but the U.S. Supreme Court reversed.

In his opinion for the unanimous Court, Justice Brennan explained that the First Amendment was intended first and foremost to ensure the freedom of public debate on what he called “public questions.” Quoting an earlier case, he wrote, “The maintenance of the opportunity for free political discussion to the end that government may be responsive to the will of the people … is a fundamental principle of our constitutional system.” And in his opinion’s most celebrated passage, he invoked the United States’ “profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open.”

Justice Brennan was focused in particular on speech critical of government officials, because the case before him involved precisely that kind of speech. But the key insight behind his opinion has broader implications. The insight is that the First Amendment was intended most of all to protect the speech necessary to self-government. Or, as Harry Kalven Jr. put it in a now-famous essay published only months after the New York Times case was decided, the “central meaning” of the First Amendment is to protect “the speech without which democracy cannot function.”

Now, it hardly needs to be said that there’s a large distance between the question that was presented to the Court in the New York Times case and the questions I started off with today. But it seems to me that in an era in which the social media companies control the public square, in an era in which these companies are in a very real sense our governors, journalism and research focused on these companies implicates the very core of the First Amendment. Journalism and research that helps us better understand the forces that shape public discourse, and that helps us hold accountable the powerful actors that control those forces, is speech essential to self-government. It’s what the First Amendment is for.
 

I’d like to offer a few preliminary and tentative thoughts about what the companies, the courts, and legislatures could do to better protect the kind of journalism and research I’ve been describing, and more generally to ensure that the public has the information it needs in order to understand the new public square and hold accountable the companies that control it. If we were committed to these goals, what kinds of policy reforms might we ask for?

We might ask, first, that the companies be more transparent about the ways in which they’re shaping public discourse. David Kaye, the United Nations Special Rapporteur for Free Expression, issued a report several months ago urging the companies to disclose “radically more information about the nature of their rulemaking and enforcement concerning expression on their platforms.” In the report, which was filed with the UN Human Rights Council, Kaye recommended that companies issue public opinions when they remove content from their platforms, so that users and others can better understand why the content is being taken down, and so that they can challenge the companies’ decisions when those decisions are unjustified. These seem like good ideas to me. As Kaye says, “in public regulation of speech, we expect some form of accountability when authorities get it wrong; we should see that happening online, too.”

We need the platforms to be transparent, though, not only about what speech they’re taking down but about how they’re shaping the speech they’re not taking down. What kinds of speech and associations are they privileging and what forms does this privilege take? What kinds of speech and associations are they marginalizing, and what forms does this marginalization take? Kaye’s report had a different focus, but he highlighted the need for transparency about a specific form of content curation. “If companies are ranking content on social media feeds based on interactions between users,” he wrote, “they should explain the data collected about such interactions and how this informs the ranking criteria.” The more general point is that the companies should be more transparent about how they’re shaping public discourse. The companies should be more forthcoming about the forces at work in the new public square.

Here’s a second possible avenue for reform. We could ask the social media companies not to enforce their terms of service against those who use digital tools on their platforms in the service of bona fide journalism and research. Again, many of the questions that are most urgent right now are ones the companies themselves aren’t able to answer. The companies should be facilitating, not obstructing, the work of journalists, researchers, and others who might be able to provide answers the companies can’t. Even if the companies have legitimate commercial, privacy, or security reasons for generally prohibiting the use of certain digital investigative tools, it should be possible for them to create a kind of carve-out from their terms of service for public interest journalism and research—a carve-out or “safe harbor” that expands the space for these activities while protecting the privacy of users and the integrity of the platforms.

Incidentally, the data privacy regulation that went into effect over the summer in Europe supplies a conceptual framework for exactly this kind of safe harbor. As a general matter, the new regulation places significant restrictions on the use of digital tools to collect, analyze, and disseminate information obtained from social media platforms. But the regulation also encourages individual countries to exempt journalism and research from these general restrictions. A handful of countries, including the United Kingdom, have already recognized exemptions of this kind. European privacy legislation, in other words, distinguishes between those who use digital tools for private or nefarious purposes and journalists who use those tools to inform the public about matters of public concern. The companies’ terms of service should be similarly discerning. The companies can protect user privacy and platform integrity without categorically prohibiting digital journalism and research that is overwhelmingly in the public interest.

A third possibility, if the companies turn out to be unreceptive to the second. We could ask courts to refuse to enforce the companies’ terms of service against those who responsibly use digital tools in the service of bona fide journalistic and research projects. As a general matter, courts enforce contracts as written, and as a general matter they should enforce terms of service, too. But there are contexts in which courts decline to enforce contractual provisions that conflict with public policy. Often these cases involve contractual terms whose enforcement would disable democratic institutions or processes—for example, where a contract would prohibit a person from running for office, or from criticizing public officials, or from disclosing information of overriding public importance.

These cases are surely relevant here. For reasons I’ve already explained, journalism and research focused on the social media companies is of special democratic importance because of the unique role that these companies play in shaping public discourse. Terms of service that impede this kind of journalism and research are in tension with our commitment to self-government because, again, they impede us from understanding the forces that profoundly shape our interactions, our communities, and our democracy.

And here, finally, is a fourth possibility. We could encourage the U.S. Congress to amend the Computer Fraud and Abuse Act so that digital journalists and researchers can do their work without the risk of incurring civil and criminal penalties. Journalists and researchers who are investigating questions that implicate the very core of the First Amendment’s concern shouldn’t have to operate under the threat of legal sanctions.

Let me close by just bringing us back to the Guardian story I began with. That story, about Cambridge Analytica, is in part a reminder that the platforms are entirely justified in worrying about the ways in which their users’ data can be exploited by third parties. It should also be a reminder, though, of how reliant we are on the work of independent journalists and researchers. If we want to understand the platforms, if we want to understand the new public square, journalism and research about the platforms is crucial, and it deserves special solicitude under our law. Whatever free speech means in the digital age, it should encompass robust protections for this kind of public interest journalism and research.

Thank you to the Nathanson Center and the Or’ Emet Fund for inviting me, and thanks to all of you for being here.