Introducing Free Speech Futures

The Knight Institute's second essay series asks leading scholars to think beyond existing First Amendment doctrine to imagine what freedom of speech could be in our current moment and our future. 

Freedom of speech stands at a crossroads. This much has become familiar learning. It is well understood that threats to free speech do not come solely from governments. We know that the media no longer acts as a reliable intermediary between information and those who would consume it. We know that social media platforms find themselves having to constantly negotiate between their role constituting the online public square and their status as lucrative for-profit businesses. It is clear to anyone paying attention that unregulated speech can be more difficult to hear; we might not wish to drink from a fire hose even if we’re thirsty. And it is obvious that these issues are not simply matters of private rights but also reach deep into geopolitics, global economic markets, and human freedom on a broad scale. The last decade has witnessed an explosion of sophisticated thinking and writing about these matters. It may seem as if there is little left to say.

The series that follows shows that impression to be mistaken. The prompt was an unusual one. Freedom of speech is of course structured not just by social media platforms and their users, or even by the web of statutes, regulations, and contracts that govern their behavior, but also by an ever-present constitutional regime. The First Amendment to the U.S. Constitution famously provides that “Congress shall make no law . . . abridging the freedom of speech,” which is broad enough on its own, but its reach extends well beyond the text of the Constitution. The First Amendment has been interpreted to protect the commercial speech of businesses, including purely mercenary marketing practices, about as robustly as it protects the street-corner preacher’s take on national politics. It has been interpreted to constrain the government’s ability to require platforms to restrict user speech on the basis of its content. The breadth of its canopy has structured campus life, leading to confrontations between unpopular speakers and those who would protest them, each side arming itself in the language of free speech.

This series puts First Amendment doctrine on the backburner. Or perhaps better stated, it denaturalizes First Amendment doctrine by returning, once more, to first principles. We have asked authors, and we are asking their readers, to try to conceptualize the challenges we face within the current and future information ecosystem without obsessing over the First Amendment as it has come to be understood by courts. This may seem a quixotic exercise, but it has a model. Conservative legal scholars in the 1980s held a different vision of constitutional law writ large than was then dominant at the Supreme Court. They did not argue, in the main, that their vision was consistent with or supported by modern doctrine, in the way of a good common law lawyer extending the cases in his or her preferred direction. Rather, they simply asserted that existing doctrine was not the law. There was a “lost” Constitution, a Constitution “in exile,” that, once discovered, could rebalance the law along preferred lines. Once conservative judges began populating the courts in greater numbers, they had an off-the-rack alternative to the common wisdom.

Thinking in these terms is not a concession to practicality or an admission of defeat in the courts. It is, rather, an affirmative recognition that constitutional law has always been dynamic, adjusting over time to the demands of mobilized social movements, technological change, and political imperatives. The First Amendment is no different. For example, the conventional wisdom has become that First Amendment doctrine doesn’t abide laws that discriminate on the basis of the content of speech, and yet as late as 1952, in Beauharnais v. Illinois, the Supreme Court upheld a “group libel” statute, what would today be called a hate speech law. In the span of four decades, the so-called “commercial speech doctrine” evolved from providing virtually no protection to providing nearly absolute protection for speech designed to sell goods and services. The law around electioneering speech has yo-yoed considerably in recent decades. Lawyers and activists within the conservative legal movement have understood, as their predecessors in the civil rights movement did, that constitutional law is not a random walk but rather is charted by human beings making intentional decisions about how to move the law.

And so imagine that you care deeply about freedom of speech. You might care for any number of (non-mutually exclusive) reasons. Perhaps you believe freedom of speech is vital to democratic citizenship or to the formation of a democratic culture. Perhaps you think personal expression is an element of personhood and therefore a precondition for human freedom. Perhaps you think freedom of speech is instrumental as much (or more) for listeners as for speakers, to aid in the search for truth or to promote a culture of tolerance. Imagine, though, that in addition to caring deeply about freedom of speech you also find other values compelling. Perhaps they are the very same values to which you view free speech as relevant or instrumental, but you think unregulated speech may threaten as much as facilitate them. Or maybe in addition to freedom of expression, you also believe strongly in substantive equality, or civility, or economic justice. Instead of fitting your web of value preferences into First Amendment doctrine, what if you engineered First Amendment doctrine to respond to those values? What would freedom of speech look like? Whom, or what, would it protect, privilege, or target? What would platform regulation look like? And what, if anything, are current models overlooking?

Some of the essays in the series take on the first, more conceptual set of questions. The first contribution, Jeremy Waldron’s A Raucous First Amendment, takes seriously the notion that free speech should be, in a word, “free.” We take as axiomatic the constitutional ideal that governments should not punish speakers or prevent their speech without an exceptionally good reason, but Waldron wants to focus our attention on freedom in a different sense: speech, he says, should be untamed, wild, boisterous, and raucous. The ideal speech environment is less the choreographed addresses we observe from an empty House of Representatives chamber on C-SPAN than the unscripted and intemperate interruptions of British MPs every Wednesday during Prime Minister’s Question Time in the House of Commons. Freedom of speech is not about speaking, Waldron emphasizes, but about engagement, and so a speaker who complains of being heckled by his audience generally must appeal to values outside of free speech itself.

There is an important lesson here for, among other things, controversies over campus speech. Freedom of speech is not the right to hold the floor; a provocative speaker invited to share his or her views with students should not—in the name of free speech—expect an orderly reception. At the same time, to the extent freedom of speech as an ideal of political morality is undisciplined in the way Waldron celebrates, its capacity to contribute to goals of truth-seeking or political equality may be compromised. The voices most worth hearing may lack the resources, the temperament, or the wherewithal to shout down their opponents. We are seeing rule by the boisterous all around us; if disfavoring it is wrong, we may not want to be right.

Such is the focus of Mary Anne Franks’s essay, The Free Speech Black Hole. Franks worries that the constitutional ideals of content neutrality and viewpoint neutrality that have come to characterize modern freedom of speech doctrine migrate too casually to the very different domain of platform regulation of online content. For Franks, freedom of speech doesn’t—and shouldn’t—necessarily mean freedom from censorship by platforms themselves, even on the basis of one’s views and even if platforms are powerful spaces for the exchange of ideas. Emphasizing that free speech has historically benefited the powerful to the exclusion of women, minorities, and the poor, Franks argues that the market power of social media platforms doesn’t mean, as some argue, that the same rules should apply to them as apply to the state. To the contrary, it means that they have the leverage and the freedom to experiment with more mindful, more egalitarian modes of speech regulation.

There is a tension here. The state action doctrine is an artifact of the very free speech doctrine that Franks wishes to criticize as protective of white male hegemony. Many historic victories for the cause of equality have come from courts that had the creativity and courage to pierce the veil of private action—from striking down enforcement of racially restrictive covenants in Shelley v. Kraemer to invalidating the transfer, in trust, of a segregated park to city officials in Evans v. Newton to disallowing trespass prosecutions against sit-in protestors in Bell v. Maryland and other cases. And so the equality argument for imploring Facebook, say, to act as a more responsible speech regulator should not, perhaps, rest on its status as a private company but rather on more subversive arguments against content neutrality more generally. How much censorship should a just First Amendment tolerate?

Tim Wu also questions the assumption that the First Amendment should view all claimants equally. In his essay, Beyond First Amendment Lochnerism, Wu takes aim at the use of the First Amendment to take a second bite at the apple after a loss in the political process. Wu’s paradigm case is Sorrell v. IMS Health, in which the Supreme Court struck down a Vermont law that restricted the transfer of physician prescription information to data miners who wanted to sell the information to pharmaceutical marketers. Drug companies vigorously opposed the law during legislative debates but, Wu argues, they lost fair and square. For Wu, where speech sits outside the First Amendment’s sacrosanct core of political expression, courts should apply a lighter touch to laws passed over the objection of well-represented parties.

Wu’s innovative proposal self-consciously evokes the constitutional theorist John Hart Ely, who argued that U.S. courts are best deployed to police breakdowns in the democratic process rather than to uphold substantive values. As Wu recognizes, the devil is in the details. How should judges assess when a litigant was well-represented in the political process? What kinds of legal interventions threaten core political values as opposed to other kinds of speech? And would Wu’s approach apply across the spectrum, to other well-resourced litigants such as, say, the ACLU, universities, or criminal defense advocacy organizations?

Wu and Franks are both asking important questions about how the law of the First Amendment should respond to how power is actually structured and exercised in the twenty-first century. In Keeping the New Governors Accountable, Victoria Baranetsky explores similar questions as they relate to transparency. We tend to take for granted that there should be some degree of public access to the mechanics of government decisions. The Freedom of Information Act is premised on that view, as are rights of public access to criminal trials. There is no similar right of access to the decisional algorithms and decisionmaking processes of media platforms and technology companies even though these companies wield enormous power over public life. They not only regulate public discourse but also, for example, structure financial transactions, assist in surveillance, and create sentencing algorithms. Drawing on John Dewey’s work on what later scholars have dubbed “technological transparency”—which emphasizes the importance to self-governance of knowing how technology functions—Baranetsky advocates mechanisms for accessing the internal processes of technology companies, at least when they interact with criminal justice or engage in other important public functions.

It certainly seems problematic for someone, say, to be incarcerated based on an algorithm the particulars of which remain opaque even to the judge, much less the condemned. Predictive justice algorithms can also reinforce racial or gender bias in disturbing ways. Still, one might question how far Baranetsky’s proposal can go. Trade secrets are important to innovation, just as deliberative secrecy may, in public settings, be important both to protecting sensitive information and to successfully negotiating among a complex set of interests. Transparency can gum up good government and good technology as much as it can expose pathological actors or unlawful conduct. How should we strike the right balance?

As Mike Ananny observes in Probably Speech, Maybe Free, the answer to this kind of question exposes probability as an underexplored logic of speech systems. Our comfort in relying on algorithms, whether to identify customers or recidivists, to deliver news or search results, or to recognize child pornography or a human face, depends on assessments of how probability relates to financial success, to the use of public power, and to human welfare. When are we comfortable relying on probability? What are the distributive effects of false positives and false negatives? Who gets to make the assessment, and who gets to challenge it?
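
A toy example, not drawn from Ananny’s essay, may make the distributive point concrete. Suppose a hypothetical moderation classifier assigns each post a probability of being harmful, and a single threshold decides what comes down. Moving the threshold does not eliminate error; it only reallocates it between lawful speech wrongly removed (false positives) and harmful speech wrongly left up (false negatives). Every post, score, and function name below is invented for illustration.

```python
# Toy illustration: a hypothetical moderation classifier scores each post,
# and a single threshold decides what is removed. All values are invented.

posts = [
    # (description, actually_harmful, model_probability_of_harm)
    ("harassing message",       True,  0.92),
    ("heated political speech", False, 0.71),
    ("satire of a politician",  False, 0.55),
    ("spam advertisement",      True,  0.48),
    ("vacation photo caption",  False, 0.05),
]

def error_counts(threshold):
    """Count false positives (lawful posts removed) and false negatives
    (harmful posts left up) when removal requires p >= threshold."""
    false_positives = sum(
        1 for _, harmful, p in posts if p >= threshold and not harmful
    )
    false_negatives = sum(
        1 for _, harmful, p in posts if p < threshold and harmful
    )
    return false_positives, false_negatives

for threshold in (0.3, 0.5, 0.7, 0.9):
    fp, fn = error_counts(threshold)
    print(f"threshold {threshold:.1f}: {fp} lawful removed, {fn} harmful kept")
```

Who chooses the threshold, and on whom each kind of error falls, are exactly the questions Ananny presses.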

The problem of freedom of speech in the digital age is a problem of scale—speech is enormously consequential, but it is offered, targeted, and countered on a scale too vast for human processing. Facebook’s armies of content moderators are a twentieth-century solution to a twenty-first-century problem, the equivalent of cavalry on horseback charging at aerial drones. Probability is a logic for taming scale, but what comes next? Ananny uncovers a rabbit hole and offers a sense of how far down it goes. But, like probability itself, his essay is diagnostic. As much as probability must be understood as a logic of speech systems, and must be interrogated to make them better, relying on probability is, in itself, neither good nor bad. It is indeed so ubiquitous that there is a danger that a deep exploration into probability becomes a “just so” inquiry with little prescriptive purchase.

That said, one prominent and obvious use of probability by social media platforms is in their business models, which rely largely on targeted advertising supported by data collection from users. These practices of course compromise user privacy as part of the bargain we make with the platforms we use, but, as Jeff Gary and Ashkan Soltani emphasize in their essay, First Things First: Online Advertising Practices and Their Effects on Platform Speech, the business model affects substantive content as well. The advertisements and other content platforms deliver to users are designed to generate engagement, and engagement is hardly content-neutral. The old adage that “if it bleeds, it leads” applies as much to the news, commercial products, and other targeted content on social media as to traditional tabloids. And so it will be difficult to police the proliferation of “fake news” or hateful content without directly addressing a business model that relies on this content to turn a profit (even if it does so entirely probabilistically, and therefore without conscious “intent”). Targeted content also encourages the proliferation of “filter bubbles” that some believe contribute to political polarization and radicalization.
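
The mechanism is easy to caricature in a few lines. The sketch below is hypothetical: the items, scores, and weights are fabricated, not any platform’s actual formula. It shows only that an objective built purely from predicted engagement ranks content without ever asking what the content says.

```python
# Stylized feed ranking: items are ordered purely by predicted engagement.
# Items, scores, and weights are fabricated for illustration only.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    predicted_clicks: float  # hypothetical model output, 0 to 1
    predicted_shares: float  # hypothetical model output, 0 to 1

def engagement_score(item: Item) -> float:
    # The objective asks only "will people engage?" -- never whether
    # the content is true, hateful, or polarizing.
    return 0.6 * item.predicted_clicks + 0.4 * item.predicted_shares

feed = [
    Item("Local zoning board approves budget", 0.10, 0.02),
    Item("Outrageous claim about rival group", 0.80, 0.65),
    Item("Study finds modest health benefit", 0.25, 0.10),
]

# The most inflammatory item rises to the top, with no "intent" anywhere.
for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):.2f}  {item.headline}")
```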

Gary and Soltani argue that traditional, “back end” forms of content moderation are doomed to fail. “Blacklisting” troublesome content providers won’t work in practice if those providers generate the engagements central to the platforms’ business. Direct content moderation faces the familiar challenges of trying to use artificial intelligence to make highly qualitative, contextual judgments at scale. Better, Gary and Soltani say, for Congress to enact privacy reforms (or the FTC to promulgate regulations) that restrict data collection or restrict the use of certain data to target content.

One wonders if any effort to separate platforms from their business models by fiat is quixotic. Although there is some evidence that producing targeted content isn’t really worth the candle, there’s a lot of risk in pursuing regulatory reform that relies on making this case to industry. Mike Masnick’s essay, Protocols, Not Platforms: A Technological Approach to Free Speech, offers an approach to content moderation that attempts to align the platforms’ incentives with social welfare. Masnick would return the internet to its roots in protocols, which are instructions, such as HTTP (Hypertext Transfer Protocol) or SMTP (Simple Mail Transfer Protocol), that can be tailored for users while preserving a compatible interface. HTTP is of course the basis for the World Wide Web. Much of what we today think of as the internet is accessed through private platforms such as Facebook and Twitter that are controlled by a single entity. This has a streamlining effect and has enabled the platforms to collect large amounts of data that they can then monetize, but it makes content regulation a nightmare. Masnick argues that a shift to more open protocols would engender competition for the kinds of feeds that are free of unwanted content. Dangerous or hateful content could be siloed into corners of the internet where it could do less harm, to the benefit of users.
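
To see what it means for a protocol to be open, consider a minimal sketch, not drawn from Masnick’s essay: the client below speaks raw HTTP over a socket, something anyone can do without permission from a gatekeeping platform. The host example.com is a stand-in; any HTTP server would do.

```python
# A protocol is just a shared set of instructions. Any client that can
# write these lines can talk to any server that understands HTTP --
# no platform, account, or proprietary API required.
import socket

# example.com is a stand-in host; any HTTP server would do.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    reply = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        reply += chunk

# Print the status line and headers of the server's response.
print(reply.decode("utf-8", errors="replace").split("\r\n\r\n")[0])
```

Because the protocol, rather than any one company, defines the interface, competing clients can offer different filters over the same underlying content, which is the dynamic Masnick hopes to harness.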

A protocols approach has much to recommend it. Many, including Facebook’s co-founder Chris Hughes, have argued that Facebook has “sacrifice[d] security and civility for clicks,” and should be broken up into smaller entities. Switching to a protocol-dominated internet could be a way to preserve the one-stop advantages platforms offer—and which an antitrust model jeopardizes—while minimizing the risks associated with massive data collection and monopolistic control, a universe in which a single individual controls Facebook, WhatsApp, and Instagram. Still, some caution is warranted. As Masnick recognizes, there is a risk that a protocols model exacerbates the filter bubbles that have been blamed, in part, for so much political disintegration. One might argue that some of the internet’s darker practices, such as doxxing, revenge porn, and violent threats, are even more dangerous or problematic if they happen without the target’s knowledge. Moreover, we shouldn’t underestimate the capacity of existing tech behemoths to streamline and monetize protocols just as effectively as they dominate the internet as is. Getting from here to there may no longer be feasible.

The final essay in the series ends, then, on a note of caution. Focusing on the special problem of “bot” regulation, Jamie Lee Williams worries that in the rush to fight the last war, we might overreach and forget the core values the First Amendment protects. While acknowledging that bots spread dangerous disinformation and can lead to real-world violence, Williams emphasizes that automated content generators are often benign or beneficial. Her main worry is that in adapting freedom of speech to the modern age, by, for example, denying “bots” the same protections we would give to “human speech,” we end up with censorship. She is especially wary of a model based on the Digital Millennium Copyright Act, which would require platforms to investigate user complaints within a limited time period to determine whether to take down alleged bots. The concern is that, as in the copyright space, the incentive to take down an alleged bot will be too strong if failing to do so places the platform in legal jeopardy. Williams would preserve the demanding but not fatal legal standard of “intermediate scrutiny” for efforts at bot labeling.

Williams’s essay can be taken as a broader caution that applies to the essay series more generally. The series adds to a wealth of innovative thinking about how citizens and their governments should respond to the many challenges posed by the modern speech environment. But it is too early, the essay suggests, to pen a requiem for the First Amendment. Free speech’s future will, no doubt, retain much of its past.

 

© 2019, Jamal Greene.

 

Cite as: Jamal Greene, Introducing Free Speech Futures, 19-01 Knight First Amend. Inst. (Aug. 21, 2019), https://knightcolumbia.org/content/introducing-free-speech-futures [https://perma.cc/P4FH-UKCZ].

Sorrell v. IMS Health, Inc., 564 U.S. 552 (2011).

U.S. v. Playboy Ent. Group, Inc., 529 U.S. 803 (2000).

343 U.S. 250 (1952).

Alexander Meiklejohn, The First Amendment Is an Absolute, 1961 Sup. Ct. Rev. 245; Robert Post, Democracy, Expertise, and Academic Freedom: A First Amendment Jurisprudence for the Modern State (2012).

Seana Shiffrin, A Thinker-Based Approach to Freedom of Speech, 27 Const. Commentary 283 (2011).

Abrams v. U.S., 250 U.S. 616, 630 (1919) (Holmes, J., dissenting); Lee Bollinger, The Tolerant Society: Freedom of Speech and Extremist Speech in America (1986).

334 U.S. 1 (1948); 382 U.S. 296 (1966); 378 U.S. 226 (1964).

564 U.S. 552 (2011).

See Adam Liptak, Sent to Prison by a Software Program’s Secret Algorithms, N.Y. Times (May 1, 2017), https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html.

See David Pozen, Transparency’s Ideological Drift, 128 Yale L. J. 100 (2018).

Chris Hughes, It’s Time to Break Up Facebook, N.Y. Times (May 9, 2019), https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html.

Jamal Greene is the Dwight Professor of Law at Columbia Law School and was the Knight Institute's second senior visiting research scholar in 2018-2019.