Is “jawboning” a First Amendment issue or a state action issue? Sometimes known as “proxy censorship,” jawboning is a technique of pressuring private entities to take action that affects a third party’s speech. To understand the constitutional implications of jawboning, and the rules that ought to apply, we need to consider how it is framed.
First, if we think that the main problem with jawboning is that informal pressure produces discriminatory decision making about user speech, we might turn to the First Amendment test set forth in Bantam Books v. Sullivan. Bantam Books holds that a “system of informal censorship,” enforced by coercion rather than legal sanction, violates the First Amendment. Under Bantam Books and its progeny, jawboning claims typically focus on discrete government efforts to pressure an entity to take action that infringes on a third party’s expressive rights. For example, a federal advisory committee might urge a chain of convenience stores to stop stocking Penthouse magazine, a state attorney general might write to a department store warning that stocking Pride merchandise potentially violates state obscenity law, or a state insurance regulator might ask regulated entities to evaluate the “reputational risks that may arise from their dealings with the NRA or similar gun promotion organizations.” As Evelyn Douek rightly points out, most of the existing jawboning cases “focus on individual utterances and acts of government actors, and attempt to trace their impact on individual instances of speech.”
Alternatively, if we think that the primary problem with jawboning is that the state avoids accountability by relying on a system of informal, extralegal contacts, we might turn to the state action approach. The “close nexus” test for state action—initially set forth in Blum v. Yaretsky and relied on by the Fifth Circuit in Missouri v. Biden—tries to articulate when those informal relationships might create constitutional concerns. The test asks whether there is a sufficiently “close nexus between the parties that the government is practically responsible for the [nominally private] challenged decision.”
These two perspectives frame two competing approaches to resolving jawboning claims. Sometimes, though not always, the two approaches will yield different results. In particular, Bantam Books explicitly provided that government agencies need not “renounce all informal contacts” or “private consultation” with intermediaries to avoid First Amendment liability. But under the “close nexus” approach, the kinds of informal contacts and private consultations that Bantam Books explicitly anticipated might cumulatively create a “close nexus” that gives rise to liability.
The rise of the information economy strains this approach to jawboning. Internet firms have developed robust, complex, and dynamic rules and procedures for governing user speech. These rules and procedures often include both formal and informal mechanisms for repeated and routine collaboration with law enforcement and government agencies. To take just one example: For years, platforms have actively solicited government input into their content moderation practices, in part by partnering with law enforcement components known as “internet referral units” (IRUs). IRUs monitor the internet for illicit material and then use internal platform reporting tools to flag that information for company review. Some companies, including Google, YouTube, TikTok, and Facebook, voluntarily partner with IRUs and other government agencies as “trusted flaggers” whose requests get expedited or otherwise favorable treatment.
Under Bantam Books, these relationships are probably the kinds of “informal contacts” that create no First Amendment issues. At the same time, these entrenched and ongoing relationships permit state actors to leverage the infrastructure of private ordering in service of their own policy preferences and make it difficult to cleanly delineate between “private” decision-making and government pressure. Consider O’Handley v. Weber, in which the Ninth Circuit concluded that “Twitter's interactions with state officials did not transform the company's enforcement of its content-moderation policy into state action.” In analyzing the state action question, the Ninth Circuit found it dispositive that “Twitter acted in accordance with its own content-moderation policy,” a privately developed and imposed agreement between the company and its users, rather than acting as a “private enforcer” of government policy. Despite extensive interactions between California officials and Twitter employees, the policy’s formally independent and private character operated to short-circuit constitutional inquiry. In O’Handley, even an astounding rate of compliance with government requests—98 percent!—was insufficient to undermine Twitter’s independence.
What should we make of repeated contact between savvy, sophisticated entities like Twitter, Facebook, and Google and powerful government actors like federal and state officials? In an atmosphere of entrenched, ongoing, and pervasive public-private cooperation, the approach developed in Blum starts to look both more familiar and more appealing. When a company enters into a contract with the state, it is typically straightforward to impute constitutional liability to it: it acts under color of state law. But, as in the context of user speech, when a company has systemic, longstanding, and informal contacts with law enforcement, it is harder to identify where the appropriate line between “private” decision-making and public functions lies.
The rise of informal and collaborative governance mechanisms also makes Bantam Books look increasingly old-fashioned. As a result of privatization, outsourcing, and the rise of new, informal governance arrangements, private firms have grown increasingly embedded in public governance. Advocates of what is known in regulatory circles as “new governance” or “collaborative governance” stress its potential as a “dynamic, reflexive, and flexible regime” that is more open to political participation and input than top-down regulatory models have traditionally permitted. But the entwinement of private governance with public actors makes it genuinely difficult to differentiate between circumstances when a platform acts based on its own rules and when it might be acting as a handmaiden of the state. This problem is unique neither to content moderation issues nor to tech itself.
New governance arrangements have also sometimes permitted industry standards and private governance arrangements to displace public regulation and law as sources of authority and constraint. For decades, scholars of constitutional law and regulatory theory observing these dynamics have been troubled by the state action doctrine’s rigid, wooden distinctions between “private” and “public” in a regulatory environment increasingly characterized by privatization, collaboration, and informality. The state’s reliance on private actors to carry out basic governance tasks raises concerns about the loss of accountability and transparency when public functions are privatized or outsourced.
Compounding concerns about accountability and private control, platform-government relations extend far beyond any isolated decision on a piece of user-generated content and beyond the realm of content moderation policy writ large. Collaboration and cooperation are particularly common between the tech industry and law enforcement, national security, and immigration agencies. Consider, for example, how the tech industry works with government agencies involved with the detection and eradication of child sexual abuse material (CSAM). Many tech firms voluntarily monitor user content for CSAM using hash-matching databases. Though the scanning itself is voluntary, when a firm detects a match, federal law requires the firm to report it to the National Center for Missing and Exploited Children (NCMEC), a private organization funded by the government that, pursuant to statute, coordinates with law enforcement agencies that investigate and prosecute crime. Similarly, firms’ decisions to provide open access to Application Programming Interfaces and feeds of data have fueled law enforcement surveillance of social media; platforms’ aggregation of location data has proven to be a ripe target for government surveillance demands. Still others have engineered informal collaborations with law enforcement: For example, Amazon has worked with local police departments to help craft public relations campaigns to convince homeowners to purchase Ring surveillance cameras.
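For readers unfamiliar with how hash-matching works in practice, the scan-then-report pipeline can be sketched in a few lines of code. This is a deliberately simplified illustration, not any platform’s actual implementation: production systems use perceptual hashing tools (such as Microsoft’s PhotoDNA) that tolerate re-encoding and cropping, whereas this sketch uses an exact cryptographic hash, and the function names and database here are hypothetical.

```python
import hashlib

def matches_known_hash(content: bytes, known_hashes: set[str]) -> bool:
    """Voluntary scanning step: hash the upload and check it against a
    database of hashes of previously identified material. (Illustrative
    only; real systems use perceptual, not cryptographic, hashes.)"""
    digest = hashlib.sha256(content).hexdigest()
    return digest in known_hashes

def handle_upload(content: bytes, known_hashes: set[str]) -> str:
    """Hypothetical platform workflow: scanning is the firm's choice,
    but once a match is detected, federal law (18 U.S.C. § 2258A)
    requires a report to NCMEC."""
    if matches_known_hash(content, known_hashes):
        return "report_to_ncmec"  # mandatory once a match is found
    return "publish"

# Hypothetical database seeded with one known hash value.
known = {hashlib.sha256(b"previously-identified-file").hexdigest()}
print(handle_upload(b"previously-identified-file", known))  # report_to_ncmec
print(handle_upload(b"ordinary vacation photo", known))     # publish
```

The structure of the code mirrors the legal structure described above: the decision to scan at all sits on the voluntary, “private” side of the line, while the reporting obligation is imposed by statute once a match occurs.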
The reality is that, across many domains, modern law enforcement and governance have come to rely on data and information from platforms. In this broader regulatory context, the classic First Amendment test developed under Bantam Books is likely both too limited and too difficult to administer. In NRA v. Vullo and Kennedy v. Warren, respectively, the Second and Ninth Circuits adopted a four-factor test to determine when pressure crosses the line into unconstitutional coercion, analyzing the government actor’s “word choice and tone,” whether the government actor has “regulatory authority,” whether the speech was “perceived as a threat,” and “whether the speech refers to adverse consequences.” But Bantam Books, as well as Vullo and Warren, anticipate fairly arms-length relationships between the government actors and the recipients of their threats. As I hope I’ve shown above, this arms-length illusion is hard to sustain in an environment of widespread, continuous collaborative governance.
When government actors jawbone, particularly behind closed doors, they can avoid accountability: They forego the legal, political, and reputational costs of their chosen policy. For its part, the state action approach recognizes this reality and remedies it by imposing constitutional obligations. The reality, however, is that finding state action also has serious costs that the Fifth Circuit largely ignored. Most significant, platforms may simply choose not to cooperate voluntarily with state actors if they are saddled with constitutional obligations. In order to secure the genuine benefits of collaborative governance and cooperation, legislation and policymaking would be necessary. Though mandating cooperation on transparent and accountable terms would be a win for all interests, the political and legal hurdles can hardly be overstated.
Hannah Bloch-Wehba is an associate professor of law at Texas A&M University School of Law who teaches and writes on law and technology.