On July 4, Judge Terry A. Doughty issued a sweeping preliminary injunction restricting government officials’ communications with social media platforms. The court’s order in Missouri v. Biden is extremely broad. It bars the executive branch from engaging with social media platforms on almost any issue related to content moderation, with some exceptions for criminal conduct, national security, foreign attempts to influence elections, and similar risks to public safety. While the court’s opinion reads as a free speech paean, it fails to provide a coherent explanation of how the government’s actions–a series of public statements and private communications between federal officials and the platforms that span the Biden and Trump presidencies–violate the First Amendment. There are lines that could potentially be drawn to guide these interactions, but this opinion doesn’t provide them. Instead, it hamstrings the government’s ability to communicate and act on pressing issues of public importance without offering any real guidance for identifying the supposed coercion that infringes the Constitution.
Challenges to Government “Jawboning”
This case is the latest in a series of court challenges to alleged government “jawboning,” a term for informal efforts by government officials to persuade or pressure private entities, here in connection with speech. Jawboning sometimes carries an express or implied threat of regulation or other adverse consequences if the entities don’t comply with the government’s requests.
But not all government speech directed at private parties raises constitutional concerns, nor should it. The government needs to speak–including to private actors–in order to govern. Courts have recognized this interest through the government-speech doctrine, which gives government officials the latitude to decide which statues to display in a public park, or, to use a hypothetical cited by the Supreme Court, to create and distribute millions of pro-war posters during World War II without an obligation to “balance the message” with millions of anti-war posters. Imagine if the government couldn’t reach out to a (non-satirical) newspaper that published an inaccurate story declaring that the president was giving out free puppies to every U.S. household. It surely can’t be a violation of the First Amendment for the government to call out this falsehood.
At the same time, the government shouldn’t be able to circumvent First Amendment protections by using informal means to coerce platforms to suppress or remove speech the government doesn’t like. For instance, in Bantam Books v. Sullivan, the Supreme Court held that a state commission violated the Constitution when it sent notices to book distributors threatening prosecution unless they removed from circulation books it deemed offensive and obscene. The Court found that the letters were “informal censorship” intended to intimidate rather than persuade the distributors to comply with the commission’s request. Likewise, in Backpage v. Dart, the Seventh Circuit held that a sheriff’s efforts to shut down a website’s adult section also violated the First Amendment. The sheriff had sent letters to Visa and Mastercard demanding that they cease and desist allowing their credit cards to be used to place any ads on the site. The letters referenced potential criminal liability under a federal money-laundering statute and implied that the companies could be prosecuted for their role in processing payments associated with illegal sexual activities. These decisions make intuitive sense: the government shouldn’t be able to use threats to strong-arm third parties into stifling protected expression.
The difficult question is when government efforts to persuade speech intermediaries (third parties who publish or distribute the speech of others, like websites or bookstores) similarly cross this line into constitutionally impermissible coercion. Recently, the Second and the Ninth Circuits took steps to answer that question, looking to factors like the government actor’s word choice and tone, the presence of regulatory authority, the recipient’s perception of the government’s activities, and references by the government actor to adverse consequences for non-compliance. But this list is far from exhaustive, and some of its factors are ambiguous. For instance, should a threat be more or less concerning to us when the official making the threat lacks regulatory authority? Should the recipient’s perception of the government’s action matter, if this test measures how coercive a communication is, rather than its effects? If anything, these decisions make clear that many difficult line-drawing questions remain.
Ultimately, Judge Doughty’s opinion does nothing to answer these difficult questions. It describes a variety of activities that allegedly contravene the First Amendment, including:
- Meetings and communications between government officials and platform employees to discuss the platforms’ enforcement of content moderation policies related to COVID-19 (e.g., meetings between State Department leaders and social media companies to discuss “tools and techniques” to stop the spread of disinformation, and to compare the prevalent foreign propaganda and information each side was seeing on the sites);
- The use of “trusted partner” portals created by the platforms, which sought government assistance in identifying violations of the platforms’ content policies (e.g., Twitter offering to enroll Centers for Disease Control and Prevention (CDC) officials in a portal it had created to identify inaccurate content);
- Public and private statements ranging from general condemnation of the platforms’ failure to adequately address misinformation (e.g., White House Press Secretary Jen Psaki’s statement that the administration wanted “every platform to continue doing more to call out mis- and disinformation while also uplifting accurate information”) to private emails identifying specific accounts and posts that spread inaccurate information the government wanted removed (e.g., after Twitter reached out to the CDC to ask if an account purporting to be Anthony Fauci was “real or not,” a National Institutes of Health official responded “Fake/Imposter handle. PLEASE REMOVE!!!”).
But the court’s mere recitation of this laundry list of communications between government officials and platform employees doesn’t provide the platforms or elected officials with any guidance on what the limits of their interactions should be. To be sure, some of the interactions described in the opinion do seem troubling. It’s not clear, for example, what legitimate interest the Biden administration had in pressuring Twitter to take down a parody account purporting to belong to one of President Biden’s relatives. (Note: it’s not completely clear from the record whether the content at issue was strictly parody, or whether it involved impersonation, which violates many platforms’ own, independently developed policies.)
But other entries on Judge Doughty’s list don’t seem nearly as troubling on their face; to the contrary, they seem merely to reflect responsible efforts on the part of private platforms to ensure, in a period of confusion, that their content-moderation practices were defensible in light of a global pandemic. For instance, when COVID vaccines became publicly available, Facebook content-moderation officials reached out to CDC employees to understand whether posts about the vaccines were accurate–particularly given the enormous reach of vaccine-skeptical content on the platform–as part of the site’s misinformation policy efforts. (It’s also true that around this time the Biden administration spoke out about the prevalence of inaccurate COVID information, both publicly and in communications with the platforms.) Because Facebook had a policy of removing false claims that could lead to harm, it asked the CDC for guidance on whether specific health claims circulating on the site were false: e.g., that “spike proteins in the COVID-19 vaccines are dangerous.” The CDC provided answers to the questions Facebook posed, which in turn allowed the platform to decide which posts violated its policy. The platform’s outreach seems less consistent with government coercion and more consistent with a platform perceiving the government as a reliable source of health information that could be used to enforce its policies, and then relying on that source.
Grappling with the Difficult First Amendment Questions
A serious exploration of the First Amendment questions raised by interactions between online platforms and the government could get courts and the public closer to determining when government speech crosses the line into coercion. For instance, should it matter how specific the government’s request is? A court might treat general statements, like White House Press Secretary Psaki’s remark that “the president’s view is that the major platforms have a responsibility related to the health and safety of all Americans to stop amplifying untrustworthy content, disinformation, and misinformation, especially related to COVID-19 vaccinations and elections,” differently from specific requests, like Deputy Assistant to the President and Director of Digital Strategy Rob Flaherty’s request that Twitter “please remove this account immediately.”
The targeting of a specific user or specific post might be viewed as especially troubling; are courts, officials, and everyday constituents comfortable with the government telling a company to “remove this specific post” or “delete this particular user’s account”? Is it better if a government statement regarding platform behavior is made publicly, where it’s more visible to the electorate and carries the risk of political repercussions, but also potentially more impactful given its ability to shape public perception of the platforms?
Or should private communications between government officials and platforms draw more scrutiny, since they are more likely to leave users in the dark about why their content was removed or flagged, and insulate the government from public backlash? If the government encourages platforms to remove content that violates the platforms’ own, independently created policies (e.g., Google’s various misinformation policies), are its communications coercive? In many of the incidents identified in the suit, officials pointed out violations of the platforms’ existing COVID-19 misinformation and civic integrity policies or sought clarification on those policies. Often, the platforms voluntarily solicited that government input. Can those interactions really be coercive? What if a platform acted incredibly quickly to take down posts that the government characterized as misleading or false? Would that undercut the idea that the platform was following its own policies or exercising its own editorial judgment? Judge Doughty’s opinion sweeps broadly and leaves all these difficult, but critically important, questions unanswered.
There’s also some evidence in the opinion that the free speech implications of the government’s interactions with the platforms might not be as dire as the court suggests. The platforms often rejected requests from the government to take action regarding particular content. In the election disinformation context, platforms complied with 50 percent of takedown requests from the FBI and 35 percent of requests made by the Election Integrity Partnership (a non-partisan group led by the Stanford Internet Observatory and the University of Washington Center for an Informed Public). These statistics suggest, at least, that the platforms were exercising independent judgment.
Moreover, Judge Doughty implicitly acknowledges that government communications with platforms regarding content moderation issues are necessary in some contexts. His order carves out exceptions for “notifying social-media companies of national security threats” and “foreign attempts to influence elections,” for “informing social-media companies of threats that threaten the public safety,” and for “exercising permissible public government speech.” Yet the accompanying opinion fails to explain why the injunction wouldn’t allow many of the interactions Judge Doughty sees as problematic, such as identifying COVID misinformation that would “threaten the public safety.”
The interactions at the heart of Missouri v. Biden implicate many different speech interests: those of the platforms, independent entities researching misinformation, the government, and millions of platform users. A thoughtful First Amendment analysis would articulate a principled way of distinguishing legitimate government speech from illegitimate government coercion, a challenge that Judge Doughty’s opinion doesn’t meet. Predictably, the government immediately appealed the decision, and a panel of the Fifth Circuit has issued a temporary stay of the injunction while the appeal proceeds. Now, it’s up to the Fifth Circuit judges to grapple with these questions.
Mayze Teitler is a legal fellow at the Knight First Amendment Institute.