“[I]ts power seems inescapable—but then, so did the divine right of kings.”

— Ursula K. Le Guin

Social media companies are in Congress’s sights. In May 2016, in the wake of allegations that Facebook workers had suppressed pro-conservative viewpoints and links while injecting liberal stories into the newly introduced Trending Topics section, Senator John Thune sent a letter to Mark Zuckerberg demanding, among other things, a copy of the company’s guidelines for choosing Trending Topics, a list of all news stories removed from or injected into Trending Topics, and information about what steps the company would take to “hold the responsible individuals accountable.” Facebook complied, with Zuckerberg himself meeting with lawmakers.

During the recent hearings before the Senate and House intelligence committees on Russian interference in the 2016 presidential campaign, Senator Dianne Feinstein told the general counsels of Facebook, Google, and Twitter—whose CEOs were conspicuously absent—“You bear this responsibility. You’ve created these platforms. And now they’re being misused. And you have to be the ones to do something about it. Or we will.” Despite intensive lobbying efforts by these companies, both individually and through their collective trade association, legislation imposing new restrictions on how they operate is, “[f]or the first time in years, . . . being discussed seriously in Washington.” As one reporter put it, “In 2008, it was Wall Street bankers. In 2017, tech workers are the world’s villain.”

That Bay Area tech companies are having something of a PR crisis is clear. And in the rough and tumble of politics, that these companies would meet with and appease legislators is no great surprise. But if Congress does decide to get tough, how credible and wide-ranging is the regulatory threat, under current First Amendment jurisprudence?

Some prominent commentators claim that Facebook is analogous to a newspaper and that its handling of a feature like Trending Topics is analogous to a newspaper’s editorial choices. As a result, these commentators find congressional scrutiny of such matters to be constitutionally problematic. Moreover, the editorial analogy has been a remarkably effective shield for these tech companies in litigation. In a series of lower court cases, Google and others have argued that their decisions concerning their platforms—for example, what sites to list (or delist) and in what order, who can buy ads and where to place them, and what users to block or permanently ban—are analogous to the editorial decisions of publishers. And like editorial decisions, they argue, these decisions are protected “speech” under the First Amendment. Though the analogy has mostly been wielded against small-fry, often pro se plaintiffs, courts have tended to accept it wholesale.

Large consequences hinge on whether the various choices companies like Facebook and Google make are indeed analogous to editorial “speech.” The answer will partly determine whether and how the state can respond to current challenges ranging from the proliferation of fake news to high levels of market concentration to the lack of ad transparency. Furthermore, algorithmic discrimination and the discrimination facilitated by these platforms’ structures affect people’s lives today and no doubt will continue to do so. But if these algorithms and outputs are analogous to the decisions the New York Times makes on what to publish, then attempts to extend antidiscrimination laws to deal with such discrimination will face an onslaught of potentially insuperable constitutional challenges. In short, these companies’ deployment of the editorial analogy in the First Amendment context poses a major hurdle to government intervention.

Whether, or to what extent, the editorial analogy should work as a shield against looming legislation and litigation for companies like Facebook and Google is something this historical moment demands we carefully consider. My primary aim in this paper is to do just that. I will engage critically with, and ultimately raise questions about, the near-automatic application of the editorial analogy. The core takeaways are these: (1) we should be cognizant of the inherent limitations of analogical reasoning generally and of the editorial analogy specifically; (2) whether these companies’ various outputs should receive coverage as First Amendment “speech” is far from clear, both descriptively and normatively; (3) the proposition that regulations compelling these companies to add content (disclaimers, links to competitors, and so on) thereby compel the companies to speak is also far from clearly true; and, finally and most crucially, (4) given the limits of analogical reasoning, our future debates about First Amendment coverage should focus less on analogy and more on what actually matters—the normative commitments that undergird free speech theory and how our choices either help or hinder their realization.

To that end, I start by reviewing some of the cases in which the editorial analogy has been successfully deployed. Next, I lay the groundwork for rethinking the editorial analogy—first, by analyzing its internal weaknesses, and second, by raising other potentially compelling analogical frames. Each new analogy raises far knottier questions than I can address here, so I will briefly mention only a few, ending with the analogy brought to life by the Court’s recent language in Packingham. There, the Court, either strategically or recklessly, “equate[d] the entirety of the internet with public streets and parks” and declared it “clear” that “cyberspace” and “social media in particular” are now “the most important places (in a spatial sense) for the exchange of views.” The Court found social media to be “the modern public square” and stated that it is a “fundamental principle of the First Amendment . . . that all persons have access” to such a forum. This language casts doubt on whether the editorial analogy will be successful going forward. Its reliance on highly abstract characterizations also serves as a lesson. We should address First Amendment coverage questions through the lens of normative theory and not through a collection of ill-suited analogies.

The Editorial Analogy in Litigation

Zhang v. Baidu.com, Inc. is the case in which a lower court has most fully explained why, in its view, the editorial analogy applies to a search engine’s outputs. The plaintiffs, New York residents and self-described “promoters of democracy in China,” alleged that Baidu, the dominant Chinese search engine, intentionally delisted their pro-democracy websites from its search results in the United States at the behest of the Chinese government. And in so doing, they further alleged, Baidu violated their First Amendment rights. Baidu replied that its listing decisions were its protected speech. The Southern District of New York agreed, finding that “First Amendment jurisprudence all but compels the conclusion that Plaintiffs’ suit must be dismissed.” With no attention paid to the claim that Baidu was acting on behalf of the Chinese government, the court saw the relevant precedent as Miami Herald Publishing Co. v. Tornillo. There, the U.S. Supreme Court found unconstitutional a statute that required newspapers to provide political candidates a right of reply to critical editorials. The court in Baidu also saw Hurley v. Irish-American Gay, Lesbian, and Bisexual Group of Boston as an extension of Tornillo, equally applicable to Baidu. In Hurley, the Court ruled that requiring parade organizers to permit a pro-LGBT group to participate would entail unconstitutionally compelling the parade organizers to speak.

The Baidu court’s holding followed directly from its analogical reasoning. It saw Baidu as organizing information, which it thought sufficient to make the relevant analogy a “newspaper editor’s judgment of which . . . stories to run.” The Supreme Court previously found a newspaper’s judgment of which stories to run protected “speech” and struck down as compelled speech a requirement that it include content that went against that judgment. Thus, with Baidu analogized to a newspaper, its judgments about which sites to list were also protected “speech,” and requiring Baidu to include sites against its wishes would be unconstitutional compelled speech, too.

The editorial analogy again won out, this time for Google, in e-ventures Worldwide, LLC v. Google, Inc. e-ventures is a search engine optimization (SEO) firm. Such firms seek to improve the visibility of client websites in organic (i.e., non-paid) search results. Clients like this because the higher their websites rank organically, the heavier the flow of traffic to their sites, which in turn enables them to sell advertising space on their sites at higher rates. Search engine companies are not big fans of SEO firms—they see them as trying to game the system for unpaid rankings. More to the point, when SEO firms are successful, companies spend a portion of their advertising budgets with the SEO firms and not with Google for paid placement in search results. As a result, a perpetual game of cat and mouse ensues. Apparently unable to tweak its search algorithm in a way it liked, Google instead manually delisted 231 websites belonging to e-ventures clients. e-ventures attempted to reach Google through several channels in the hope of getting the sites relisted, but was unsuccessful. It then filed suit, at which point Google relisted the sites.

In its suit, e-ventures alleged that the delisting constituted unfair competition under the Lanham Act, tortious interference with business relations, and a violation of Florida’s Deceptive and Unfair Trade Practices Act. e-ventures also alleged that Google’s statements about its search results—that “Google search results are a reflection of the content publicly available on the web” and that “[i]t is Google’s policy not to censor search results”—were false and deceptive in light of its delisting practices. Google responded by asserting, among other things, that e-ventures’ claims were barred by the First Amendment, as Google’s search results were its editorial judgments and opinions. While the court did not grant Google’s motion to dismiss, it ultimately agreed with Google at summary judgment that the First Amendment protects its delisting decisions. And the court did so by squarely analogizing Google to a publisher and its judgments about what to list or delist to a publisher’s decision about what to publish.

That Google’s actions were commercial and arguably anticompetitive did not matter. That Google was alleged to have made deceptive statements did not matter. On the contrary, the court expressly opined that Google’s free speech rights protect its listing and delisting decisions “whether they are fair or unfair, or motivated by profit or altruism.” The court concluded, in effect, that if Google’s results are speech, unfair competition laws cannot apply. That conclusion is deeply problematic and difficult to square with the obvious fact that laws addressing unfair and deceptive advertising prohibit certain speech all the time. It also underscores the editorial analogy’s powerful influence and what its successful use puts at stake.

That said, while the editorial analogy has proved potent in lower court cases, there is still time to rethink it. First, the Supreme Court has yet to weigh in. As I mentioned before and will discuss below, the Court’s most recent comments in this area come in Packingham. If we take the majority at its word, that case suggests that it is an analogy to the public square, and not to a publisher, that ought to guide First Amendment thinking about social media. Second, plaintiffs in these prior cases were much more modestly resourced than the search titans they opposed. Some plaintiffs proceeded pro se. As a practical matter, this means that lower courts have been under little pressure to interrogate the cursory analogical-reasoning rationales that favored the defendants.

But this too might change. In what Yelp’s vice president of public policy described as the “most significant enforcement event in consumer tech antitrust” since the action against Microsoft in 2000, Google was fined a record-breaking €2.4 billion by European regulators in June 2017 for abusing its market dominance by giving an illegal advantage to its own products while demoting rivals in its comparison shopping service, Google Shopping. While EU actions do not ensure any movement domestically, they can bring to light information that further tarnishes Silicon Valley’s reputation and thus contributes to the erosion of the basis for its companies’ exceptional treatment to date. Within the United States, moreover, Yelp and TripAdvisor have repeatedly argued that Google deliberately diverts users searching for their sites to Google-owned alternatives. Google has attributed some of these results to bugs, but its competitors argue otherwise. It is at least possible that a major (and well-funded) lawsuit in the United States—and with it, a vigorous battle over First Amendment coverage, the editorial analogy, and unfair competition laws—may yet materialize.

The Limits of the Editorial Analogy

The analogical argument works something like this: A does x and merits treatment y. B does x. Therefore, B is analogous to A, and B also merits treatment y. We can challenge arguments of this form in several ways. First, internal to the argument, we can question the relationship between doing x and getting treatment y. We cannot assume that doing x always merits treatment y. Indeed, we cannot assume that doing x has anything to do with why treatment y is merited. An example will help make this more concrete: Take the action of eating a sundae without permission. If I work at the ice cream shop from which I took that sundae, a reprimand from my employer might be merited. But say instead that I’m a professor. We likely think that it would be absurd for my employer to reprimand me for eating a sundae without permission. In both cases I did the same thing—ate a sundae without permission—but additional facts change what treatment we think that same action merits. Put simply, even when A and B have some similarities, there can be relevant dissimilarities between them that render treatment y appropriate for one but not the other.

A second challenge, and one I would call external, is to propose a different analogy. Why analogize B to A and not B to C? Consider that newspapers (A) provide people information (x) and that requiring newspapers to provide different information (for example, a right of reply) may be struck down as compelling them to speak (merits treatment y). Search engines (B) also provide people information (x). As a result, search engines are analogous to newspapers (A), and so we might think that requiring a search engine to provide different information should similarly be struck down as compelling it to speak (merits treatment y). Now consider an alternative analogy. Law schools (C) provide information (x) by hosting and organizing recruitment fairs, to which they invite a limited number of employers. Requiring law schools to allow military recruiters into such fairs and to give them equal access to students does not compel the schools to say anything (they remain free to protest the military’s policies), so this requirement is constitutional (merits treatment z). Search engines (B) provide information (x) via their rankings, in which a limited number of sites are included. Therefore, requiring search engines to allow sites into those rankings and to give them equal access to the search engine’s users similarly does not compel the search engine to speak (it remains free to protest the competitor’s speech). Thus, that requirement is constitutional as well (merits treatment z). Treatments y and z are incompatible. Yet, we can construct analogies that call for search engines to get both. That’s a problem.

Like all analogies, the editorial analogy is vulnerable on both the internal and external fronts.

Internal Weaknesses of the Analogy

In a white paper paid for by Google at the same time that the Federal Trade Commission was investigating whether the company had abused its market dominance, Eugene Volokh and Donald Falk argue that Google’s organic search results are fully protected speech and, as a result, are insulated from antitrust scrutiny. Relatedly, they argue that requiring Google to change its search results (for example, by placing Yelp higher) would unconstitutionally compel Google to speak in much the same way that a right-of-reply law would unconstitutionally compel a newspaper editor to speak. In making their argument, the authors rely heavily on the editorial analogy. As they put it, companies like Google are “analogous to newspapers and book publishers” in that they both “convey a wide range of information.” They claim that search results are also analogous to editorial publications, as both involve choices about “how to rank and organize content,” “what should be presented to users,” and “what constitutes useful information.” This description of (some of) what Google does is accurate. But, crucially, these analogies do not substantiate the authors’ two claims—namely, that (1) search engines and search results merit the same treatment as publishers and editorial judgments for First Amendment purposes, and (2) requiring Google to modify its search results would compel Google to speak.

Let’s start with the first claim—that Google is analogous to a publisher because it, too, conveys a wide range of information. Now consider the application of that argument to a familiar saying: “Actions speak louder than words.” We say this because actions convey a wide range of information, often more truthful information than is conveyed through speech alone. Yet we certainly do not think that whenever people act, they are analogous to newspaper editors under the First Amendment and that their actions are therefore covered as speech. Thus, we can conclude that conveying a wide range of information is not sufficient for being treated like a publisher under the First Amendment. And given this, it straightforwardly follows that pointing out that Google conveys a wide range of information does not yet tell us whether Google should be treated like a publisher under the First Amendment.

Now consider the layout of a grocery store. There are good reasons that pharmacies are in the back, that certain brands are at eye level, and that candy is near the checkout. All those choices convey a wide range of information to consumers. Do we think that for purposes of First Amendment analysis, grocery stores are therefore analogous to publishers, because grocery stores convey a wide range of information through their organizing of products? Is the layout of the grocery store analogous to an editorial for purposes of speech coverage? My guess is most people think the answer is an obvious no.

If any individual or organization that satisfies this “conveys a wide range of information” criterion is deemed analogous to newspaper and book publishers for First Amendment purposes, then we have misunderstood how liberal political theory and free speech theory work. At the heart of liberal political theory is the idea that everyone is free to live according to their own ideals, so long as doing so does not unduly interfere with other people’s ability to do likewise. As a result, the government can legitimately restrict people’s freedom only when it is necessary to prevent harm or secure the demands of justice. The idea at the heart of liberal free speech theory is that when it comes to certain communicative acts, a commitment to individual freedom isn’t enough and must be bolstered by extra protections that make what counts as “speech” less liable to regulation than similarly harmful or unjust non-speech. This doesn’t mean that the government can willy-nilly regulate whatever it wants except for speech; it must always show the harm or injustice that results from the object of regulation. Rather, liberal free speech theory says that regulating a subset of those harms or injustices—those that come directly from “speech”—should be more difficult, even when we acknowledge that they are harmful or unjust. But this whole scheme presupposes that what gets covered as “speech” for this purpose is limited, a special domain of extra protection. We should remember that this special domain comes at a cost. “Free” speech isn’t truly free. When we grant “speech” coverage, we require those who are harmed or treated unjustly by that speech to absorb more of its costs. Once any entity that conveys a wide range of information is suddenly analogous to a newspaper, we have begun making what was supposed to be exceptional treatment the new rule. While some might welcome this libertarian, deregulatory move in the short run, it is not only anathema to liberal theory but also, I suspect, unlikely to yield attractive outcomes in the long run.

Volokh and Falk next say that search results are analogous to editorial publications because both involve choices about “how to rank and organize content,” “what should be presented to users,” and “what constitutes useful information.” These similarities to publishers fare no better. As I said before, every store organizes and “ranks” content through its layout. Are all store layouts now akin to editorial publications under the First Amendment? Are all stores First Amendment publishers? Again, I think the answer is no. But as an ex-Google product philosopher (and who doesn’t want that title?) points out, companies like Facebook, Google, and Twitter seek to influence users by means of various organizational and content choices in much the same way that grocery stores do by their layout and product placement.

One might respond that ranking and organizing count as analogous to editorial functions only if what is ranked and organized is itself speech. But this is implausible. Surely Volokh and Falk think that a restaurant ranking qualifies as speech even though the underlying things ranked and organized—restaurants—are not. Thus, that the thing being ranked and organized is itself speech is not necessary for coverage. Is it sufficient?

Here is an argument for that position: A bookstore selects which books to sell. Wouldn’t we say that its selection of those books is itself speech? And if so, doesn’t that show that curating other people’s speech is necessarily speech itself? Once again, I think the answer is no. First, I hesitate to grant the premise—that we would call a bookseller’s book selections an independent instance of protected speech. I say this because in cases where the state has banned the sale of protected speech, the Court has invoked the First Amendment rights of either the speech’s creators or its would-be buyers. When sellers challenge these bans, they point to the First Amendment rights of those other parties. Take Brown v. Entertainment Merchants Association, where the Court struck down a law banning the sale of violent video games. Although its opinion was admittedly not a paragon of clarity, the Court in Brown considered the First Amendment rights of the games’ creators and of the children who would buy them. Nowhere did the Court consider whether the ban might violate the speech rights of video game sellers. Second, and more fundamentally, even if a bookseller’s choice of which books to sell counts as speech, that still does not show (1) that every time an entity curates third-party speech, that curation is itself speech; nor does it show what might ultimately be more crucial—namely, (2) that, as with the newspaper in Tornillo, requiring a modification of that curation constitutes compelled speech. I have already gone over the reason for (1). To see (2), consider the military recruitment case Rumsfeld v. Forum for Academic and Institutional Rights (FAIR).

In FAIR, a federal statute required law schools, on pain of losing certain federal funding, to provide military recruiters the same access to students as that given to other recruiters. A group of law schools argued that requiring them to include the military in their fairs would send students the message that the schools endorsed the military’s “don’t ask, don’t tell” policy, which they did not. As a result, the schools argued that the requirement constituted unconstitutional compelled speech. The Court disagreed, holding that requiring law schools to give military recruiters equal access, and even to send scheduling emails to students on the military recruiters’ behalf, did not compel the law schools to speak at all. As the Court saw it, “schools are not speaking when they host interviews and recruiting receptions.” Even more, the Court thought some of the schools’ compelled-speech claims “trivialize[d] the freedom protected” in its prior compelled-speech cases. Given the Court’s ruling in FAIR, and even granting that the curation of third-party speech is itself speech, it is not the case that requiring an entity to include speech it dislikes within its curation necessarily entails compelling that entity to speak.

A final move someone might suggest to rehabilitate the Volokh and Falk position entails looking at the restaurant ranking differently—it doesn’t rank and organize restaurants but instead information about those restaurants. And so, any entity that makes such rankings is in the business of ranking and organizing information and is relevantly analogous to a publisher making editorial selections. Two points here. First, I find it difficult to characterize a restaurant ranking as the organization of information about restaurants. It seems more natural to say that it is a ranking of restaurants that also generates information (which restaurants are best and which are worst). Second, as already noted, virtually any activity that involves the creation of information entails some curatorial decisions. Unless we are willing to say that every such activity warrants constitutional protection, we must concede that the fact that newspaper editors and search engines both engage in the curation of information is not sufficient for finding the latter analogous, for First Amendment purposes, to the former.

Potentially Relevant Dissimilarities

Though it often goes unnoticed, how convincing we find a piece of analogical reasoning depends not only on relevant similarities but also on the absence of relevant dissimilarities. And as many have already pointed out, there are significant and arguably relevant dissimilarities between the outputs of tech companies like Facebook, Google, and Twitter, on the one hand, and newspapers, on the other.

To make this point about the importance of dissimilarity more concrete, consider the development of oil and gas rights in the United States. Courts were faced with the question of whether landowners had property rights to oil and gas reservoirs that lay underneath their land. Reasoning by analogy, early American courts were “captured” by an analogy to the law of capture. If you capture a wild animal while you’re on your own property, it’s yours. Therefore, analogously, so long as you take out the gas and oil while you’re on your own property, it’s also yours. But of course, while in the grip of this analogy, courts failed to see the relevant dissimilarities between hunting wild animals and extracting oil and gas that made the analogy, and thus the application of the law of capture to oil and gas, problematic. Unlike a wild animal, oil and gas migrate within a common reservoir that can span many parcels, so one owner’s extraction drains the resource from beneath the neighbors’ land. The capture rule thus incentivized landowners to over-drill so as to extract as much oil and gas as possible before their neighbors could do the same. Eventually we figured out that sometimes the dissimilarities are more important than the similarities and changed the rule.

Returning to editorial publications and tech company outputs, some scholars have argued that the use of algorithms creates a relevant dissimilarity. As Oren Bracha and Frank Pasquale have put it, we should distinguish between dialogical and functional expression and only give First Amendment coverage to the former. The rough idea is that dialogical expression is perceived by the audience as something with which it can agree or disagree, criticize or support, argue for or against. In contrast, functional expression, while not clearly defined, is expression that the audience does not perceive as speech to which it can respond in these ways. Bracha and Pasquale argue that algorithmically generated search outputs are functional because users do not perceive rankings as expression with which they can dialogically engage.

Volokh and Falk object to claims that algorithms and their outputs are not speech, pointing in part to the fact that algorithms are written by humans and result from engineers’ judgments. If we put them in conversation with Bracha and Pasquale, Volokh and Falk might add that audiences do perceive these outputs as judgments with which they can critically engage—just consider the public outcry over certain rankings and over what does or does not trend. Even if we accept the dialogical/functional methodology, it seems that both sides are only partially right. Bracha and Pasquale are wrong to suggest that algorithmically encoded curation is necessarily functional. As others have suggested, we can conjure up cases of algorithmic operations that look dialogical. This undermines the claim that the algorithm is what makes Facebook’s and Google’s curation non-speech.

Yet all of this is consistent with the plausible view, contra Volokh and Falk, that in light of how these companies portray themselves and their outputs to the public, outputs like search results, lists of what is trending, and newsfeed fodder are not understood by most members of the public as dialogical expression on the order of the content a newspaper publishes. While newspapers generally stand behind their content, Google, Facebook, and Twitter have all explicitly disavowed the substance of their results. Newspapers also (and unsurprisingly) hold themselves out as editors, whereas these tech companies do everything they can to run from that categorization. It strikes me that selling themselves to the public in this way does lessen users’ perception that their outputs are dialogical. I doubt many people enter a search query into Google and think, “I now know Google’s views on my query.” And part of the reason for this may well be that these companies expressly tell users not to think the results are their speech (even as they claim the opposite in litigation). Self-presentation as not-a-speaker has another important consequence: Users may not perceive requirements that these companies alter their results as tantamount to compelling the companies to speak.

To see why the public might not perceive these algorithmic outputs as the speech of these companies, let’s turn to a few specific examples.

Google’s Position: Not a Speaker

We can start with the controversy over Google’s autocomplete function. As most reading this will be aware, when you start typing a search query into Google’s search box, Google automatically makes suggestions for how the query should be completed. These suggestions, which are generated algorithmically, depend on several variables, including what you are typing, what you have previously searched for, what others have searched for, and what is currently trending. In 2016, users noticed that when they typed “are Jews” or “are women,” Google suggested “evil” to complete the query. Similarly, when users typed “are Muslims,” Google suggested “bad.” In 2011, when a certain Italian citizen’s name was typed into Google’s search box, autocomplete suggestions included the Italian words for “con man” and “fraud.” The individual then sued Google for defamation and won.

If we really think the outputs of Google’s algorithms are its speech, this defamation suit makes sense. But Google argued the opposite. In its statement after losing the suit in Italian court, Google said, “We believe that Google should not be held liable for terms that appear in autocomplete as these are predicted by computer algorithms based on searches from previous users, not by Google itself.” If you go to Google’s support pages today and look under “Search using autocomplete,” you will see the following: “Note: Search predictions aren’t the answer to your search. They’re also not statements by other people or Google about your search terms.” We should pause to reflect on this. Google is not simply saying that the views of those it ranks are not its speech. More than that, it expressly disavows as its own speech the very rankings and algorithmic outputs it claims in litigation to be its editorial speech.

There are, in fact, numerous situations in which Google disavows as its speech the very rankings that commentators like Volokh and Falk argue are both its speech and analogous to the speech of editorial publications. Stuart Benjamin describes a case in which Google’s top result for the term “Jew” was an anti-Semitic site called “Jew Watch.” When civil rights groups pressured Google to delist the site, Google instead posted a note stating that its results rely on “algorithms using thousands of factors to calculate a page’s relevance to a given query” and that they don’t reflect “the beliefs and preferences of those who work at Google.” Google thus presented itself as a conduit for the speech of others—not so different from how Google saw internet service providers (ISPs) as conduits, at least when I worked there. Now consider Tornillo, where the newspaper was so intimately tied to the content it published that a mere right of reply was thought to compel the newspaper to speak. The difference between Tornillo and Google’s situation is clear. Google’s point is that its search-related outputs aren’t its speech at all.

Google most recently and explicitly eschewed the editorial analogy in its testimony before the Senate Judiciary Subcommittee on Crime and Terrorism in October of last year. It is worth reproducing in full the relevant dialogue between Louisiana Senator John Kennedy and Richard Salgado, Google’s law enforcement and information security director:

Kennedy: Are you a media—let me ask Google this, to be fair. Are you a media company, or a neutral technology platform?
Salgado: We’re the technology platform, primarily.
Kennedy: That’s what I thought you’d say. You don’t think you’re one of the largest, the largest newspapers in 92 countries?
Salgado: We’re not a newspaper. We’re a platform for sharing of information that can include news from sources such as newspapers.
Kennedy: Isn’t that what newspapers do?
Salgado: This is a platform from which news can be read from news sources.

If we are stuck making First Amendment coverage determinations by analogy, we might want to look beyond the analogy Google explicitly rejected in its congressional testimony.

Facebook’s Position: Not a Speaker

The history of Facebook’s Trending News and the recent controversy surrounding how its architecture facilitates—indeed, encourages—the proliferation of inflammatory and weaponized misinformation and propaganda provide further examples of a company that deliberately disclaims its curatorial products as its speech and rejects the role of editor.

Facebook launched Trending News in January 2014. By this time, Twitter had established itself as the go-to social media site for breaking news and minute-by-minute coverage of live events. As a result, Twitter could “gobble up enormous amounts of engagement during TV premieres, award shows, sport matches, and world news events.” Twitter also successfully commercialized its Trending Topics feature, selling lucrative advertising space in the form of “promoted trends.” Facebook’s Trending News was viewed as the company’s attempt to emulate and compete with Twitter in this commercial space.

By the summer of 2014, Facebook was already facing criticism for its lack of serious news, both in Trending News and its main news feeds. The civil unrest in Ferguson was considered the year’s “most important domestic news story,” and while Twitter was hailed for its second-by-second coverage of Ferguson, there was scant evidence of the conflict on Facebook, which instead seemed dominated by the ALS ice bucket challenge. Some observers conjectured that Facebook’s feed algorithms were to blame. At one point, a senior Facebook employee said that the company was “actually working on it,” but uncertainty about the nature of the problem and Facebook’s response remained. Should we understand the lack of Ferguson coverage in people’s feeds as the editorial decision of Facebook? Did Facebook see the lack of Ferguson coverage as its own speech? After all, on Volokh and Falk’s account, that absence was clearly the result of algorithmic construction choices, which in turn reflected the judgments of the company’s engineers. And Facebook was criticized for its algorithm’s design, which effectively hid controversial content and showed users more universally agreeable content, because the latter is what “keeps people coming back.” But once again, and unsurprisingly, this is not how Facebook saw it. Facebook did not see the resulting absence of Ferguson coverage as its own speech, let alone the product of a deliberate decision akin to the choices made by a newspaper to write about or neglect that same topic. Nor does Facebook’s recognition that it needed to respond to the controversy by tweaking its algorithm, which it did, necessarily suggest that the lack of Ferguson coverage in Facebook feeds was an editorial judgment.

As this episode underscored, Facebook straightforwardly does not see itself as an editor or its curation as its speech. Instead, in a Q&A session, Zuckerberg, much like Google, characterized Facebook as more analogous to a neutral conduit or tool that enables the speech of others:

What we’re trying to do is make it so that every single person in the world has a voice and a channel and can share their opinions, or research, or facts that they’ve come across, and can broadcast that out to their friends and family and people who follow them and want to hear what they have to say.
. . . We view it as our job to . . . giv[e] everyone the richest tools to communicate and share what’s important to them.

This innocent-conduit-for-the-speech-of-others framing has persisted even as, by 2014, Facebook had become the primary driver of traffic to most of the top news websites and, by 2017, 45 percent of U.S. adults were getting at least some of their news from Facebook. Facebook has become “to the news business what Amazon is to book publishing—a behemoth that provides access to hundreds of millions of consumers and wields enormous power.” Nevertheless, Greg Marra, the engineer who oversees Facebook’s News Feed algorithm, said in an interview that he and his team “explicitly view ourselves as not editors. . . . We don’t want to have editorial judgment over the content,” because users are in the best position to decide what they want to see.

Facebook’s response to the 2016 controversy surrounding the curation of Trending Topics further drives home the editorial disanalogy. Back in 2014, Facebook said that its Trending Topics articles were ranked by an algorithm based on metrics like popularity and timeliness. Until the publication of a story by Recode in August 2015, there appears to have been no awareness that this was not the whole truth. That story suggested that Facebook’s workers had some hand in shaping Trending Topics content—not by selecting which articles appeared (“that’s done automatically by the algorithm”) but by writing headlines. But in two explosive pieces on the tech news site Gizmodo in May 2016, Michael Nunez reported that the involvement of workers went much further: Material appearing in Trending News was curated by Facebook contractors who, in addition to writing headlines, selected which topics trended and which sites they linked to. These contractors reported that they were told to link to stories from preferred outlets like the New York Times; that they had a prerogative, which they regularly exercised, to blacklist topics that weren’t covered by multiple traditional news sources or that concerned Facebook itself; and that they were told not to publicize that they were working for Facebook, presumably because the company “wanted to keep the magic about how trending topics work a secret.” Contractors subsequently reported that they had also injected stories about topics like Black Lives Matter into Trending News at the behest of management, who thought certain topics should be trending regardless of algorithmic metrics. Most controversially, the contractors reported that pro-conservative stories were regularly excluded from Trending News, not at management’s instruction, but on account of left-leaning colleagues using their prerogative to blacklist topics. Based on these reports, Nunez argued that Facebook wanted to “foster the illusion of a bias-free news ranking process” and that Facebook was obscuring its workers’ involvement because it “risk[ed] losing its image as a non-partisan player in the media industry” rather than “an inherently flawed curator.” In Nunez’s view, Facebook worked like a newsroom, expressing the views of its staff in its reporting, in “stark contrast” to the company’s depiction of Trending News as merely “topics that have recently become popular on Facebook” or “a neutral pipeline for distributing content.”

This did not sit well with Republicans. Within hours of Nunez’s second report, Republican National Committee Chairman Reince Priebus demanded that Facebook “answer for conservative censorship.” A post on the GOP’s official blog argued (presciently) that “Facebook has the power to greatly influence the presidential election” and objected to its platform “being used to silence viewpoints and stories that don’t fit someone else’s agenda.” Shortly thereafter, Senate Commerce Committee Chairman John Thune—a leading critic of the Federal Communications Commission’s fairness doctrine until it was officially repealed (after years of non-enforcement) in 2011—notified Facebook that his committee was exploring a consumer protection investigation. In his words:

If Facebook presents its Trending Topics section as the result of a neutral, objective algorithm, but it is in fact subjective and filtered to support or suppress particular political viewpoints, Facebook’s assertion that it maintains a platform for people and perspectives from across the political spectrum misleads the public.

Thune gave Facebook fourteen days to provide details of its guidelines for preventing the suppression of political views, the training it provided workers in relation to those guidelines, and its methods for monitoring compliance. Despite the view of lawyers who thought that Facebook could (and perhaps should) invoke the editorial analogy and reject Thune’s demands on First Amendment grounds, the company responded to Thune, explained its practices, and shared its internal Trending Topics review guidelines. Facebook’s senior leaders also met with a number of top Republican leaders to reassure them that it was an impartial platform. In its letter to Senator Thune, Facebook said that it found “no evidence of systematic political bias” but couldn’t rule out occasional biased judgment by its curators. It also identified, and pledged to reform, two parts of its process for generating Trending News. First, it would end its practice of boosting topics being covered by preferred major media players like BBC News, CNN, Fox News, and the New York Times (a change, looking back, that we might wish Facebook had not made). Second, the company stated that it would “take prompt remedial actions” should it find evidence of “improper actions taken on the basis of political bias.”

Facebook’s response to this issue, in the following months and amid a contentious U.S. election cycle, was to replace the Trending News curatorial team with engineers who had a more mechanical role in approving stories generated by the Trending News algorithm. These engineers, as one writer poetically put it, would be “the algorithm’s janitors.” Per its revised guidelines, Facebook removed its own headlines and summaries, and all featured news stories, including their accompanying excerpts, became algorithmically generated, based on “spikes in conversation.” The only non-algorithmic effect on content came when reviewers found clear mistakes—such as duplicate topics, posts about non-news, and posts about fictional events—and when they separated topics that had been automatically clustered under a single heading by the algorithm. Before approving a topic, reviewers also confirmed that each topic contained at least three recently posted articles or five recently published posts, reviewed the keywords associated with the topic, nominated related topics, and set the topic location and category (for example, business, sports, or politics). From this point on, the source of posted articles no longer had a bearing on whether a topic would appear in Trending News.

Facebook thus changed its practices to become more “neutral,” however amorphous the concept. The company wanted to make clear that its rankings were not its speech. Recall that in the FAIR case, the Court thought that requiring law schools to include military recruiters was not compelled speech, as even “high school students can appreciate the difference between speech a school sponsors and speech the school permits because legally required to do so, pursuant to an equal access policy.” Facebook is asking users to do this very same thing—to appreciate that what is trending is not Facebook’s speech, even though it is on its platform.

Unfortunately, the more Facebook went out of its way not to be an editor, the more its Trending News algorithm became, as various news outlets characterized it, a “disaster,” an algorithm “go[ne] crazy.” A few days after the change, Megyn Kelly was trending with a false headline: “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out for Backing Hillary.” At the same time, four Washington Post journalists ran an experiment on their personal Facebook accounts to look at the sorts of stories in Trending News and uncovered that, from August 31 to September 22, there were “five trending stories that were indisputably fake and three that were profoundly inaccurate.” Throughout all of this, Zuckerberg did not reconsider his prior insistence that Facebook is “a tech company, not a media company.” For better or worse, it is hard to imagine Facebook trying harder to distance itself from both the editorial analogy and any claim that what showed up in Trending News was its speech. Even in the wake of Trump’s win, when “everyone from President Obama to the Pope . . . raised concerns about fake news and the potential impact on both political life and innocent individuals,” Zuckerberg reiterated that he and his company “do not want to be arbiters of truth ourselves, but instead [want to] rely on our community and trusted third parties.”

When we diagnose what went wrong with regard to fake news, we need not conclude that Facebook made the mistake of trying to be too neutral. Instead, we can realize that our (and their) previous conception of what “neutrality” entailed—not privileging certain news sources and treating all sources of “news” the same—was wrong. Facebook, and the rest of us, learned that treating fake news sites on a par with the Wall Street Journal and the New York Times is itself a decidedly non-“neutral” statement about how we should treat information from those sites. Just recently, Facebook announced that it will once again rank news sources, but this time it plans to do so based on user evaluations of those sources. We can debate this method as well, but it represents yet another attempt by Facebook to figure out what “neutral” means and then do it.

Finally, like Google, Facebook and Twitter were asked during recent congressional hearings how they “respond to . . . the growing concerns that [they] and other Silicon Valley companies are putting a thumb on the scale of political debate and shifting it in ways consistent with the political views of [their] employees?” Facebook General Counsel Colin Stretch replied, “Senator, again, we think of ourselves as a platform of all ideas—for all ideas and we aspire to that.” Stretch then discussed training given to prevent bias in its employees, saying, “We want to make sure that people’s own biases are not brought to bear in how we manage the platform.” Responding to the same question, Sean Edgett of Twitter insisted that “our goal and . . . one of our fundamental principles at the company is to remain impartial.”

Whatever the analogical similarities these companies share with publishers, the companies themselves treat the dissimilarities as more salient. Given this, it is hard to see why we should extend First Amendment coverage to the choices they make about how to run their platforms. And perhaps more significantly, these companies’ self-conception dramatically weakens the claim that requirements to change their outputs would unconstitutionally compel them to speak.

Competing Analogies

In addition to delving into some internal weaknesses of the editorial analogy, we can cast further doubt on its near-automatic acceptance by raising rival analogical frameworks that either (1) suggest that these companies’ judgments should not receive free speech coverage or (2) assume some coverage but suggest ways that government regulation would not count as compelling these companies to speak. The ISP-like conduit analogy has already been discussed extensively by others (and briefly by me above), so here I will mention three other sets: shopping malls or law schools, fiduciaries or public trustees, and company towns or public forums. My goal here is not to convince you of one analogy above the rest but instead to show the limitations and (often unstated) normative judgments inherent in making First Amendment coverage determinations via analogy at all.

Shopping Mall or Law School

In PruneYard Shopping Center v. Robins, the appellees, a group of high school students, set up a stand to gather signatures for a petition in a privately owned shopping center. Security guards forced the students to leave; the students sued, claiming a right to solicit signatures on the premises under the California Constitution. The California Supreme Court ruled in their favor, but the PruneYard Shopping Center appealed, claiming a violation of its speech rights under the Federal Constitution. Most interestingly for our purposes, in its briefs PruneYard cited Tornillo to argue analogically. That is, PruneYard argued that requiring it to allow the students to petition was analogous to compelling newspapers to publish replies by political candidates they criticize. Now, we can see that there are some similarities between a shopping center and a newspaper—for example, both decide what to present to consumers, and both convey information to those consumers by means of their curatorial decisions (i.e., they share the same similarities Volokh and Falk identified between newspapers and Google). But crucially, the U.S. Supreme Court did not think those similarities were salient. Instead, the Court took a different, and better, methodological approach. It looked at the reasoning underlying Tornillo to see whether that same reasoning was applicable to a shopping mall. As the PruneYard Court saw it, the state cannot force newspapers to publish right-of-reply articles because doing so would deter editors “from publishing controversial political statements” and thus limit the “vigor” and “variety” of public debate. But such concerns did not apply in the case of a shopping center, and so the analogy did not hold sway. The Court ruled that PruneYard’s First Amendment rights were not infringed by the students’ state-given rights of expression and petition on its property. Indeed, the Court did not think allowing the students to petition compelled PruneYard to speak at all.

The Court again discussed and rejected the Tornillo analogy in FAIR. While the law schools argued that requiring them to treat military and non-military recruiters alike unconstitutionally compels them to speak—to send a message about their views on a military policy with which they disagreed—the Court thought otherwise. Unlike a newspaper engaging in First Amendment–protected activity in choosing which editorials to run, the Court held that “schools are not speaking when they host interviews and recruiting receptions.”

We can analogize both the PruneYard Shopping Center and the law schools to Facebook Trending and Google Search in a way that has prima facie appeal. Like PruneYard and the schools, neither Facebook nor Google is literally a newspaper. Both companies’ platforms, like the shopping center, are generally accessible to all. Like the shopping center’s selecting which retailers to lease space to and the law schools’ selecting which employers to participate in their recruitment fairs, Facebook and Google make curatorial decisions. As I have discussed at length above, Facebook and Google can and do publicly dissociate themselves from the views expressed by people who speak through their platforms and from the products of their own curatorial efforts (such as a particular ranking). The Supreme Court thought it important that PruneYard and the law schools were capable of doing the same. Thus, if we reason by this analogy, Facebook and Google are also not compelled to speak when required to let others speak on their platform.

As in PruneYard, it is also not obvious that regulations preventing Facebook, Google, and Twitter from making certain curatorial and architectural choices—for example, from delisting competitors’ sites or refusing their ads, deactivating user live streams at the behest of police with no judicial oversight, striking deals with record labels to preemptively block the upload of certain user videos, or relying on monetization models that encourage addictive behaviors and the development of polarized epistemic bubbles that in turn facilitate the viral spread of fake news and propaganda—would limit the vigor or variety of public debate. Indeed, it’s important to remember that even if, as in PruneYard, the state can force these private actors to permit third-party speech in ways that do not require the companies themselves to speak, the First Amendment rights of users remain. The government could not have silenced the high school petitioners in PruneYard, and the same can be said for political dissent on Facebook.

In short, we can plausibly analogize Facebook, Google, and Twitter to the shopping center in PruneYard or the law schools in FAIR, instead of to the newspaper in Tornillo. And when we do, certain regulations don’t look constitutionally problematic after all.

Fiduciary or Public Trustee

An alternative analogical approach conceives of major tech companies as information fiduciaries. Tim Wu raises a similar idea when he asks whether new laws and regulations should “requir[e] that major speech platforms behave as public trustees, with general duties to police fake users, remove propaganda robots, and promote a robust speech environment surrounding matters of public concern.” As Wu points out, such a move would require a reorientation of the First Amendment so as to renew the concern the Court evinced for the speech rights of listeners (or users) in cases like Red Lion Broadcasting Co. v. FCC.

While this analogy may seem unlikely to be adopted in practice, such a move accords with the Court’s recognition in Packingham of cyberspace as “the most important place” for the exchange of views. In the recent congressional hearings with social media companies, it was also clear that all the participants were operating on a background assumption that while dealing with problems like those generated by Russian interference in the election, these companies had to be mindful of First Amendment principles. At one point, Senator Dick Durbin remarked, “Now take the word Russian out of it. A Facebook account that promotes anti-immigrant, anti-refugee sentiment in the United States. I don’t know if you would characterize that as vile. I sure would.” Pursuing this concern, Senator Durbin asked, “How are you going to sort this out, consistent with the basic values of this country when it comes to freedom of expression?”

If we thought of these companies as no different from any other private company, the idea that their solutions need to be consistent with the First Amendment would seem confused. Under existing doctrine, the tech companies don’t need to comply with the First Amendment, nor concern themselves with the First Amendment rights of users, because they aren’t engaged in state action. But even putting aside a finding of state action, members of the government, ordinary citizens, and the companies themselves do seem to see the companies as having a fiduciary-type role, given the importance of their platforms as spaces of public debate.

A shareholder proposal filed with Facebook and Twitter by Arjuna Capital (an activist investment firm) and the New York State Common Retirement Fund (the nation’s third-largest public pension fund) likewise called, in essence, for further movement toward a public trustee role. And Zuckerberg embraced a public trustee model in both his 2018 annual self-challenge and his Yom Kippur atonement. Zuckerberg did not commit to turning Facebook into a better newspaper editor; he suggested that the company would “assume the responsibilities implied by [its] power,” much as a public trustee would. And while these latter two are Zuckerberg’s personal commitments, as Facebook’s CEO and a controlling shareholder, he has fiduciary duties of his own to think about.

Like the editorial analogy, analogizing these companies to fiduciaries or public trustees is prima facie plausible. Indeed, even more so than in the case of the editorial analogy, pretty much all of the relevant parties act (at least outside of litigation) as if something like this were the case today. If these companies were analogized to fiduciaries for purposes of First Amendment law, then, as with lawyers and doctors, case law supports the regulation of their fiduciary-related choices, even assuming those choices are speech.

Company Town or Public Forum

When considering the company town or limited public forum analogy, we should distinguish two positions: (1) the social media sites themselves are like company towns, or create limited public forums, such that when a company bans or delists someone, there are First Amendment implications; and (2) government officials who communicate with the public through their pages on these privately owned platforms can violate users’ First Amendment rights by banning the users or deleting their comments.

Until recently, courts rejected the first position and were uncertain about the second. As all lawyers know, for the First Amendment to apply, there must be state action. And rarely does a private actor’s power rise to that level. But historical moments—and the nature of emerging threats—matter. As Eric Goldman observes, “We can’t ignore that there is such skepticism towards internet companies’ consolidation of power.” Goldman was focused on antitrust, but the point generalizes. If we combine this skepticism with the Court’s broad language in Packingham, the once off-the-wall theory that these companies should count as state actors for First Amendment purposes starts to look a bit more on the table. And indeed, both the language of Packingham and its public square analogy have made appearances in recent suits by users alleging that social media companies violated their First Amendment rights. More than that, they have already appeared in court opinions addressing those claims. It seems possible that the Court has signaled a willingness to return to an earlier, more capacious reading of the state action doctrine.

The second position concerns whether government officials’ pages on private social media platforms can amount to limited public forums under the First Amendment. While certain cases suggesting an affirmative answer predate Packingham, Packingham has already been used to bolster that conclusion. Most notably, the Knight First Amendment Institute itself has argued, citing Packingham, that Trump’s @realDonaldTrump Twitter account is a designated public forum and that his banishment of seven Twitter users violates their First Amendment rights.

As for the company town or limited public forum analogy, two strands of state action doctrine are worth mentioning here: the first concerns public function, the second entanglement. And we can make out analogies to cases in both.

The classic public function case is Marsh v. Alabama, which involved a company town. As happened not infrequently in the early 1900s, companies would build “towns” and have their workers live and shop within them. Often, companies would invoke their private property rights to prohibit certain individuals, particularly union organizers, from entering the town, calling in the police in the event of any trespass. In Marsh, it was not a union organizer but a Jehovah’s Witness who was arrested for trespass while distributing religious literature on the company-owned sidewalk. The Court held that the company’s actions constituted state action because the entire company town had “all the characteristics of any other American town,” save for the fact that it was privately owned. The company performed a public function, and that meant it could be treated as a state actor for constitutional purposes.

So when it comes to Facebook, Google, and Twitter, what counts as a “public function”? As the history of the state action doctrine attests, the Court has changed its mind on this very issue. In Amalgamated Food Employees Union Local 590 v. Logan Valley Plaza, for instance, the Court held that so long as union picketers used a private shopping center in a manner and purpose “generally consonant” with the use the owners had intended, they could not be banned from it consistent with the First Amendment. In the Court’s view, the shopping center was “clearly the functional equivalent of the business district . . . involved in Marsh.” And “because the shopping center serve[d] as the community business block and [was] freely accessible and open to the people in the area and those passing through, the State [could] not delegate the power, through the use of its trespass laws, wholly to exclude those members of the public wishing to exercise their First Amendment rights on the premises.” If Logan Valley Plaza were still good law, it would seem that the platforms run by Facebook, Google, and Twitter could easily be analogized to the plaza, and users and advertisers would have First Amendment claims against these private companies.

But Logan Valley Plaza was overruled in Hudgens v. NLRB. There, the Court thought itself bound by its earlier decision in Lloyd Corporation v. Tanner, which held that a shopping center did enough to make clear that it was not dedicated to public use, so that members of the public had no First Amendment right to distribute handbills protesting the Vietnam War. In Hudgens, the Court said it was its “institutional duty . . . to follow until changed the law as it now is” and thought the rationale in Logan Valley Plaza could not survive Lloyd. Hudgens re-read Marsh as standing for something narrower: namely, that private entities that are the functional equivalent of a municipality cannot, consistent with the First Amendment, wholly restrict the speech of others on their property.

From these precedents, two questions naturally arise. First, reasoning analogically, we can ask whether platforms such as those run by Facebook, Google, and Twitter are more like municipalities or more like shopping centers. Because I see these platforms as sufficiently different from both (and because I am skeptical of analogical reasoning in this space generally), this framing of the issue strikes me as unattractive. Second, we can ask whether a majority of the current Court is open to finding a public forum well before a company has created the equivalent of an entire town. The language in Packingham supports an affirmative answer.

Again, in Packingham the Court “equate[d] the entirety of the internet with public streets and parks” and declared it “clear [that] cyberspace . . . and social media in particular” are “the most important places (in a spatial sense) for the exchange of views.” It found social media “the modern public square” and suggested it is “[a] fundamental principle of the First Amendment . . . that all persons have access” to it. This might be read as analogizing social media to the company towns of the past. If these spaces are the “modern public square,” they are clearly taking on important government functions.

One might reply—as these companies always do—that users are just a click away from going somewhere else. Two thoughts about this. First, this reply only highlights how open to the public these platforms are. Since Hudgens, when courts have tried to make sense of when private property becomes a public forum, they have found it relevant whether the site has been dedicated to public use. If people can seamlessly move between social media sites, it may be easier to find that these sites have been so dedicated. As with walking into a park or entering a shopping mall, you agree to follow some basic rules upon entry, but overall such barriers are low. The emphasis that leading social media companies placed on openness and non-bias in their recent congressional testimony buttresses this point. Second, such freedom of online movement would exist only if the costs of switching platforms were zero or close to it. But we (and they) know that this is not true, given, among other things, network effects, switching costs, and first-mover advantages. Moreover, as the more analogically inclined have put it, even if you do switch, it tends to be a move from one online feudal lord (such as Google) to another (such as Facebook). Like moving from company town to company town, moving from one online feudal lord to another does not obviously diminish the sense in which either engages in the functional equivalent of state action.

A separate strand of cases within the “murky waters of the state action doctrine” concerns government entanglement. This is considered the “category of exceptions that has produced—and continues to produce—the most confusion.” Given this, how the Court will evolve the doctrine going forward is anybody’s guess. With that said, and putting aside cases concerning state action via judicial enforcement of private contractual agreements (Shelley v. Kraemer being the apex of this), the Court has previously found state action when “[t]he State so far insinuated itself into a position of interdependence with” a private non-state actor “that it must be recognized as a joint participant in the challenged activity.” Relatedly, in Evans v. Newton the Court said that “[c]onduct that is formally ‘private’ may become so entwined with governmental policies or so impregnated with a governmental character as to become subject to the constitutional limitations placed upon state action.”

The government-like character of the leading tech companies has been acknowledged by the companies themselves. Almost a decade ago, Zuckerberg opined, “In a lot of ways Facebook is more like a government than a traditional company. We have this large community of people, and more than other technology companies we’re really setting policies.” But governments also hold substantial power over these companies, often in ways invisible to the public. Take government “requests” for data, made without judicial oversight. It isn’t hard to see what is technically a private decision by companies like Facebook (to hand over user data to the government) as so entwined with the government that finding state action would be reasonable. Or take the pervasive—and, in most of academia, deeply underappreciated—informal pressures that governments put on these platforms to regulate certain content, a technique sometimes called “jawboning.” The recent congressional hearings and various letters from congressional committees to these companies underscore how responsive these companies are to the concerns and recommendations of U.S. government officials, even where the government’s legal authority to demand such responsiveness is unclear. If members of the public were more aware of all the ways that the U.S. government works with and makes “requests” of these companies, I suspect findings of state action would be more forthcoming.

The Takeaway

As with the editorial analogy, other proposed analogies highlight certain facts while obscuring others. Yet all these analogies have prima facie purchase. When it comes to programs that organize, rank, and transmit third-party communication to users, some of what they do is similar, in some respects, to some of what publishers or editors do; some of what they do is similar, in some respects, to what fiduciaries do; some of their functions are similar, in some respects, to what shopping malls and law schools do; and some of what they do makes them look analogous to public squares or to state actors. The question on which everything hinges is this: Which similarities and dissimilarities are the ones that matter from the point of view of free speech principles?

In the First Amendment context, to invoke the compelled speech doctrine and cite Tornillo as the relevant precedent based on the mere fact that both search engines and newspapers rank and organize content is to beg this question instead of properly addressing it. In asking which similarities and dissimilarities matter from the perspective of free speech principles, we are posing a question whose answer cannot but reside in normative considerations. Analogical methods that respond to questions of free speech coverage by noting similarities between different types of communication, without examining these underlying normative concerns, are at best limited and at worst misleading. The limits of analogical reasoning help explain why some find the concept of “similarity” nearly useless. Indeed, the very use of analogical reasoning in law remains contested, with some finding it the “cornerstone of common law reasoning” and others seeing it as “mere window-dressing, without normative force.” As I have suggested elsewhere, if analogical reasoning is to be useful at all, we may need to distinguish between types of analogy and recognize the limited value of each.

The above point is focused on the threshold question of First Amendment coverage. There also remains an enormous amount of uncertainty about how these different framings, if adopted, would play out in practice. Take the fiduciary analogy. To whom these companies would owe fiduciary obligations is far less clear than some acknowledge. Even among domestic users, interests will conflict, as we see in debates over these companies’ hate speech policies and on university campuses, where the need for open debate runs up against the need for safe spaces. Similarly, while finding these companies analogous to public squares or company towns might be straightforward in some respects, it is worth noting that neither government officials nor a majority of users seem to want these companies confined by the First Amendment. Returning to hate speech: it remains protected under the First Amendment, yet there has been a steady stream of controversies over these platforms’ failure to remove hate speech and the users who engage in it. Users expect a level of content moderation that a platform constrained by the First Amendment likely could not provide. Even more than this, applying the First Amendment would likely render each of these companies’ community standards unconstitutional. If the state can’t eject you from the public square for saying something, these companies couldn’t either.

If the First Amendment rights of users were deployed to overturn content moderation as we know it, I suspect these platforms would witness a mass exodus. If I may analogize a bit myself, there is something to be said for the Nintendo way, where systems are more closed and curated. Such systems often end up creating more value for users (and persisting longer) than alternatives like Sega or MySpace, which try to be too many things to too many people at once, with minimal quality control. If the First Amendment really did apply to today’s tech giants, it’s not clear to me that they could avoid the latter’s fate.

Normative Beginnings

Instead of focusing on plausible analogies, we need to think through the normative theories undergirding the free speech principle and which of them, singular or plural, we want to privilege when making First Amendment coverage determinations. Here I will only mention two major contenders—democratic participation theory and thinker-based theory—and leave it to readers to decide whether these theories or others are what ought to be privileged at this historical moment.

Democratic ideals are invoked by many influential First Amendment scholars to explain and defend U.S. free speech doctrine. Building on this tradition, the democratic participation theory of free speech says that speech must be protected in order to ensure “the opportunity for individuals to participate in the speech by which we govern ourselves.” How do we decide what counts as “speech” using democratic participation as our normative reference point? We cannot construe the ideal too broadly, such that all parts of social life are part of the project of self-government, for in encompassing everything, the ideal would prioritize nothing. Instead, the ideal of democratic participation requires us to conceptually divide society into two domains: public life, where we act as citizens cooperating in collective self-governance, and private life, where we act independently in the service of our own projects. For free speech principles grounded in democratic participation, “speech” denotes whatever forms of communication are integral to collective self-governance. Of course, there will be complications at the margins, but the basic implications of the democratic participation theory are discernible all the same. Free speech principles are not meant to immunize all communication against legitimate regulatory aims. They are meant to support the project of collective self-government by safeguarding the communicative conduct that is essential to that project’s realization.

With those clarifications in place, the pertinent question for our purposes is this: Which sorts of ostensible “speech”—algorithmic outputs in the form of rankings, listing decisions, trending topics, and so on—help the project of democratic self-government, and which do not? At this moment, we can certainly appreciate how troll armies, fake accounts, and bots can be anathema to that project. The economic decisions that companies like Google make in determining which ads to run, or whether to privilege their own products over rivals like Yelp and TripAdvisor, are, as I said, commercial and need not be seen as worth protecting as “speech” for the sake of democratic self-governance, at least not across the board. That is not to say that these decisions should necessarily be regulated, but rather to show why, under democratic participation theory, they could be without running afoul of the First Amendment.

The “thinker-based” theory, recently developed by Seana Shiffrin, identifies “the individual agent’s interest in the protection of the free development and operation of her mind” as the normative keystone of free speech. Whereas other theories situate the value of the thinker in relation to extrinsic ideals or desiderata, this theory identifies a direct and non-contingent link between the value of mental autonomy and the justification for the protected status of communicative conduct. Again, however, not all communication is privileged under such a theory. If we prioritize the “fundamental function of allowing an agent to transmit . . . the contents of her mind to others and to externalize her mental content,” then we will need special protections for people sharing all of this “content” with others. This is part of what makes Shiffrin’s theory distinctive: the expression of thoughts about politics and government does not occupy an exalted position relative to the expression of thoughts about everyday life. But crucially, what is especially protected on this theory is not communication as such but the communication of the thought of individuals. And this will tend to assign a less privileged status to much commercial communication. So when we revisit our key question—whether programs that synthesize, organize, rank, and transmit third-party communication to users are implicated in “the fundamental function of allowing an agent to transmit the contents of her mind to others”—the diagnosis is mixed, as in the previous case.

One interesting consequence of the thinker-based theory is that, unlike the democratic participation theory, it suggests that facilitation of everyday online chatter by search engines and social networks may be as much a part of the case for protecting (some of) their operations as their role in facilitating political discourse. But as with the democratic participation theory, much of what these programs do—including running ads and allowing for the creation of bot armies and the spread of fake and inflammatory news—will likely fall outside the scope of free speech coverage by the lights of this normative approach.

Concluding Thoughts

In debates over tech companies and free speech coverage, neither the gravity of the policy stakes nor the complexity of the things being compared has dampened the willingness of courts and scholars to use tenuous analogies in charting the way forward. Nearly everyone seems to agree that search engines and social media platforms should be covered by free press principles if, and to the extent that, the reasons underlying our protection of the press apply to them. But the point of this paper is that casual analogical methods—observing that both types of things “convey a wide range of information” or “rank and organize content”—cannot tell us whether or to what extent they do. There are multiple plausible analogies on offer, each with different First Amendment implications, and none tells us whether the normative considerations underlying free speech coverage for the one apply to the other. And if those normative considerations are inapplicable, the reason to extend coverage disappears.

 


+ The following borrows from and builds on prior work, including Heather Whitney, Does the Packingham Ruling Presage Greater Government Control over Search Results? Or Less?, Tech. & Marketing L. Blog (June 22, 2017), http://blog.ericgoldman.org/archives/2017/06/does-the-packingham-ruling-presage-greater-government-control-over-search-results-or-less-guest-blog-post.htm; and Heather M. Whitney & Robert Mark Simpson, Search Engines, Free Speech Coverage, and the Limits of Analogical Reasoning, in Free Speech in the Digital Age (Susan Brison & Kath Gelber eds., forthcoming 2018). For helpful feedback, my sincerest thanks to Adam Shmarya Lovett, Chris Franco, Daniel Viehoff, David Pozen, Eric Goldman, Jameel Jaffer, Jane Friedman, Katie Fallow, Neil Martin, Robert Hopkins, and Robert Mark Simpson. Additional thanks to David Pozen, who also served as editor for this paper, and to Knight First Amendment Institute interns Joseph Catalanotto and Sam Matthews for editorial assistance.

© 2018, Heather Whitney.

 

Cite as: Heather Whitney, Search Engines, Social Media, and the Editorial Analogy, 18-01 Knight First Amend. Inst. (Feb. 27, 2018), https://knightcolumbia.org/content/search-engines-social-media-and-editorial-analogy [https://perma.cc/Q2U5-DU3X].