Internet giants like Google and Facebook have accumulated unprecedented scale and power, and this has spurred many academic papers strategizing ways for regulators to "fix" them. Heather Whitney's paper, Search Engines, Social Media, and the Editorial Analogy, fits in that genre. The paper questions whether Google and Facebook are best analogized to traditional publishers like newspapers. If the analogy does not fit, the paper suggests, Google and Facebook may not qualify for full, or perhaps any, First Amendment protection. After undermining the "editorial analogy," the paper suggests several potential alternative analogies that might let regulators "fix" Google and Facebook.

I offer two responses to Whitney’s paper. First, the paper’s parsing of analogies doesn’t resolve the constitutional questions, because it is so clear that Google’s and Facebook’s activities qualify for First Amendment protection that no analogies are required. Second, the paper seeks to enable regulation that would be a net loss for all of us.

Google and Facebook Engage in Speech and Press Activities (No Analogy Required)

Whitney’s paper never precisely defines the “editorial analogy” that it deconstructs. This passage from the paper provides one of several examples of the analogy: “Facebook is analogous to a newspaper and . . . its handling of a feature like Trending Topics is analogous to a newspaper’s editorial choices.” This analogy, if apt, should mean that Facebook (and Google) qualify for First Amendment protection just like newspapers do. Whitney’s paper questions this argument in several ways, including by distinguishing Google’s and Facebook’s practices from those of newspapers.

However, Google and Facebook qualify for First Amendment protection without needing to rely on the editorial analogy or any other analogy. The First Amendment expressly protects “freedom of speech [and] of the press,” and Google and Facebook clearly engage in both speech and press activities when they republish third-party content.

The First Amendment's terms "speech" and "press" have many possible definitions, but because the paper focuses on editorial activities, that aspect deserves a closer look. Zeran v. America Online, Inc. provides an illustrative definition. The Zeran court said that "a publisher's traditional editorial functions" include "deciding whether to publish, withdraw, postpone or alter content." That's exactly what Google's search engine and Facebook's newsfeed do.

First, both Google’s search engine and Facebook’s newsfeed decide what third-party content to publish. They implement their publication decisions using automated screens to filter out third-party content that their human editors have deemed unsuitable. Google and Facebook sort and prioritize the remaining third-party content using complex algorithms reflecting human-established editorial decisions, and then they publish the content to their readers.

Second, Google’s search engine and Facebook’s newsfeed frequently withdraw previously published third-party content using a combination of automated removal tools and human decisions.

Even though some of these operations are automated, both Google and Facebook rely on humans to make all publication and withdrawal editorial decisions. For example, Google refines its search results using the feedback of “search quality evaluators” who apply 160 pages of editorial guidelines. Furthermore, Google employees withdraw content from its search database through manual bans or downgrades (such as what happened in the e-ventures case discussed in Whitney’s paper). Facebook relies on over 10,000 “safety and security” human editors to apply extensive and very detailed editorial guidelines to decide whether to keep publishing or withdraw third-party content. And as Whitney’s paper documents, Facebook has a robust editorial procedure for its Trending Topics feature.

Thus, we don’t need to compare Google and Facebook to newspapers, grocery stores, malls, parade organizers, law school career fairs, doctors, or anything else to conclude that the publication and withdrawal of third-party content constitute traditional editorial functions. We can resolve the constitutional question via the plain meaning of the Constitution’s words.

That makes it irrelevant how many times Google and Facebook have publicly disclaimed editorial control over their databases of third-party content. Google and Facebook also routinely say the opposite (which the paper doesn’t recount), and their inconsistent statements reflect the disparate audiences for their messages.

If the Paper Is Right, Where’s the Win?

Let’s assume that the paper’s analysis is right and that, having rejected the editorial analogy, courts decide that Google and Facebook should receive only limited or no First Amendment protection. The paper does not really explore the regulatory implications of this possibility. One of two alternative scenarios would likely occur.

One scenario is that Google and Facebook would be classified as “neutral conduits.” In that case, they would lack discretion to moderate content, so they would have to carry all content, including content from spammers, fraudsters, bots, foreign election manipulators, and so on.

To avoid this unfavorable legal characterization, Google and Facebook might revamp their services to qualify for First Amendment treatment. However, if revamping requires human pre-review of all content and the acceptance of full liability for all published content, many of their existing services would not be tenable. (Imagine Facebook where human editors must review and approve all user status updates before publication, or Google where human editors must review and approve every search listing before incorporating it into the search database.) Otherwise, if they try to function as neutral conduits, Google and Facebook quickly would be overwhelmed by harmful and useless content, and that surely would spur the “mass exodus” of users mentioned by the paper.

An alternative scenario is that regulators would have virtually unlimited discretion to tell Google and Facebook how to run their services. Remember, if the First Amendment does not apply, regulators can impose obligations on Google and Facebook that are motivated by anti-speech, anti-technology, or even authoritarian objectives. The cumulative effect of these newly imposed regulations would likely make Google and Facebook functionally unusable, which would also lead to a “mass exodus” of users.

Thus, without First Amendment protection, it seems to me that the two most likely scenarios both result in the functional destruction of Google and Facebook. I know some might cheer that outcome, but I would not. Google, Facebook, and other user-generated-content services have created enormous social value that enriches our lives many times an hour. Allowing regulators to destroy that value is something we should aggressively resist.

Conclusion

I understand the widespread suspicion and fear of the power held by internet giants like Google and Facebook. I too am skeptical of any institution that has so much power. Plus, Google and Facebook have made many unforced gaffes and errors that erode our trust.

However, these are not good reasons to clip the First Amendment’s wings. The rights to free speech and press are cherished civil liberties, and they have been essential components of our republic’s centuries-long successes. Whatever issues we have with Google and Facebook, the robust application of the First Amendment is almost certainly part of the solution. Whether it is based on the editorial analogy, the plain language of the Constitution, or some other ground, the courts have recognized that fundamental point, and I hope legislators will too.

© 2018, Eric Goldman.

Cite as: Eric Goldman, Of Course the First Amendment Protects Google and Facebook (and It's Not a Close Question), 18-01.a Knight First Amend. Inst. (Feb. 26, 2018), https://knightcolumbia.org/content/course-first-amendment-protects-google-and-facebook-and-its-not-close-question [https://perma.cc/2MJR-6PUU].