Section 230 is law for a world of mistakes. Its fundamental premise is that everyone makes mistakes. Platforms make mistakes about which user-generated content is legal; courts make mistakes about how hard platforms are trying to remove illegal content; platforms and victims make mistakes about what courts will decide. Section 230’s rule of blanket immunity absolves platforms of liability for their filtering mistakes, and it keeps everyone away from the inquiries where they are most likely to make legal mistakes.

Because mistakes are central to Section 230, to understand Section 230 policy one must worry about mistakes, who makes them, and when. It is not enough to point to the bad speech that stays online because of Section 230, or to the good speech that platforms also host. One must also ask when these two types of speech might be mistaken for each other. More than anything else, it is this question that should inform discussions about how broad Section 230 should be. In its current form, Section 230 reflects a judgment that in general mistakes about good and bad content are so pervasive that immunity for platforms is justified. Any proposal to create exceptions to Section 230 should be based on a judgment that in a specific setting these mistakes can be reduced to an acceptable level.

To be more precise, Felix Wu persuasively argues that the best theory of Section 230 is collateral censorship: Without immunity, "a (private) intermediary suppresses the speech of others in order to avoid liability that otherwise might be imposed on it as a result of that speech." A platform facing liability will predictably protect itself by removing too much content: It throws the baby out with the bathwater. Section 230 takes away the risk of liability. As a result, platforms will fail to remove some content that is legally actionable: The baby stays, and so does some dirty bathwater. Section 230 reflects a policy judgment that babies are more important than bathwater.

This is an error-costs argument. Its premise is that the mistakes caused by liability are worse than the mistakes caused by immunity. Given the immense scale of the internet and the immense value of the good content on it, there are good reasons to think that the argument is often correct. But the argument can fail in one of two ways. First, where good filtering is possible, liability will not lead to much collateral censorship, because platforms will rarely mistake good content for bad. Second, where bad content dominates good (both in its volume and in the harms it causes), collateral censorship is better than the alternative of leaving the bad content up.
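To put the trade-off in rough quantitative terms (a stylized sketch only, with invented symbols): suppose that liability pushes a platform to take down all of the bad content along with a fraction $\epsilon$ of the good content it mistakes for bad, while immunity leaves the bad content up. Write $N_g$ and $N_b$ for the quantities of good and bad content, $V_g$ for the average value of a unit of good content, and $H_b$ for the average harm of a unit of bad content. Immunity is then the better regime roughly when

\[
\epsilon \, N_g \, V_g \;>\; N_b \, H_b ,
\]

that is, when the good speech lost to collateral censorship is worth more than the harm the bad speech inflicts. The two failure modes are the two ways the inequality can flip: good filtering drives $\epsilon$ toward zero, and bad content that dominates good makes $N_b H_b$ swamp the left-hand side.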

Courts and plaintiffs also make mistakes. If the standard for platforms’ liability is anything other than blanket immunity, courts must decide whether platforms have met that standard, and plaintiffs must predict what courts will decide. Any mistakes will be costly for platforms: Paying judgments in the cases they lose and paying lawyers in the cases they win both get expensive quickly. If those costs get too high, we are back to collateral censorship, because platforms will overfilter rather than take their chances in court. So for any exception to immunity, the standard of liability must be clear enough that platforms on the right side of the line can win their cases cheaply and reliably.

It should be clear by now that I approach the question of whether and how to restrict Section 230 rather differently than Professor Sylvain does. Something like his analysis is necessary to the case for a narrower immunity, but it is not sufficient. He argues clearly and powerfully that what I have bloodlessly referred to as "bad" or "illegal" content is both widely present on major platforms and often quite harmful. Nonconsensual pornography is awful for its victims; some Airbnb hosts discriminate illegally; Facebook enables advertisers to target ads by race.

But Professor Sylvain never quite engages with what I see as the crucial question: How crisply is it possible to define these categories? Instead of talking about filtering, he talks about platforms. He argues that even if platforms were once passive intermediaries relaying user content, today they actively “sort and repurpose the user content and data they collect.” This distinction between active and passive intermediaries is intuitively appealing. But the best normative case for intermediary immunity — collateral censorship — has never really rested on intermediary passivity. Even a "passive" intermediary has still "acted" by providing a platform that is a but-for cause of the harm.

Courts and commentators sometimes talk about platform passivity as a justification for immunity. But this is best understood as a shorthand for the argument that a truly "passive" intermediary typically lacks the knowledge about specific harmful content that it would need to make reliable filtering decisions. For example: A user posts a defamatory screed to YouTube about an ex-spouse's neglectful parenting. The user knows the allegations are false, but this is not something YouTube can know without detailed investigation. It lacks such knowledge not because it was passive rather than active, but because it is missing a crucial piece of information.

Not all activity is created equal. "Sorting," for example, may be automated but not in a way that yields specific knowledge about the meaning of content, let alone whether it is illegal. Sorting on the basis of video length, a user’s Likes of previous ads, or a guest’s past bookings does not tell a platform anything about whether a new video is pornographic, a new ad is hateful, or a new host is rejecting minority guests. Take nonconsensual pornography. The "pornography" half of the definition is probably something that many platforms can detect at scale — certainly this is true for the sort of platforms that let users tag videos by the race of participants and the sexual acts they perform. But the "nonconsensual" half will almost always depend on facts not in evidence, no matter how intensively the platform categorizes videos and analyzes user engagement with them. The active pornography platform and the passive one are equally able (or equally unable) to identify nonconsensual pornography. That the platform filters content along one dimension is only circumstantial evidence that it is capable of filtering along another.

Facebook's ad platform may be different because some of the categories it exposed to advertisers were so transparent. A "Demographic" category for an "affinity" of "African American (US)," for example, is fairly obviously a strong proxy for racial identity, even if the "affinity" is supposedly based only on what links a user has Liked. But even here, things are not always clear cut. Being able to target an ad to a person who has expressed interest in "how to burn jews" sounds damning, but the list of categories this was drawn from was generated algorithmically based on "what users explicitly share with Facebook and what they implicitly convey through their online activity." The difference is subtle but significant. It is likely that no one at Facebook even realized it was offering a "how to burn jews" category until ProPublica reported on it, whereas it seems more likely that Facebook employees knew about a category as large and prominent as "African American (US)" but failed to appreciate its legal and ethical dangers. So now the question becomes, what would Facebook need to do to detect and exclude not just "how to burn jews" but everything else of equal odiousness, and what would the rate of false positives be?

Experience with Facebook’s voluntary attempts to restrict "hate speech" by users does not provide reason for optimism. According to its internal guidelines, "Poor black people should still sit at the back of the bus" is acceptable (it targets only a subset of a protected group), but "White men are assholes" is not (cursing is considered an attack). Speech-hosting platforms have proven repeatedly incompetent at understanding speech in context, unable to distinguish criticism, parody, and reporting from endorsement. This is a double cause for despair. First, even well-intentioned platforms run by supposedly smart and hard-working people blunder again and again in ways that are shameful or worse. And second, any threat of liability that might clean up some of the worst abuses would likely also curtail a good deal of speech trying to counter those abuses.

There is a close and often neglected connection between the proper scope of Section 230 and the underlying substantive law. Section 230 carveouts are easier to justify when the underlying law is clearer. It is no accident that the heartland of Section 230 is defamation: It is a doctrinal swamp where cases often turn on subtle nuances of meaning. And it is no accident that copyright law is exempted from Section 230: Fair use may be messy in the extreme, but the prima facie question of whether a particular piece of content is or is not a nearly identical copy of a particular copyrighted work is something a platform can delegate to a hashing algorithm. Similarly, federal child pornography laws — not subject to Section 230 — are in practice enforced against platforms by asking them to take action only when a hashing algorithm detects an already-known item, or when they acquire specific knowledge about a specific piece of content.
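To see why hash matching is such an attractive task to delegate, here is a minimal sketch of the kind of check described above. It is deliberately simplified: real deployments rely on perceptual hashes such as PhotoDNA so that minor edits do not defeat the match, and the hash list and function names below are invented for illustration rather than taken from any platform's actual system.

```python
import hashlib

# Hypothetical set of hashes of content already identified as illegal.
# In practice these would come from a shared hash list maintained by a
# clearinghouse; the value below is a placeholder, not real data.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block(path: str) -> bool:
    """Flag an upload only if it is byte-for-byte identical to a known item."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```

The point of the sketch is the asymmetry: a check like this is cheap, scalable, and nearly free of false positives, but it works only for content someone else has already identified. It cannot make the contextual judgments that defamation, or the "nonconsensual" half of nonconsensual pornography, requires.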

It is reasonable to ask proponents of a narrower Section 230 to explain not just why it should be narrower but how. I look forward to future work from Professor Sylvain and others that delves into the substantive law of nonconsensual pornography, civil rights violations, harassment, and the other mountains of garbage washing up on the internet’s polluted shores, and that explains in more detail how to distinguish the good from the bad quickly, cheaply, reliably and at scale.


My thanks to Aislinn Black, Eric Goldman, Kate Klonick, David Pozen, and Vitaly Shmatikov for their comments. This essay may be freely reused under the terms of the Creative Commons Attribution 4.0 International license, https://creativecommons.org/licenses/by/4.0.

© 2018, James Grimmelmann. 

 

Cite as: James Grimmelmann, To Err Is Platform, 18-02.a Knight First Amend. Inst. (Apr. 6, 2018), https://knightcolumbia.org/content/err-platform [https://perma.cc/BK3Q-ZPK4].