The idea of breaking up big tech impresses with its audaciousness. The last time the federal government broke up a successful commercial business was in 1982, when the government entered into a consent decree with AT&T that resulted in the creation of the “Baby Bells.” Almost 40 years later, the episode is still remembered as a high—or low—point for government intervention in the economy, depending on one’s perspective. Were the federal government to do what Senator Elizabeth Warren, among other voices on the left, argues that it should—namely, break up the large internet companies, like Amazon, Facebook, and Google, that today provide the platforms for a tremendous range of economic, social, and political activity in the United States—it would exercise a kind of government power that has very rarely been used in this country. Moreover, the result would undoubtedly represent a significant change in the economic relationships that govern important segments of the U.S. economy. That is, indeed, the goal.

One cannot therefore fault those who propose breaking up big tech for a lack of ambition. If anything, the opposite is true: The proposal is so ambitious that it may be difficult for many to take seriously. This is not to say that one should not take it seriously. As the last few decades have clearly demonstrated, ideas can go from “off the wall” to “on the wall” very quickly, and not only when it comes to constitutional argument. Nevertheless, there is no question that breaking up big tech will be difficult to accomplish, particularly given the significant amounts of money pouring into Washington from the internet companies that are the targets of Warren as well as a host of other thinkers.

It is also the case, however, that merely breaking up big tech—however bold, however significant in its economic repercussions—is very unlikely to solve many of the problems that critics associate with the emergence of what we might call the “platform public sphere.” This is because many of those problems—for example, the problem of what Shoshana Zuboff calls “surveillance capitalism,” the problem of political disinformation, or the problems caused by often-anonymous threatening and harassing speech online—are not ultimately the consequence of economic concentration. And yet it is economic concentration—and for the most part, only economic concentration—that the antitrust tool of divestiture is designed to combat.

This is intentional. It must be remembered that one of the reasons that antitrust has been a favored method of economic regulation in the United States is precisely because it represents a limited intervention into the private sphere. As Daniel Crane notes, a primary motivation for enacting the Sherman Antitrust Act was to diffuse political pressure for more radical and interventionist forms of economic regulation. Antitrust was an alternative to nationalization—to the regulatory mechanisms associated, that is, with socialism and communism. Even in the early 20th century, when the federal government interpreted its antitrust powers much more aggressively than it has in recent decades, the goal of antitrust regulation was not to fundamentally reshape existing market practices but to ensure that markets functioned competitively. Antitrust law consequently has little to say about business practices that are neither designed to be anticompetitive nor likely to have a substantial anticompetitive effect. There is no reason to think that the situation will be any different today.

The result is that, even if the federal government sued Apple or Facebook or Google for antitrust violations, as many have argued that it should, or went so far as to break up Facebook or Google or Amazon along the lines that Warren has suggested, its actions would not directly impact how these or other big tech companies operate their platforms—the rules of access they employ, for example, or the control they exert over speech on their platforms, or their privacy policies. Nor would more aggressive enforcement of the antitrust laws against big tech do anything about what may be the most serious threat to the quality of public discourse in the internet public sphere: namely, the slow-motion destruction of the local news industry that is taking place as advertising dollars that once went to local newspapers and magazines flow instead to Facebook and Google.

Of course, breaking up big tech might indirectly affect at least some of these problems. More competition might, for example, make the big tech companies more sensitive to consumer pressure by giving their users greater bargaining power when they demand changes to the existing rules. At present, mass consumer boycotts of Amazon or Google or Facebook are hard to pull off because consumers have so few replacement options. Were these companies broken up along the lines that Warren has suggested, consumers might be better able to push them to adopt more user-protective privacy rules or to alter how they regulate violent and threatening speech.

Divestiture might also limit the political power of the big tech companies and thereby make it easier for the government to regulate them in other ways. After all, the threat that monopolies pose to the efficient operation of the market is not just an economic but a political threat. Businesses that possess concentrated economic power also tend to possess the political pull to prevent regulations that threaten their profits.

The impact of divestiture on the relative political power of the big tech companies and consumers is likely to be modest, however. This is the case for several reasons. First, network effects—the tendency of users to continue to use a network because so many other people are using that network—may mean that, even when they have a more credible choice between platforms, customers are unwilling to leave Facebook or Google or other dominant platforms. Even if network effects do not entrench the power of the big tech platforms, it may be the case that certain harmful practices—for example, the sale of user data to advertisers—are so fundamental to the business model of companies like Facebook and Google that they remain industry standards even in a less concentrated market. It also is far from obvious that the breakup of big tech will do a great deal to diminish the industry’s political clout. Companies like Facebook, Amazon, and Google would, after all, remain large and profitable players in the internet economy, even if they were to be broken up along the lines that Warren suggests. This also means that the breakup of big tech is unlikely to do much to help the traditional news media outlets. Facebook, even if divested of WhatsApp, will likely remain a much more attractive destination for advertising dollars than the Cleveland Plain Dealer, say.

What all this means for those concerned about the quality of public discourse in the internet age is that the relevant question is not: Should Google and Amazon and Facebook be broken up? The question is instead: What other actions should Congress take to promote the health and vitality of public debate in the platform public sphere? This is not a straightforward question to answer because it requires taking account of not only the costs and benefits of different regulatory tools but also the constraints that the First Amendment imposes on Congress’s legislative power.

One of the great benefits of divestiture as a regulatory tool is that it is almost certainly constitutional. In multiple opinions, the Supreme Court has made clear that legislative efforts to promote economic competition are not only constitutionally permissible but actually protect many of the same values and interests that the Constitution protects. It has consequently tended to take a rather expansive view of the constitutional power that federal and state regulators possess to enforce the antitrust laws, including the power to divest, or break up, anticompetitive monopolies. Although the Court has found some constitutional limits on how broadly the antitrust laws extend—holding, in particular, that those laws cannot be interpreted to prohibit collective efforts to petition the government or to engage in political activism—a law mandating the breakup of big tech would not come anywhere close to those limits. Such a law would impose no constraint on the ability of companies like Google and Facebook to petition the government. Nor would it prevent the big tech companies from engaging in political expression or from saying or not saying anything they liked in public. Instead, like other procompetition regulatory interventions in the marketplace, divestiture would further constitutional values by preventing companies like Google and Facebook from using their economic might to drown out other voices. It seems incredibly unlikely, as a result, that any court would find that Congress lacked the constitutional power to enact divestiture along the lines that Warren and others suggest.

The same is not necessarily true of the other regulatory tools that scholars and policymakers have proposed as a solution to the problems that plague the platform public sphere. This includes Warren’s suggestion that large internet platforms like Amazon and Google should not only be broken up but that services like Google Search should also be required to “meet a standard of fair, reasonable, and nondiscriminatory dealing with users.” Nondiscrimination obligations are a core feature of public utility regulation in the United States. By including a nondiscrimination requirement in her breakup plan, Warren is clearly signaling that she believes the concentrated power of the tech giants needs to be combatted not only by the antimonopoly tool of antitrust but also by the antimonopoly tool of public utility regulation. She is not alone in this belief. In the past few years, a number of scholars and policymakers—many of them participants in this symposium—have suggested that Facebook, Amazon, and Google be treated as public utilities and regulated accordingly.

One can well understand why. After all, public utility laws are designed to protect the public’s right of access to important goods and services—to goods and services that one must have access to if one wishes to participate fully in society. It should be obvious to all by now that the goods and services that platform companies like Google and Facebook provide are goods of this kind. It is perfectly plausible, as a result, to believe that the concerns that justify the imposition of nondiscrimination obligations on railroads and airlines and telephone companies also apply to platform providers like Google and Facebook. Imposing such obligations would certainly advance many of the same interests that breaking up these companies would. Like divestiture, it would help ensure the inclusiveness of the platform public sphere by making it harder for the big tech companies to use their economic power to squelch disfavored voices and viewpoints. Imposing on Facebook and Google and other platform providers a duty of fair, reasonable, and nondiscriminatory dealing would also provide regulators a legal hook they could use to regulate the operation of these companies in all sorts of other ways. The public utility model is, for that reason, a very attractive one to those who believe that the unregulated power of the tech giants poses a real threat not only to public discourse but to democracy more broadly. It is generative, open-ended, and dynamic.

There is, nevertheless, a serious problem with the idea of turning platform companies like Facebook and Google into public utilities and requiring them to provide nondiscriminatory access to consumers: doing so would almost certainly be considered a violation of their First Amendment rights. This is because, unlike the private companies that in the past have been considered public utilities, companies like Google and Facebook engage in pervasive editorial regulation of the speech that flows through their networks. And yet the Supreme Court has, for over 30 years now, held that private property owners who exercise editorial control over speech that takes place on their property cannot ordinarily be required to open that property to speech they dislike.

The Court has only allowed the government to require property owners to open up their property to others’ speech when there is good reason to believe that doing so is necessary to prevent the property owner from exercising bottleneck control over an important medium of communication. One could try to argue that Facebook and Google possess bottleneck power of that sort, but it would be hard to make the argument a convincing one, particularly if these companies get broken up. Even if they don’t, the argument would be a tough sell. In Turner Broadcasting System, Inc. v. FCC—the most recent case in which the Court upheld a forced access law—the Court relied heavily on evidence that, in 99 percent of communities in the United States, the local cable company possessed a total monopoly on the provision of cable service to justify a law requiring cable companies to devote a number of their cable channels to transmitting the content of local broadcast television networks. That the local broadcast television industry was, in significant parts of the country, utterly dependent for its survival upon the willingness of cable companies to carry its programming—and more specifically, upon the fact that cable operators exercised “control over most (if not all) of the television programming that is channeled into the subscriber’s home [and could] thus silence the voice of competing speakers with a mere flick of the switch”—justified, the Court asserted, the constraint the law imposed on their editorial freedom. Facebook and Google may be dominant platforms, but they do not enjoy anywhere close to this level of dominance. There is no switch they can flick that can prevent disfavored speakers from disseminating their message on other, less dominant platforms. Particularly with Justice Kavanaugh on the Court, it is extremely unlikely that any effort to impose a nondiscrimination obligation on platform companies would be upheld against the First Amendment challenge that would be virtually certain to be forthcoming.

The upshot is that neither of the two antimonopoly tools that scholars and policymakers have proposed in recent years as solutions to the economic as well as political and cultural problems created by the rise of big tech will be able to do much to improve the quality of discourse within the platform public sphere. Breaking up big tech may help spur innovation and foster competition, but it won’t do much to alter the conditions under which speech occurs. Imposing a rule of nondiscriminatory access on the big tech companies, meanwhile, would alter the conditions of at least some aspects of the platform public sphere, by making it much more difficult for platforms to kick speakers off of their platform or to deny them access in the first place. But it is almost certainly precluded by the First Amendment, at least as the First Amendment is currently understood.

This doesn’t mean that there is nothing that lawmakers can do to improve the quality of public discourse on the internet or to ensure equitable access to the platform public sphere. But it does mean that they cannot rely on antimonopoly tools to do so. So what is to be done? In the remainder of this essay, I briefly suggest three regulatory interventions that would do more to directly tackle the problems of public discourse in the internet age than the mechanism of divestiture but that would not create the constitutional problems that a nondiscriminatory access rule would create.

Newspaper Subsidies

Perhaps the easiest (although certainly not the cheapest) way that Congress could mitigate the democratic harms created by the economic and cultural dominance of the large platform companies is to subsidize other, more traditional platforms for expression—namely, local newspapers. Local newspapers obviously do not provide the same opportunity for public expression as platforms like Facebook and Google do. But they serve another important public function: They uncover and disseminate information about local events, scandals, and problems. The steady flow of advertising money from local newspapers to the platform companies has contributed, and surely will continue to contribute, to the creation of a public sphere in which there is a great deal of opinion but relatively little fact.

Congress can do something about this by granting sizable monetary subsidies to local newspapers. National newspapers like The New York Times and The Washington Post don’t need subsidizing because they have managed to transition to a subscription-based business model. But local newspapers do need help.

A subsidy to established local news providers would not solve the root problem plaguing local news: namely, the transformation of the advertising industry on which the news industry has long relied for its economic sustenance. But it would help at least slow down the bleeding until a new economic model can be found. Certainly, the history of the newspaper industry in the United States demonstrates how generative federal newspaper subsidies can be. The significant postal subsidies that Congress granted newspapers beginning in the late 18th century produced a country that, by 1820, had both more post offices and more newspapers per capita than any other nation in the world. This in turn fostered a remarkably integrated and dynamic media landscape. The government continued to subsidize mail throughout the early 20th century but, since the 1960s, has significantly decreased the size of the postal subsidies. In 2010, the Federal Trade Commission raised the possibility of increasing the size of the federal press subsidies by millions of dollars but has not moved forward on the idea since then. It easily could.

Subsidies pose no constitutional problem. They do not infringe anyone’s First Amendment rights, so long as they are applied in a viewpoint-neutral manner. And they represent a much better solution to the problems facing the local news media than the solution that large newspaper businesses like News Corp have advocated, which is to grant newspapers a temporary immunity from antitrust laws. Antitrust immunity tends to favor large industry players, for obvious reasons. It would thus do little to help the newspapers that are most at risk in a media landscape dominated by big tech. Targeted subsidies are a much better way to go, even if enacting them might be more politically difficult because of the costs they impose.

As this discussion suggests, federal media policies should take account of the harmful effects that concentrated economic power can have on the public sphere. But this concern need not always make itself felt by means of the classic tools of antimonopoly law.

Privacy Regulation

In addition to subsidizing the traditional news media, Congress could restrict what platform companies do with the information they gather about their users’ browsing, shopping, and searching habits. Limiting the platforms’ ability to store and disseminate the information that they gather about their users would not only promote individual privacy interests; it would also help ensure broad participation in the platform public sphere by preventing those who wish to visit politically unpopular sites or engage in dissident speech or association from being chilled by the fear of surveillance.

There is growing evidence that fears of online surveillance impact the willingness of users to participate in online discourse—particularly on controversial political topics. At least some users of platform services believe that there is enough of a risk of negative consequences from expressing a potentially controversial view to stay quiet. These fears are not irrational. As Zuboff and others have documented, the big tech companies have a close and complicated relationship with the institutions that make up the national security state and frequently share user data with them (not always involuntarily). Recent incidents in which journalists critical of the Trump administration’s immigration policies were stopped and questioned at the border demonstrate vividly how this data can be, and perhaps has been, used not only to investigate national security threats but to target those who criticize the government or express politically unpopular views. This is precisely the kind of state action the First Amendment was enacted to prevent, but current First Amendment doctrine makes it virtually impossible for those chilled by the threat of surveillance to bring a constitutional claim.

There is consequently a good free speech argument, as well as a good privacy argument, for strengthening the (currently very weak) laws that govern how the government acquires and uses this kind of data and when and how the companies can disseminate it. Reforms of this sort could do a lot more than divestiture to ensure that the platform public sphere is robust, diverse, and inclusive.

Nor would strengthened privacy laws cause the kind of First Amendment problems that imposing a nondiscrimination access requirement on the big tech companies would. This is because, as the Court has made clear in numerous opinions, laws that restrict the collection and dissemination of information do not violate the First Amendment when they reasonably further a substantial government interest, when they are employed in a viewpoint-neutral manner, and when the information they restrict relates to private matters—“domestic gossip,” say, or “trade secrets”—rather than to matters of public concern.

This is true notwithstanding the Court’s 2011 decision in Sorrell v. IMS Health, Inc. to strike down a Vermont law that prohibited pharmacies from selling to pharmaceutical marketers information about doctors’ prescribing habits without the doctors’ consent. Some have interpreted the decision in Sorrell to mean that all restrictions on the dissemination and sale of private information will be considered presumptively invalid or close to it. But in fact, Sorrell articulates a much narrower rule, namely that laws that restrict the dissemination of private information in order to target particular kinds of speakers must be closely scrutinized.

What this means for user data is that Congress should relatively easily be able to justify laws that restrict how the big tech companies may use that data and with whom they may share it. Such laws, after all, would clearly further the government’s substantial interests in individual privacy and in freedom of speech. The information they would regulate, meanwhile—information about what websites users search for, what kinds of shoes they like to buy, or who is in their friend network—may possess great commercial significance to advertisers, but in its discrete particularity is unlikely to be of broad “public concern.” Consequently, so long as Congress enacted a general enough privacy law—one that did not, like the law struck down in Sorrell, limit the ability of the big tech companies to disseminate the information they possessed to only certain users—the First Amendment should not constrain its powers.

There is no need, in other words, to rely on corporate self-regulation—or, alternatively, to rely on the law of fiduciary obligations—to protect user privacy and, along with it, the vitality of the platform public sphere. Fears of the First Amendment when it comes to privacy regulation have been greatly overblown, as the cases handed down since Sorrell make quite clear. The First Amendment does make it exceedingly difficult for the government to force businesses to open up their property to speech they dislike, but it does not prevent the government from requiring businesses to protect the privacy of those to whom they voluntarily agree to provide services. Presumably this is because the Court thinks of the former kind of state action as posing a much more serious threat to the operation of the marketplace of ideas than the latter. We may agree or disagree, but what it means, practically, is that Congress has a good deal of power to affect the conditions of discourse on the platform public sphere by enacting viewpoint-neutral privacy laws—laws that give individuals some degree of knowledge and control over the data that the big tech companies possess about them and the uses to which it is put.

Targeted Speech Regulations

Finally, Congress or state legislatures could make the platform public sphere a less threatening or dangerous place by enacting targeted speech regulations that make it either unlawful or expensive for the big tech companies to host threatening or harassing speech on their platforms. Legislatures have, in fact, already done so—as of this writing, the dissemination of nonconsensual pornography is a criminal offense in 46 states and the District of Columbia, and similar legislation has recently been introduced in Congress. Congress also recently narrowed the broad immunity from liability that internet companies possess under Section 230 of the Communications Decency Act for speech that appears on their platforms, excluding from that immunity speech that promotes prostitution or sex trafficking. This revision to the scope of Section 230 may only be the first of many; in a moment when many believe that the big tech companies possess too much power, Section 230 is a popular target for legislative reform.

This kind of targeted speech regulation is of course precisely the kind of thing one might hope to avoid by engaging in more structural or “infrastructural” reform of the internet ecosystem—reforms, like the antimonopoly tools discussed earlier, that reshape the conditions under which speech occurs, rather than target that speech itself. But, as the earlier discussion makes clear, it simply may not be possible for structural reforms to solve all of the problems that plague the platform public sphere. The problems of racial hatred or sexual violence may simply be too pervasive to be solved by the mechanism of competition. Lawmakers who want to prevent (for example) the serious economic or reputational harms that the public circulation of sexually graphic images can cause may therefore have no choice but to target the speech directly. The same is true for those concerned about the problems caused by threats of violence on the internet.

Efforts to regulate speech directly will obviously raise all sorts of First Amendment questions. Laws that restrict speech because of its harmful content are typically considered presumptively invalid under the First Amendment. That presumption is rebuttable, however, if the government can demonstrate a sufficiently compelling reason for the law—and it doesn’t apply at all to unprotected speech like true threats. This explains why courts have long upheld the federal threats statute as applied to threats made on the internet. It also explains why just a few months ago, the Vermont Supreme Court upheld the state’s nonconsensual pornography law against a First Amendment challenge. There is therefore opportunity for legislators to regulate these and other kinds of harmful speech on the internet more intensively than harmful content has been regulated in the past.

This does not mean, of course, that doing so is normatively desirable. That is a far more complicated and context-specific question than can be answered in general—and certainly not in this essay. I will simply note that, when assessing it, policymakers and scholars should keep in mind not merely the benefits and harms of the speech in question but also the particular conditions under which speech on the internet occurs. One of the profound changes that the emergence of the platform public sphere has brought about is a significant democratization of the opportunity to engage in public expression. The result is to increase tremendously the range of speakers—and speech acts—that circulate publicly. This is obviously both a good and a bad thing; it energizes and empowers but it also makes possible all kinds of hateful, harassing, and demeaning speech. It also raises the costs of enforcing any criminal or, for that matter, civil regulation of speech. And it heightens the possibility—present whenever the government regulates speech—that laws intended to remove violent or harassing or derogatory speech from the internet will in fact be used to punish politically unpopular speakers, rather than the worst kinds of speech.

This suggests that whatever targeted speech regulation is enacted should be narrow in its scope, to help ensure that the government’s coercive power is wielded against the worst of the worst rather than against the politically vulnerable. What this means, in turn, is that even targeted regulation of speech will only be able to do so much to improve the conditions of discourse on the internet. It may, however, be the best that regulators can do, absent the kind of cultural and political change that creates and alters speech norms and the conditions of production on the internet.

Conclusion

There is no question that the First Amendment—particularly as it is currently understood—makes regulating the platform public sphere more challenging, even when those regulations seek to do the same thing the First Amendment is supposed to do: namely, create a public sphere that is “uninhibited, robust, and wide-open.” We may not like this aspect of contemporary First Amendment law, but it is unlikely to change any time soon.

What that means is that some regulatory tools—the tool of public utility regulation, for example—may be poorly suited to the challenges of our contemporary moment (challenges that are legal, as well as economic, social, and political). That may not mean we want to give up on them. Perhaps it is First Amendment law that ultimately has to change, and not our regulatory ambitions. Nevertheless, this essay has pointed to some important alternatives that remain available notwithstanding the current, highly deregulatory approach of First Amendment law. It is important to keep them in mind if we want to have both freedom of speech and freedom to regulate.

 


 

© 2020, Genevieve Lakier.

 

Cite as: Genevieve Lakier, The Limits of Antimonopoly Law as a Solution to the Problems of the Platform Public Sphere, 20-08 Knight First Amend. Inst. (Mar. 30, 2020), https://knightcolumbia.org/content/the-limits-of-antimonopoly-as-a-solution-to-the-problems-of-the-platform-public-sphere [https://perma.cc/FQ5C-9V9U].