From Google to Facebook to Amazon, tech giants now both dominate our economic landscape and structure the public sphere itself. That influence and control is felt everywhere, from the way these firms mediate the flow of information and public discourse on Facebook or YouTube to how they shape physical and public infrastructure through “smart city” initiatives that spread Wi-Fi and data collection capabilities through parks, sidewalks, and even doorbells. As the presence of these platform firms is felt more widely, we have also begun a public reckoning with the power that these firms possess.

While much of the discussion around the online public sphere has centered on questions of content moderation and speech, there is now growing interest in responding to the concentrated power of platforms through a renewal of antimonopoly tools. Antimonopoly tools include separation by size, separation by function where there is a conflict of interest, separation by market share, laws requiring interoperability, laws prohibiting predatory pricing, and laws prohibiting tying contracts. These tools have historically been key to ensuring a democratic public sphere. But while these strategies are critical, antimonopoly regulation has historically also relied on an additional set of tools: public utility regulation, the focus of this paper. Public utility regulation has been an essential complement to antitrust and breakup strategies, as governments have in the past used it to enforce critical public obligations such as common carriage, nondiscrimination, interoperability, and fair pricing. We argue in this paper that information platforms like Facebook, Google, and Amazon should be viewed as essential infrastructure and regulated as public utilities. This public utility regulatory approach is a critical complement to the antimonopoly tools that scholars have proposed in the context of tech platforms, and it is essential to assuring that information platforms serve their critical function as the bedrock of a democratic public sphere.

The tradition of public utility regulation—one of the key tools in the antimonopoly tradition—offers a critical foundation for imagining a modern approach to governing speech and assuring a vibrant democracy in the digital age. Historically, the public utility tradition has animated much of the development of the modern regulatory state. From water to electricity to telecommunications, 19th century reformers used public utility oversight—which includes mandates for fair treatment, common carriage, and nondiscrimination as well as limits on extractive pricing and constraints on utility business models—as a key method to restrain the dangers of private power over critical shared infrastructure. Indeed, Jay Gould’s control over the flow of information and the telegraph was as much a driving force behind 19th century antimonopoly reforms as the more famous concern with economic concentration among robber barons like J. P. Morgan and Cornelius Vanderbilt. Similarly, the creation of the Federal Communications Commission (FCC) should be understood as part of this antimonopoly tradition, establishing public regulatory oversight over communications infrastructure while also explicitly giving the public a degree of democratic voice and control over the governance of that communications infrastructure. 

Public utility regulation is, in our view, consistent with and complementary to policies that might require divestiture to create more competition (as in a requirement that Facebook divest Instagram) or separate platform and commerce (as in a requirement that Google divest from Google Shopping and Google Maps). Indeed, even if the platform companies are broken up into component parts and we ensure greater competition within each market, we would still see a value in some degree of consolidation in key communications tools. The particular way in which the major players in communications infrastructure have structured their business models creates unique, novel dangers that must be addressed if we want to ensure that the public sphere remains healthy, safe, and free.

In the context of today’s information platforms, public utility concepts are of critical importance. At the heart of public utility regulation is the need to redress the power imbalance that stems from private control of essential infrastructure: If a private actor controls the terms of access to a critical good or service upon which the public depends for flourishing, that places the private actor in a position of arbitrary, dominating power over the public. Given this imbalance, regulations are needed to ensure that this infrastructure serves the public’s needs rather than being turned to exploitative or exclusionary uses for private profit. Some public utility regulations directly assure that public obligations and needs are met. Common carriage requirements, for example, protected against discrimination by railroads, which were essential transportation infrastructure. Similarly, interoperability requirements, such as those imposed on competing telecommunications companies, help ensure that customers are not locked into a private provider’s closed ecosystem and limit the coercive dangers of privately owned infrastructure. Other public utility regulations—such as price regulations and limitations on harmful or extractive business practices—in effect alter the basic business model and monetization strategies that private providers employ in the first place.

In this paper, we focus in particular on this last form of public utility regulation: targeting the business models of the information platforms themselves. Specifically, we argue for a rule that no essential infrastructure should be surveillance-based or funded by targeted ads. Privately run communications infrastructure in the hands of tech companies currently produces public bads: addiction, monetization, and surveillance. It does not have to be that way. We argue that by banning this dangerous business practice, regulators can alter these incentives and steer information platforms toward the public good.

The rest of the paper proceeds as follows. In Part I of this essay we describe what we see as the public sphere takeover and highlight the particular pathologies that arise from a privatized and digitized informational infrastructure. We then explain why targeted advertising is the key driver of these pathologies and how public utility regulation can directly target and shift these incentives. In Part II we explain how our proposal fits into the public utility tradition and the related, broader antimonopoly tradition. We also suggest that these structuralist modes of regulation, drawing on the antimonopoly toolkit, offer a way out of the current impasse over attempts to regulate information platforms that focus more narrowly on speech and content moderation.

The Big Tech Public Sphere—and Its Pathologies

Two Forms of Public Sphere Takeover

The tech giants’ control of the public sphere presents itself in two ways. First, these companies are becoming the backbone infrastructure of all communication, structuring the rules and context in which debate, discussion, development of ideas, contestation, and organizing happen. Two-thirds of Americans get news on social media, most prominently on Facebook and YouTube (which is owned by Google). Public debate around issues of national and local importance, including national policy, local policy, and elections at all levels, happens on the tech platforms. Americans spend nearly two hours a day on Facebook and Facebook-owned Instagram and use Messenger and Gmail for messaging, Google Search for information, and YouTube for news and opinion. Millions of people are members of Facebook groups, where they come together to discuss shared interests and plan activities.

As such, these companies have replaced, or are replacing, the role traditionally played by different communications infrastructure, including the post office, the telephone company, and the public square. The post office and telegraph and telephone services allowed for one-to-one conversations and debates, while sidewalks, streets, and community events enabled multiple people to come together and debate. Some parts of the pre-2010 public sphere were publicly run. Other parts were privately run but were distributed among many different actors: Examples include telephone companies, television stations, and cable companies. Because media organizations played a critical role in a vibrant public sphere, the FCC prohibited newspapers and radio and TV stations from having the same owners. Each subset of the public sphere was either directly publicly governed (streets), highly regulated (phone companies), or subject to decentralization principles (media). Now two companies—Google and Facebook—perform many of these functions, but they are neither decentralized nor subject to heavy regulation.

At the same time, tech giants are taking over the public sphere in a second way. They are entering into bargains with cities, states, and the federal government to provide, partially or fully, public goods in exchange for access to data. Google in particular has begun to take over the provision and funding of indisputably public infrastructure. For instance, Google is currently building a project in Toronto under the auspices of its Sidewalk Labs division with plans to directly provide public services such as security, transportation, and public energy systems management. In New York City, a Google-backed project is providing free Wi-Fi and plug-in stations in the form of thousands of publicly provided kiosks around the city. Google education tools are becoming so integrated into schools that 80 million educators and children around the world use them. Amazon has become the dominant cloud computing service used by city, state, and federal governments. In this second form of takeover, tech companies are entangling themselves in existing governmental systems, sometimes for a fee, and sometimes for free.

These two strategies are related; left unchecked, they have a natural telos: the private, monopolistic control of all aspects of the public sphere. What that telos means, however, is unclear. In its current form, the way in which Facebook and Google construct the public sphere has largely to do with prioritizing certain information and modes of communication and deprioritizing others. Facebook, for instance, makes centralized architectural choices that shape whether you are more likely to see video or static information, whether you are more likely to see information posted by friends or by pages, which of your friends’ posts you are more likely to see, what kinds of advertisements you are more likely to see, and which news stories you are more likely to be fed. Facebook chooses whether you are likely to see news that makes you happy or sad, lonely or connected, and whether pages with more organizing asks are prioritized or deprioritized over pages that make more passive demands. 

Most recently, Facebook’s control over the architecture of the public sphere was brought into sharp focus by its announcement that it was changing its policy on fact-checking political ads. That announcement was far more impactful than any recent Federal Election Commission ruling; it had a direct regulatory effect. It immediately changed the behavior of would-be advertisers, existing candidates, and political parties, both as they decided how to respond to the rule and as they considered how they would respond if others took advantage of it.

These two functions—communications and urban infrastructure—represent critical public functions that are now increasingly controlled by tech companies. This degree of private control already introduces a power imbalance that is on its face problematic, even if we trust companies to act benevolently. But this control is rendered even more troubling given the particular profit motive and business model through which firms like Facebook and Google monetize and commodify their control over information flows: the use of targeted ads fueled by massive amounts of data collection.  

Five Features of the Ad-Based Public Sphere

On the surface, firms like Facebook, Google, and Amazon have portrayed their role as a publicly minded one, working in concert with journalism, pop culture, and cities to facilitate a new robust public sphere and “smarter” forms of public infrastructure. But the business model that underlies these professions of goodwill raises deep concerns about the degree to which these firms are in fact serving the public good.

Much about the way these information platforms operate is still murky: Unlike the blueprint for the building of a bridge, the platform schema is not open to the public, and it is constantly being re-architected. We know that the choices these companies make can have a mass impact on our moods, on our elections, on how we treat each other, and on what we think and the order in which we think it. But setting aside the unknown unknowns, there are several undisputed features of the public sphere as it currently operates that are particularly problematic, features that will persist even if the companies are broken up.

First, the price of entry is surveillance. The scope and granularity of the surveillance may vary, but both Google and Facebook make the bulk of their revenue from advertising and rely on closely tracking the individual choices their users make on the platform (and off it as well). There is no way to use Facebook or Google without granting the company some degree of knowledge about you, and the level of tracking under the platforms’ default user settings is very high. Second, the companies are incentivized to get users to spend as much time on their sites as possible, promoting the public bad of addiction, or something like it, as a means to collect more ad revenue. Third, content that is highly inflammatory, and thus more likely to be shared, is prioritized by the platforms’ ranking algorithms. More inflammatory content creates more engagement, which means more time on the site, and more time on the site means increased ad revenue as well as an increased ability to track data and raise revenue even more. Fourth, none of the rules of the public sphere are stable. Facebook can change the rules at any time, as it did with its policies on fact-checking ads, without any process. Even if it wanted to bind itself to stable rules, there is no contract mechanism to prevent those rules from being changed for financial reasons—or on a whim. And finally, because these tech companies rely on targeted advertising, their public spaces treat each person differently, creating the illusion of a shared space but the fact of a fissured one.

These elements of the information platforms—surveillance, time maximization, inflammatory content, instability, and individualized treatment—are antithetical to what a public sphere should be. A public space that enforced bourgeois civility, as a rule, would directly violate basic free speech principles by shutting out some of the more provocative encounters. But a public space that does the opposite—actively promotes the least civil, the most untrue, the most angry and divisive, while engaging in extensive surveillance readily weaponized to police or manipulate users—is not neutral. It is designed to destroy the thing it appears to create: a vibrant public arena in which democratic constituents can debate, participate, and coexist. 

Imagine if the primary business model of the country’s libraries depended upon extensive, ongoing surveillance because of a targeted ad-based revenue stream. The data gathered through that surveillance might be sufficient to fund the acquisition of books, the provision of media services, the salaries of librarians, and the provision of services like connections to federal programs for job assistance. Despite this apparent bargain, many of us would worry about the speech and autonomy implications of the model. We would also expect that the business model would drive the architecture of the service: The library would have an incentive to keep us there as long as possible and to set up listening devices that maximized its ability to plumb the data of its users. Citizens, knowing that their every turn of a book page was being tracked by the government, might be more wary of reading pamphlets critical of that government. Libraries, knowing that they were making more money off of inflammatory content, would place the most polemical and outrageous books out front, displacing rigorous history and fiction. The business model would eventually shape everything about the physical layout of the library and the rules of the space (more shouting, less whispering). Crucially, it would shape the kinds of conversations that take place not just at the library but also outside on the steps and in the home.

This example, seemingly fanciful, is in fact a good descriptor of Facebook as it currently exists and a not-so-fantastical Google vision of a library. The engine at the root of these harms (surveillance, time maximization, inflammatory content, instability, and individualized treatment) is the monetization of data collected by information platforms through the system of paid, targeted ads. Any legal regime that seeks to remedy the public bads arising from a digital public sphere must therefore find some way to alter the basic incentives generated by an ad-based revenue model. This is the focus of the next section.

Information Platforms as Public Utilities: Targeting the Ad-Based Business Model

Public Bads and Targeted Ads

There are essentially five ways to pay for the public sphere: fines, taxes, fees, general ads, and targeted ads. Taxes are borne by the general public, fines are paid by a subset of the public, and fees are paid by the subset of the public that uses the service. In the first three cases, the costs are clear, and the revenue streams don’t distort the incentives shaping how the service is provided.

Traditionally, the public sphere was funded through taxes (sidewalks, parks, schools, fora, post offices) and fees (buses, post offices). However, one important contributor to the public sphere was privately funded, through ads: the penny press. The New York Sun created a sensation when it launched in 1833, charging only one cent per paper, because it was able to subsidize production with ads sold to merchants. The paper’s motto, printed on every edition, declared its two goals: to “lay before the public ... the news of the day” at an affordable price and to “offer an advantageous medium for advertisements.” The Sun spawned imitators, and for over a hundred years, news organizations (especially those serving working-class readers) relied on the ad business model.

Google and Facebook may look nothing like the Sun, but their revenue comes from the same place: ads. Their business model is a direct outgrowth of the Sun’s; the vast majority of their business flows from digital ads, and they control the market in that area. There are three particularly notable differences between Google/Facebook and the Sun, however. First, unlike the Sun, Google and Facebook are ad-based but face none of the constraints that limit the worst tendencies of ad-based publications. The Sun, as a news publication, had to stay within the lines of libel and defamation law. Because of Section 230 of the Communications Decency Act, the tech platforms are not liable for falsehoods on their pages or for the use of their pages to share illegal advertisements more generally. Second, unlike the Sun, Google and Facebook use ads that are targeted to individual users instead of ads generally appropriate to the readership. This means that they have a strong incentive to gather sensitive information on their users. Third, unlike the Sun, Google and Facebook have become essential infrastructure, unavoidable to anyone who wants to participate in modern markets and social life. Jobs, birthdays, and protests are all shared on their platforms. Granted, in their heyday, some local papers held a local monopoly or took part in a duopoly, making them the only place to gather local news, but the scope of their power was far more limited.

Of course, newspapers are not the only ad-based elements of the public sphere. Schools, buses, highways, and cities have all rented space for nontargeted ads to supplement their revenue. But until now, no other actor has relied so heavily on ads or used them as anything but a supplemental source of revenue. In other words, the ad revenue model has historically operated at the margins of the provision of public goods; only recently have targeted ads in their modern form, with the specific pathologies they bring, become the foundation of public infrastructure. Indeed, these platforms are unique in how they combine the ability to extract fine-grained data on users with the ability to actively shape the flow of information for their own profitability.

Thus, while the idea of an ad-based business model is not new, the way in which this business model has infected and driven the pathologies of today’s increasingly digital and private informational infrastructure is distinctive. But while this dynamic is new, we can find a remedy by adapting the historical tools of public utility regulation: banning targeted ads themselves. Given how central targeted ads are to the business model and behavior of information platforms, and given how harvesting data and designing algorithms optimized for virality fuel the public bads noted above, this intervention would radically alter how these platforms operate and prevent many of those harms.

This kind of regulation represents a fairly straightforward adaptation of public utility regulatory tools. As described earlier, public utility regulation is a part of the antimonopoly toolkit, involving measures like common carriage and nondiscrimination. Indeed, public utility concepts are well established in free speech and free press theory and discourse, and these kinds of antimonopoly laws have been used to protect meaningful political debate and public spaces since the founding of the country, when America faced concentrated control over communications platforms under the British Crown. Prior to the American Revolution, the Crown postmaster refused to deliver newspapers sympathetic to the revolutionary cause, and publishers had to overcome huge barriers to share newspapers and media at a critical moment in the continent’s history. That history was very much on the American mind when Congress passed the Postal Service Act in 1792 and made sure that the post office was openly accessible to all and did not discriminate among kinds of content. During the debate over whether to have a flat rate for all newspapers, Congressman Shearjashub Bourne of Massachusetts asserted that the newspapers “ought to come to the subscribers in all parts of the Union on the same terms.” Senator Elbridge Gerry, also from Massachusetts, argued, “However firmly liberty may be established in any country, it cannot long subsist if the channels of information be stopped.” During the Civil War, Western Union built up control over the telegraph trunk lines across the country, eventually buying up more than five hundred rivals to achieve near-monopolistic dominance; once it did, it began preferring its own clients and ceased providing universal, nondiscriminatory service.
Congress responded by passing the building blocks of our telecommunications regulatory structure, including the 1866 Telegraph Act, which blocked any private company from gaining monopoly control of the very first electronic medium of communication. The Federal Communications Commission, created in the New Deal era, similarly sought to regulate the more modern systems of radio, broadcast, and telephone. Another key component of public utility regulation has been the restriction of pricing schemes to ensure just and fair pricing. Banning targeted ads falls squarely within the tradition of these regulatory techniques. As with nondiscrimination and common carriage, the ban would place limits on the kinds of practices legally available to information platforms. Like fair pricing requirements, it would alter the revenue-generating strategy of the firms themselves.

Under this kind of regulation, public utility information platforms would, for most people, look a lot like Facebook, Google, and Amazon today. Users could still talk to their friends, share stories, join communities, and make use of “smart city” tools that optimize the user experience of public infrastructure. The big difference is that these uses would no longer be monetized through targeted ads, which in turn would remove the incentive to harvest data through surveillance. Because these firms would remain private companies, they would still need to generate revenue, but a ban on targeted ads would push them toward a mode more consistent with public-serving infrastructure: fees. Those fees would likely be modest—and would still yield fairly robust profits for information platforms. Facebook currently boasts about 2.2 billion users and an annual revenue of over $55 billion, an effective revenue of $25 per user. No doubt there are more complex wrinkles in assessing the precise value of individual users to these companies, but that is nevertheless a surprisingly (and sadly) small amount given the kinds of public bads generated by the platform as a whole.

Structuralist Regulation and the Antimonopoly Tradition

While we believe that a ban on targeted ads would bring information platforms more into alignment with the goals and values of public informational infrastructure, this ban is not the only way that result might be achieved. Rather, the broader point we emphasize is that information platforms must be regulated through the family of structural antimonopoly tools. This may involve a ban on targeted ads or antitrust measures such as breakup and separation. It might also involve measures such as a data tax that shifts business incentives by directly taxing data mining. The key, for our purposes, is that any response to the problems of private informational infrastructure must be structural: It must alter the fundamental business model and dynamics of the firms themselves. Measures that fail to do so will leave in place the incentives that drive the problematic practices of surveillance, data mining, misinformation, and harmful amplification.

Indeed, one of the limitations of the current debate over information platforms is its orientation toward what we might consider “managerialist” solutions that rely on private self-regulation and content moderation by the platforms themselves, of the sort Facebook is proposing with its new “supreme court” approach. The problem is that this approach is insufficient for addressing the range of bads arising from private control of the digital public sphere. In many ways, the platforms’ ever-increasing size exceeds the capacity of any firm to moderate content at scale. And even where the platforms are capable, the operation of such content moderation has thus far been highly suspect, resting on inconsistent judgments that platform staff make about speech and content and that depart significantly from preferred First Amendment principles. Furthermore, even legal regulations that would impose greater fiduciary duties of care on information platforms—while closer in spirit to public utility regulation and helpful in providing clear boundaries on platform activities—could be of limited impact in addressing the core power and likelihood of abuse by platforms so long as data mining and monetization remain a tantalizing and profitable possibility. By contrast, structuralist regulations from the antimonopoly toolkit, from breakup to public utility restraints, alter the fundamental structure of the firms themselves, shifting business incentives away from problematic practices in the first place and thus precluding the thornier questions of content moderation and oversight.

Furthermore, these structuralist antimonopoly solutions are a critical missing element in many of the recent debates about information platforms and the First Amendment. At the moment, the confluence of First Amendment values and information platforms is highly fraught; platforms have resisted calls for First Amendment requirements on their conduct, while some courts have applied First Amendment principles to government officials’ use of social media accounts. The full application of First Amendment principles might complicate efforts by platforms to control and moderate content in more targeted ways—even as platforms often use the rhetorical shield of the “marketplace of ideas” to resist calls for greater public scrutiny. A better way through this confusing mix of arguments and imperatives might be to view information platforms as a key component of our democratic public sphere—or, better still, to see antimonopoly tools from antitrust to public utility regulation as the key mechanisms by which we structure these platforms to accord with the values of a democratic public sphere.

Indeed, the Supreme Court has consistently recognized the critical role government plays in keeping communications infrastructure in particular free from private concerns that could distort it. In the Sherman Act case Associated Press v. United States, the Associated Press argued that applying the antitrust law to it violated the First Amendment. The Court rejected that argument: “It would be strange indeed however if the grave concern for freedom of the press which prompted adoption of the First Amendment should be read as a command that the government was without power to protect that freedom.” The First Amendment “rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public, that a free press is a condition of a free society.” Indeed, administrative tools such as the powers of the FCC, the National Labor Relations Board, and the Federal Trade Commission have long been a critical, if often overlooked, mechanism through which we have assured a free, fair, and egalitarian public sphere. When Congress imposed must-carry rules on cable providers in the early 1990s, the Court upheld the government’s right to do so, concluding that “assuring that the public has access to a multiplicity of information sources is a governmental purpose of the highest order, for it promotes values central to the First Amendment.” Antimonopoly and public utility regulations of contemporary informational platforms represent a modern version of these “architectural” approaches to assuring First Amendment values of a free, fair, and equal public sphere. The connections between antimonopoly, free speech values, and public utility regulation are important ones to recover, not least because concerns about free speech are likely to be raised as arguments against this very type of utility-inspired regulation of platforms in the first place.


This paper is a first stab at some broad principles, and we hope it leads to a fuller discussion of the use of public utility regulation to govern the business model of big tech. It raises many questions that demand further exploration, including, importantly, which parts of tech conglomerates should be categorized as public utilities and, relatedly, how public utility regulation of the kind we describe interacts with divestiture. That fuller exploration would also compare public utility regulation to nationalization or decentralized municipalization and explain where and why each antimonopoly tool is preferable. We also need more scholarship that engages First Amendment critiques, fleshing out how public utility rules serve, rather than undermine, speech protections.

These questions could not be more urgent. We have always recognized the unique value of public infrastructure in general and of communications infrastructure in particular, which means we must pay special attention to defending and securing these systems: They are a public good critical for democracy. When that communications infrastructure is linked to an ad-based model, it produces public bads, such as addiction, extremism, and a degree of surveillance intensity that verges on the creepy.

Consent, data ownership, and privacy don’t adequately address the problem of the perverse incentives private companies have to manipulate the space in which we talk. Nor do promises of better self-regulation. Breaking up the power structure, on the other hand, is both essential and insufficient. We should use the tools of essential infrastructure and the language of public utility to impose new restrictions on what these information platforms as public utilities can do. What’s at stake is the capacity of people to debate, engage, and contest ideas, free from fears of surveillance and free from the distortions that targeted ad-based business models necessarily impose. Our democratic public sphere requires nothing less.



© 2020, K. Sabeel Rahman and Zephyr Teachout.


Cite as: K. Sabeel Rahman & Zephyr Teachout, From Private Bads to Public Goods: Adapting Public Utility Regulation for Informational Infrastructure, 20-03 Knight First Amend. Inst. (Feb. 4, 2020), [].