Despite the depth, breadth, nuance, and complexity of First Amendment jurisprudence, the scope of the government’s ability to regulate election-related misinformation remains murky. That uncertainty has undermined our ability to tackle the challenges of harmful political speech. On such a fragile legal foundation, the regulations that have developed to govern electoral speech are confusing and riddled with omissions. The federal government, states, tech platforms, and individual users all have a role to play in addressing electoral harms, but it remains unclear who bears primary responsibility for regulating electoral speech and how far their authority to regulate it extends.

Basic questions are unsettled. For instance, is it a crime to intentionally mislead voters about the day of an election? In several states, it is. In others, it isn’t. It might be a violation of federal law, but only in some circumstances, such as when the deception is carried out “under color of law” or when it targets certain protected categories of voters. Some platforms prohibit it; others do not.

It’s not only hard to know which law prohibits which speech; it’s also difficult to delineate which types of electoral speech restrictions might survive a First Amendment challenge and which wouldn’t. Content-based restrictions are subject to strict scrutiny, yet despite that “exacting” review, the Supreme Court has upheld some speech-restrictive voter protection laws. Statutes in states like Virginia that prohibit certain deceptive practices, such as communicating “false information” about “the date, time, and place of the election, or the voter’s precinct, polling place, or voter registration status,” remain on the books.

Efforts to clarify this area of law have been dogged by the same confusion. In 2006, then-Sen. Barack Obama introduced a bill to make it a federal crime to “knowingly deceive” a person about key election information, including the “time, place, or manner” of the election. The ACLU originally supported the bill without raising constitutional concerns, but it has since criticized the bill’s provisions on false candidate endorsements, arguing that they may impose an overbroad restriction on constitutionally protected false speech.

In the absence of a clear regulatory regime governing election speech, tech platforms have struggled to regulate it on their own, contorting themselves to respond to criticism. In the lead-up to the 2020 election, many platforms changed their terms to prohibit certain types of election-related misinformation; some banned political ads outright, while others limited political ads without banning them entirely.

The impact of these interventions remains unclear, but preliminary data suggests they have been counterproductive. In a paper I authored with Duke University’s Scott Brennen, we found that platforms’ restrictions on political advertising likely did little to curb misinformation while disproportionately harming poorer campaigns relative to wealthier ones, and Democratic campaigns relative to Republican ones.

Despite the historic interventions in electoral speech by tech platforms in 2020, we are headed toward the 2022 midterm elections with little information about which measures worked and which ones didn’t. Congress and the Federal Election Commission have largely watched from the sidelines, seemingly content to hold hearings and give speeches about the problem of election lies without passing new laws to modernize election law for the digital age.

The responsibility for governing election speech shouldn’t be left entirely to platforms, which are likely to shift back and forth in response to pressure from politicians, the media, and shareholders. If we’re serious about addressing election misinformation, the federal government must act.

Congress should pass a federal law criminalizing deceptive practices in voting. The legislation Sen. Obama introduced in 2006 has since been incorporated into the For the People Act, which the House passed but which remains stalled in the Senate. If Congress can’t pass that watershed reform, it could still salvage the deceptive practices provisions and pass those on their own. Doing so would have broad benefits: deterring voter suppression, taking advantage of an exception to Section 230 that would enable law enforcement to prosecute platforms that violate the law, and making it possible for platforms to work collaboratively with law enforcement to investigate cases of voter suppression.

If Congress passed a deceptive practices law, it would inevitably be challenged, and courts would have an opportunity to provide more clarity on the constitutionality of government restrictions on election misinformation. The government’s interest in regulating this speech has evolved as new communication technologies have become increasingly prevalent, and courts must take account of these new realities.

Even though electoral speech is the foundation of a democratic system of government, our laws do not provide policymakers, tech platforms, or users with sufficient guidance on the bounds of electoral speech protections. We are only a year away from our next midterm elections and still have not learned important lessons from our last election cycle. To establish sensible rules on electoral speech in a digital age, we need Congress to act.