I promise to avoid getting tied up in knots over the fundamental question of “what is a lie?” A preliminary problem in the U.S., though, is that I don’t think we are yet able, at a societal level, to agree on how to tell a lie from the truth. A majority of the population does seem comfortable concluding that a statement contradicted by empirical evidence is a “lie.” However, large segments of the population apparently do not trust conventional empirics, nor do they trust government processes to sort truth from fiction (e.g., is COVID a hoax?).

So, if we are looking for lie-related issues to worry about at the governmental level, this democratic impasse belongs high on the list. Until we have an accepted method for sorting truth from fiction at a collective level, we are at a standstill. And, more grimly, as long as government and elite processes are not trusted by a sizable part of the population, solutions seem beyond reach.

Fortunately for me, however, these deep questions fall outside my wheelhouse, and I assume in this post that a “lie” is simply a claim refuted by empirical evidence that satisfies conventional tests for robustness.

With that definition in place, what are the most troubling government lies? In my own field, environmental and public health regulation, the most troubling lies are not the outed false statements that make the headlines (like Trump’s claims about the effectiveness of hydroxychloroquine). Rather, they are the many, often obscure “facts” used to inform protective regulations that turn out to be presumptively untrustworthy because flawed underlying government processes allow, and even encourage, the government to propagate misinformation. To make matters worse, in these same settings the public is often precluded from fact-checking the underlying information.

Certainly, this paranoid-sounding concern does not apply to all or even much of the work of government. In many agency rule-makings, for example, rigorous transparency and deliberation requirements leave little room for lies. Agency policies are often intensely scrutinized by a broad array of vigorous stakeholders who can even enlist the courts to review information they consider untrustworthy. In this heated, adversarial environment, false claims and unreliable facts tend to get smoked out. Indeed, in these healthy oversight processes, agencies tend to be extra careful to ensure that the underlying information is reliable from the start.

However, there are other sectors of regulatory decision-making where we see some of these legal structures backfire, allowing and even tacitly encouraging the government to bias or distort critical information—leading to what I call “presumptive government lies.” 

What are some examples? Although we have a lot more to learn about misdesigned government processes, I can offer up two illustrations from my own work.

First, when the stakes are high enough, political officials within the executive branch can and sometimes do manipulate the scientific record supporting an agency decision. There have been a number of disturbing accounts over the last four decades of political officials directing politically motivated, secret revisions to the staff’s technical analyses, censoring dissenting agency experts, and using other means to dishonestly control the scientific record. Political officials have even “stacked” the membership of science advisory panels to ensure that peer review favors their preferred political position.

How could these presumptive lies be occurring? Our institutional design positions political officials at the apex of all agency work; in most regulatory processes, agency experts are subservient to them. This means that political officials, if they find it worth the trouble, can adjust and manipulate the scientific record without restraint (unless they are outed by a whistleblower). These manipulations, moreover, often occur without public oversight because internal government deliberations are shielded by the deliberative process privilege under the Freedom of Information Act (FOIA). And because many agencies conduct their technical analyses in ways that are not insulated from political management, it is impossible for those on the outside to know which supporting analyses have been manipulated and which have been left alone.

Second, a similar type of structural flaw occurs in our design of regulatory settings that are dominated, or sometimes monopolized, by regulated parties. In chemical and pesticide regulation, for example, industry is generally the only participant weighing in on whether a chemical or pesticide is hazardous and whether it should be restricted. Yet under the Administrative Procedure Act (APA) and related legal requirements, agency experts need only be attentive to this active set of vigorous participants. By law, the agency staff is directed to consider all the information submitted to it (which generally comes exclusively from industry) and to respond to all comments (again, all from industry). And finally, since industry is the sole participant, it is the only party that can challenge the agency’s final decision in court (under a court-created “exhaustion of remedies” requirement).

In these backwater regulatory programs, in fact, there is growing evidence that the factual bases for at least some agency decisions are unreliable and biased towards industry. Investigative reports document how career management has manipulated risk assessments and underregulated pesticides; these exposés provide a worrisome peek inside the regulatory black box. Even more problematic, in this legally structured echo chamber where an agency must consider and respond only to industry, the agencies also develop overgenerous protections that shield the underlying industry data from public view through such legal vehicles as broad trade secret protections and vague industry classification systems. So, even if there were broader public involvement in some of these decisions, the public often could not access the key information needed to evaluate the veracity of the agency’s analyses.

What can be done? The good news is that if these presumptive lies are partly a product of wrongheaded institutional design, we simply need to redesign the problematic government processes to discourage and prevent the government from propagating untrustworthy information. The solution to the problem of political control over agency expertise, for example, is in large part to institute firewalls around agency experts’ analyses (something already done in at least one agency process).

With respect to the echo chamber problem in certain regulatory programs, rigorous peer review of expert work (along with other adjustments) should go a long way toward counteracting the dangerous biasing incentives created by the APA. Once we see why government information can’t be trusted in these (and other) misdesigned settings, we can begin to treat the problems. And once we treat the problems, government work will hopefully become both more trustworthy and more trusted.

To hear more from Wendy Wagner, be sure to attend the Government Lies roundtable on Jan. 28, 1-2:30 p.m. EST. RSVP here.