At a hearing last week in Washington, senators and executives from Facebook and Twitter wrestled with the existential question of our time: how do we prevent Silicon Valley’s signature creation – social media – from tearing the country apart? Few answers emerged, but several senators expressed an obvious truth: the public urgently needs to better understand how these platforms are shaping public discourse.

Fortunately, there is one vital step that Facebook and Twitter could take today to improve public understanding: lifting the restrictions that impede digital journalism and research focused on the platforms.

Social media platforms are transforming public discourse in ways we do not understand. Take Facebook, for example. Two billion people around the world and 200 million in the United States use Facebook to get their news, debate policy, join political movements and connect with friends and family. The platform has become the substrate of our social interactions, the means by which human relationships are formed and maintained. Facebook’s platform disseminates our messages, but it also determines whether those messages will be amplified, suppressed or distorted. Facebook is not just a carrier of social media, but an entirely new social medium.

We need to understand how this new social medium works – how Facebook influences the relationships between its users and distorts the flow of information among them.

But figuring out Facebook isn’t easy. Facebook’s alluring user interface obscures an array of ever-changing algorithms that determine which information you see and the order in which you see it. The algorithms are opaque – even to Facebook – because they rely on a form of computation called “machine learning”, in which the algorithms train themselves to achieve goals set by their human programmers. In the case of Facebook, the machines are generally programmed to maximize user engagement – to show you whatever will make you click one more time, stay a little longer and come back for more.
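
To see how engagement-driven ranking can tilt a feed, consider the toy sketch below. It is purely illustrative: the features, weights and posts are invented, and Facebook’s actual ranking systems are proprietary, learned from vast behavioral data and far more complex. The point is only that when a model is tuned to maximize clicks, whatever correlates with clicking – including outrage – tends to rise to the top.

```python
# Illustrative only: a toy "engagement ranking" loop in the spirit described above.
# The features, weights and posts are invented; real ranking systems are learned
# from billions of interactions rather than written by hand.

import math

# Hypothetical learned weights for a few engagement signals.
WEIGHTS = {"outrage_score": 2.0, "novelty": 1.2, "from_close_friend": 0.8}
BIAS = -1.5

def predicted_engagement(post: dict) -> float:
    """Probability-like score that the user will click, comment or share."""
    z = BIAS + sum(WEIGHTS[f] * post.get(f, 0.0) for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing into (0, 1)

def rank_feed(posts: list) -> list:
    """Order the feed so the most 'engaging' items appear first."""
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": "measured-policy-report", "outrage_score": 0.1, "novelty": 0.4, "from_close_friend": 0.0},
    {"id": "salacious-conspiracy-post", "outrage_score": 0.9, "novelty": 0.9, "from_close_friend": 0.0},
    {"id": "friend-vacation-photos", "outrage_score": 0.0, "novelty": 0.3, "from_close_friend": 1.0},
]

for p in rank_feed(posts):
    print(p["id"], round(predicted_engagement(p), 2))
```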

This kind of machine learning can produce unintended effects, with worrying consequences for public discourse. In the past 18 months, many have wondered whether Facebook’s algorithms have deepened political divisions and facilitated the spread of misinformation and propaganda. Do Facebook’s algorithms show some ads to progressives and others to conservatives? Do they place a salacious conspiracy theory about a political candidate above accurate reporting on the incumbent’s latest policy decisions? In striving to maximize user engagement, do Facebook’s algorithms maximize user outrage?

Some of these questions can be studied by interviewing Facebook’s employees or others with firsthand knowledge, by visually inspecting the platform itself, or by using data that Facebook makes available to software developers through its application programming interfaces. (Facebook’s APIs allow the automated collection of a limited set of information from its platform.) In fact, reporting based on these sources is the reason we know about some of the ways in which Facebook’s platform has been used to facilitate privacy violations and other abuses. It’s how we know about the true reach of the Russian disinformation campaign on Facebook; about Facebook’s compilation and use of “shadow” profile data to recommend new friends to its users in invasive ways; and about Cambridge Analytica’s exploitation of Facebook user data.
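
For a sense of what this API channel looks like in practice, the sketch below shows the kind of request a developer might make against Facebook’s Graph API to read a public page’s posts. The page ID, API version and access token are placeholders, and the fields actually returned depend on the app’s permissions and Facebook’s evolving policies.

```python
# A rough sketch of reading a public page's posts through Facebook's Graph API.
# The page ID, API version and access token below are placeholders, and the
# data available depends on the app's permissions and Facebook's policies.

import requests

ACCESS_TOKEN = "YOUR_APP_ACCESS_TOKEN"  # placeholder
PAGE_ID = "SomePublicPage"              # placeholder

resp = requests.get(
    f"https://graph.facebook.com/v3.1/{PAGE_ID}/posts",
    params={
        "fields": "message,created_time,shares",
        "access_token": ACCESS_TOKEN,
    },
    timeout=30,
)
resp.raise_for_status()

for post in resp.json().get("data", []):
    print(post.get("created_time"), (post.get("message") or "")[:80])
```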

But these channels go only so far. Studying machines often requires the assistance of machines to perform larger-scale statistical analysis and testing. And so many of the journalists and researchers working to illuminate Facebook’s influence on society have focused on applying digital tools of study to Facebook’s digital platform.

There are two tools in particular that many journalists and researchers would like to use to study Facebook’s platform: the automated collection of public data and the use of temporary research accounts. The first would enable journalists and researchers to collect statistically significant samples of what Facebook’s users see or post publicly on the platform, allowing them to report on patterns and trends. The second would enable journalists and researchers to test Facebook’s algorithmic responses to different inputs, to explore and better understand the correlation between what Facebook knows about its users and what it shows them.
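
As a concrete illustration of the first tool, the sketch below shows the kind of pattern-finding that automated collection of public data would enable: tallying which external domains a sample of public posts links to. The sample records and field names are invented stand-ins for data a journalist or researcher would actually collect.

```python
# A minimal sketch of the pattern-finding the first tool would enable: given a
# sample of public posts (here a hard-coded stand-in for automatically collected
# data), tally which external domains are shared most often.

from collections import Counter
from urllib.parse import urlparse

sample_posts = [
    {"text": "Read this!", "link": "https://example-news.com/story1"},
    {"text": "Unbelievable...", "link": "https://conspiracy-blog.example/post"},
    {"text": "New policy analysis", "link": "https://example-news.com/story2"},
    {"text": "No link here", "link": None},
]

domains = Counter(
    urlparse(p["link"]).netloc for p in sample_posts if p["link"]
)

for domain, count in domains.most_common():
    print(f"{domain}: {count} shared posts")
```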

Facebook’s terms of service, however, ban digital journalists and researchers from using these basic tools to investigate the ways in which the platform is shaping our society.

While digital journalists and researchers already use these tools in other contexts, the prohibitions in Facebook’s terms of service mean that they can’t use them to study Facebook – at least, not without risking serious sanctions. Journalists and researchers who violate the site’s terms of service risk multiple kinds of legal liability. Most directly, they risk a lawsuit by Facebook for breach of contract. They also risk civil and criminal liability under the Computer Fraud and Abuse Act, a law enacted in 1986 to prohibit hacking, but which has been interpreted by both the justice department and Facebook to prohibit violations of a website’s terms of service.

The mere threat of liability has a significant chilling effect on the willingness of some journalists and researchers to study Facebook’s platform. Some have forgone or curtailed investigations of Facebook’s platform for fear of legal action. And some have been told by Facebook to discontinue digital investigations that Facebook claimed violated its terms of service. (With colleagues at the Knight First Amendment Institute at Columbia University, I represent some of these journalists and researchers.)

If Facebook is committed to becoming more transparent, it should amend its terms of service to permit this important journalism and research.

It can do this by amending its terms of service to create a “safe harbor” for the use of these digital tools in support of journalism and research that would serve the public interest. Any safe harbor for journalistic and research projects should include limitations to protect the privacy of Facebook’s users and the integrity of Facebook’s platform. But, as the Knight Institute explained in a letter sent to Facebook last month, it is possible to prevent Cambridge Analytica–style abuse of Facebook’s platform while also creating more space for the journalism and research that we urgently need.

Today, Facebook and Twitter are arguably the world’s most powerful private institutions, shaping society and public discourse in ways not even the companies’ executives understand. It is time for the companies to stop obstructing digital journalism and research that would help the public understand just how their platforms are affecting us all.