Free speech in a pandemic: Congress wrestles with drawing a line
Washington
When Twitter recently banned a former New York Times journalist dubbed “the pandemic’s wrongest man,” many of his critics cheered. But others, including some who oppose his views, raised concerns about a world in which private corporations – taking their cues from mainstream media and government officials – can silence dissenters in today’s digital public square.
Over the past year and a half, Alex Berenson grew his Twitter following to some 344,000 by pillorying public health officials’ approach to the pandemic. Like many Twitter pundits, he was irreverent and provocative. But he also frequently accompanied his assertions with screenshots of data, charts, and scientific studies.
His supporters lauded him for highlighting inconvenient truths that few others were raising. Many scientists, journalists, and health officials, however, criticized him for cherry-picking scientific data to advance questionable or even dangerous narratives, especially his claims that COVID-19 vaccines were not nearly as safe or effective as touted.
Why We Wrote This
The pandemic has raised the stakes in a yearslong debate over free speech and social media. Many want Big Tech to do more to protect citizens in the name of public health. Others see a dangerous form of censorship.
Twitter sided with Mr. Berenson’s critics on Aug. 28, permanently suspending his account after he tweeted that COVID-19 vaccines are at best “a therapeutic with a limited window of efficacy and terrible side effect profile.” The company cited repeated violations of its COVID-19 misinformation policies and removed all his tweets from public view. Mr. Berenson is now writing mainly on Substack, where tens of thousands of his Twitter followers have migrated – many offering to contribute to his legal fees if he sues Twitter.
“I am up against basically the entire media, legacy and social, and the federal government,” says Mr. Berenson in an emailed comment, “and the only answer they had to the questions I raised was to cut off my access to a platform designed for free speech?”
Nearly everyone agrees that misinformation on social media is a growing problem. But what, exactly, constitutes misinformation – and who should have the power to make that determination – is hotly debated.
Congress is increasingly wrestling with such questions as social media companies amass more wealth, power, and influence over public thought and discourse, with citizens increasingly getting news from algorithm-tailored feeds rather than traditional media outlets. And the pandemic has raised the stakes: Many now see the need to thwart misinformation as a life-or-death issue.
Facebook: safety concerns trump expression
Facebook’s head of misinformation policy, Justine Isola, said earlier this year that when there’s a risk of imminent harm, that trumps concerns about freedom of expression. Many Democratic members of Congress agree.
“I’m on the side of trying to save people’s lives and make sure that companies are not profiting off of spreading dangerous misinformation,” says Sen. Ben Ray Luján of New Mexico, who has co-sponsored a bill with Minnesota Sen. Amy Klobuchar that would increase social media platforms’ liability for health misinformation spread during a pandemic if their algorithms promote it. Senator Klobuchar says platforms should deploy their employees to determine what’s true and what’s not, just as other media organizations do, even if it’s a complex, time-intensive task. “I just think that they should be able to use part of their humongous profits to make sure we’re not getting misinformation,” she says.
But others have deep concerns about Congress requiring a handful of powerful private corporations to effectively censor viewpoints that contradict public health officials. The platforms’ misinformation policies already rely on statements by those officials to determine what is credible.
“The United States government should not be leveraging its power and authority to try to make these tech companies arms of the state,” says Sen. Josh Hawley, a Missouri Republican and author of “The Tyranny of Big Tech.”
Critics say there is a clear pattern of bias against conservative viewpoints on social media platforms. On July 7, former President Donald Trump, who was banned from major platforms for violating their policies, filed class-action lawsuits against Facebook, Twitter, and YouTube, arguing they violated the First Amendment.
The First Amendment provides that “Congress shall make no law ... abridging the freedom of speech.” Many legal scholars argue that since social media platforms are privately owned, they are not bound to allow freedom of speech. But there is ongoing debate about that.
Daphne Keller, a former associate general counsel at Google who now directs Stanford University’s Program on Platform Regulation, argues that most of the misleading information on social media that is causing serious harm is protected by the First Amendment, so the government couldn’t require platforms to take it down.
“What many people think is the moral, socially responsible, right thing for platforms to do is something Congress cannot mandate,” she says. “The only way to get it done is for platforms to do it voluntarily.”
To be sure, contrarians are not the only ones who have been wrong about COVID-19. Scientists, politicians, and journalists have also made assertions that turned out to be incorrect – and while they cite evolving science, critics see politicization at work, too, and say that’s the danger of platforms relying on official consensus for determining truth.
They note that some claims initially dismissed as “misinformation” were later deemed worthy of investigation, most notably the hypothesis that the pandemic may have started with a lab leak in Wuhan, China. When President Joe Biden ordered the intelligence community in late May to conduct a 90-day review of all available evidence on the lab-leak theory, Facebook changed its misinformation policy that same day. By then, however, investigators had lost more than a year in which to press China for answers.
Such premature labeling and dismissal of “misinformation” could interfere with the process of scientific inquiry – and that, too, could have deadly consequences, some argue.
“There’s a danger of groupthink, of mobbing people who dissent, and the last place you want that is in science,” says Philip Hamburger, a professor at Columbia Law School and president of the New Civil Liberties Alliance.
What’s getting banned?
The scope of the challenge adds urgency. Facebook and YouTube have more than 2 billion users each, and far more content than any organization could review in real time; on YouTube alone, 500 hours of video are uploaded per minute, according to the most recent data available. If misleading information didn’t spread so quickly, it wouldn’t be nearly as much of a concern. And if a few tech giants didn’t control today’s digital public square, bans wouldn’t be so consequential.
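Those figures make real-time human review arithmetically implausible. A rough back-of-the-envelope calculation shows the scale; the eight-hour reviewing shift assumed here is an illustration, not a platform staffing figure:

```python
# Back-of-the-envelope: how many full-time reviewers would be needed
# to watch every YouTube upload in real time? Uses the article's
# figure of 500 hours uploaded per minute; the 8-hour shift is an
# illustrative assumption, not a platform number.

UPLOAD_HOURS_PER_MINUTE = 500          # from the article
MINUTES_PER_DAY = 24 * 60
REVIEWER_HOURS_PER_SHIFT = 8           # assumed: one reviewer, nonstop

upload_hours_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
reviewers_needed = upload_hours_per_day / REVIEWER_HOURS_PER_SHIFT

print(f"{upload_hours_per_day:,} hours of video uploaded per day")  # 720,000
print(f"~{reviewers_needed:,.0f} reviewers watching full shifts")   # ~90,000
```

And that covers a single platform, with no allowance for appeals or re-review.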
“They’ve now become gatekeepers to the public square,” says GOP Sen. Marco Rubio of Florida. “You literally cannot engage in political discourse in America if you don’t have access to those sites.”
So what type of content do social media platforms ban? It ranges from “widely debunked” claims about the adverse effects of vaccines (Twitter), to content encouraging prayer as a substitute for medical treatment (YouTube), to claims that COVID-19 deaths are overstated (Facebook).
This summer, Twitter said it had suspended 1,496 accounts and removed more than 43,000 pieces of content since introducing its COVID-19 misinformation policies.
Since February 2020, YouTube, which is owned by Google, has removed more than 1 million videos that go against its standards.
And Facebook has taken down more than 3,000 accounts, pages, and groups, and more than 20 million pieces of content that violated the company’s COVID-19 and vaccine misinformation policies, according to an Aug. 18 statement by Monika Bickert, vice president of content policy.
Some of Facebook’s takedowns involved 12 individuals dubbed the Disinformation Dozen by the Center for Countering Digital Hate, whose recent report estimated that these influencers accounted for up to 73% of Facebook’s anti-vaccine content. Ms. Bickert disputed that assessment, arguing it was based on a limited data set.
Facebook has sought to automate content moderation. But it also works with more than 80 fact-checking organizations certified by the International Fact-Checking Network. In addition, White House press secretary Jen Psaki told reporters in July that the Biden administration was “flagging problematic posts” for Facebook.
Ms. Psaki’s admission prompted Senator Rubio to propose a bill that would require platforms to disclose within seven days any request or recommendation by a government entity to moderate user content, or face a fine of $50,000 per day of noncompliance.
More than 20 bills this year alone
Senator Rubio’s bill is just one of more than 20 introduced in Congress this year that target a key legal underpinning of social media platforms’ success. Known as Section 230, the provision protects social media platforms – and other “interactive computer service” companies – from being held legally responsible for what their users post, with a few exceptions. It also gives them latitude to moderate, restricting access to material they deem “obscene ... excessively violent ... or otherwise objectionable, whether or not such material is constitutionally protected.”
Democratic Sen. Ron Wyden of Oregon, a co-author of Section 230, defends it as crucial to enabling social media companies to address misinformation about COVID-19 vaccines.
“Why would you take away the one tool in law that allows an important participant – the platform – to take that garbage down?” he asks.
But many note that the digital landscape has changed dramatically since 1996 when Congress passed the provision, which cited the “true diversity of political discourse” offered by the internet and a desire to “preserve the vibrant and competitive free market” online. Both Mr. Biden and Mr. Trump called for revoking Section 230 in their presidential campaigns, and an increasing number of lawmakers see the provision as needing to be amended, overhauled, or scrapped altogether – though for widely varying reasons.
Democrats want tech companies to take more action in cracking down on misinformation, as well as other content categories, such as hate speech. Republicans want to dial back what they see as censoring conservative viewpoints in the name of thwarting misinformation.
Other solutions besides government regulation
While many in Congress are agitating for change, it’s unclear whether they can muster the unity needed to pass new legislation. And some say government regulation isn’t the answer.
“I think the problem with both Klobuchar and Hawley is they’re looking to government solutions for something that is a social problem,” says Neil Chilson, senior research fellow for technology and innovation at the Charles Koch Institute. “I don’t think we want government dictating to platforms or any other media channel what content they can carry, or how they should make the rules about what is truth on their platforms.”
Part of the challenge is that many social media users are unaware of how algorithms work behind the scenes to influence them. Platforms’ business models are built on maximizing engagement – the more time users spend on the sites, the more the platforms profit by selling users’ attention to advertisers. And misinformation generates more engagement than accurate news: The German Marshall Fund found that user interactions with misinformation on social media spiked during the pandemic, far exceeding average engagement with more than 500,000 news sites. Such misinformation often exploits emotions, leading some to see a systemic problem with social media platforms.
“Content that is engaging is very often content that is enraging,” says Laura Edelson, a software engineer and researcher at New York University’s Cybersecurity for Democracy. “What that means is you do not need to build a system to actively promote misinformation; you can build a system that optimizes for engagement alone, and that will end up promoting misinformation.”
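Ms. Edelson’s point can be illustrated with a minimal sketch of an engagement-ranked feed. Nothing in this toy ranker looks at accuracy; the posts and engagement scores below are hypothetical, chosen only to show how whatever draws the most reactions – true or not – rises to the top:

```python
# Minimal sketch of an engagement-ranked feed, per Edelson's point:
# nothing here checks accuracy, yet ranking on engagement alone can
# surface misinformation if it reliably draws more reactions.
# All posts and scores are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    accurate: bool               # known to us, invisible to the ranker
    predicted_engagement: float  # clicks/shares a model expects

def rank_feed(posts: list[Post]) -> list[Post]:
    # The ranker optimizes a single objective: expected engagement.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Measured, sourced vaccine explainer", accurate=True, predicted_engagement=0.8),
    Post("Local clinic extends weekend hours", accurate=True, predicted_engagement=0.3),
    Post("OUTRAGEOUS 'hidden cure' they won't show you", accurate=False, predicted_engagement=2.4),
])

for post in feed:
    print(f"{post.predicted_engagement:4.1f}  accurate={post.accurate}  {post.text}")
# The false, enraging post ranks first without any intent to promote it.
```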
Just how that works, and the role algorithms play, is something she had been trying to understand – until Facebook suspended her account last month for what it called unauthorized collection of user data, a charge she disputes.