How Social Media Giants Can Solve Their Speech Problems With The First Amendment

Social media platforms’ decisions following the Capitol unrest during a Donald Trump speech have put the spotlight on their content moderation policies like never before, sparking loud calls for reform. Conservatives howled after Twitter, Facebook, and other major platforms deleted posts and deplatformed accounts, including President Trump’s, for questioning the election results or otherwise fueling the passions that led to the violence. Leftists criticized the platforms for responding too slowly.

As usual, accusations of ideological and political bias were at the forefront of the debate. Conservatives decried a double standard in content moderation, pointing to a lack of similar action against Democrats who spent three years questioning the legitimacy of Trump’s election and condoned or incited the violence that accompanied Black Lives Matter protests and left several dozen dead.

The increasingly bitter debate about such decisions is the inevitable consequence of growing content moderation by social media platforms in response to political pressure, largely from the left. As a result, platforms face increasing condemnation by the right, increasing demands from the emboldened left, and nearly universal complaints that their content moderation policies are vague, arbitrary, and ever-changing.

Republicans are increasingly tossing their limited government principles aside to join Democrats in calls for government regulation of social media and for expansive use of antitrust laws.

The platforms are heading down a perilous no-win path. In October 2019, Mark Zuckerberg lamented that “Increasingly, we’re seeing people try to define more speech as dangerous,” while no longer trusting their fellow citizens to “decide what to believe for themselves.” “[T]his is more dangerous for democracy … than almost any speech,” he added.

Social media platforms looking for an off-ramp should adopt a content moderation standard that draws clearer and more stable lines, applies well-accepted principles, addresses concerns about political bias, favors free expression in order to minimize the ground being fought over, and can be administered in a reasonably neutral manner.

The First Amendment’s strict prohibition of viewpoint discrimination provides such a standard. While the First Amendment does not directly govern private companies, its principles have long animated the broader debate about free expression in America. Those principles include a well-developed body of constitutional law on viewpoint discrimination that should be applied to content moderation—at least in the United States—preferably through self-regulation.

Viewpoint discrimination is particularly offensive to the First Amendment and is, therefore, subjected to the greatest degree of constitutional scrutiny. Supreme Court precedent tells us that:

When the government targets not subject matter but particular views taken by speakers on a subject, [it is] an egregious form of content discrimination. The government must abstain from regulating speech when the specific motivating ideology or the opinion or perspective of the speaker is the rationale for the restriction.

By adopting this prohibition and narrowly focusing on the viewpoint bias that is at the heart of the bitter content moderation debate, social media platforms would retain plenty of control over content, while also shifting the target from their backs to the First Amendment.

The major platforms already claim that their content moderation policies are applied without bias. Everyone would benefit if that commitment were grounded in stable constitutional principles, administered in an independent manner.

Was the platforms’ allegedly biased treatment of President Trump and his supporters viewpoint discrimination, or a rational response to different facts? Let First Amendment law decide, just as it would if a free speech case were pressed against the government in court.

The First Amendment’s distinction between subject matter and viewpoint discrimination is important here. Social media platforms would be free to ban all posts discussing election integrity. However, they could not play favorites by taking down only posts critical of the integrity of the 2020 election.

People will scream about false equivalence and whataboutism. But comparing the treatment of arguably similar speech is essential to determining whether viewpoint discrimination is present.

For example, if platforms ban posts that make generalizations about identity groups, comparisons would tell us whether the policy is applied neutrally or is biased by political fashions. As Supreme Court Justice William Brennan, a liberal icon, explained, we “may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.”

That said, platforms could prohibit “fighting words,” including racial and ethnic slurs, as well as obscene language, genuine incitement or threats of violence, and anything else that has been deemed unprotected by the courts.

Although legislation could require or incentivize bias-free content moderation, self-regulation is preferable. It would avoid the First Amendment issues raised by government intervention and keep the slow-moving and largely unaccountable federal bureaucracy out of the picture.

Self-regulation of viewpoint discrimination can take various forms, but should involve an independent final adjudication process. For instance, social media platforms could set up panels of First Amendment attorneys or retired judges to hear appeals of content moderation decisions.

Facebook already has an independent oversight board, which is reviewing the suspension of Trump’s account. But oversight is most effective when there are clear principles to apply. The well-developed body of law on viewpoint discrimination provides exactly that.