The ongoing 'Twitter Files' revelations show that Republicans' first order of business in the coming 118th Congress must be to build a legislative firewall between private social media companies and the White House, along with the federal law enforcement agencies beneath it, such as the Department of Justice and its offshoot, the FBI. Last summer's revelations that the government pressured social media executives into blocking users who did not toe the official line on COVID were as clear an example of unconstitutional "state action" as any. Courts have long ruled that the government cannot press private entities into serving as an "agent of government" to censor what the government itself cannot.
Facebook's Mark Zuckerberg disclosed to Joe Rogan in August 2022 that, in the run-up to the 2020 election, the FBI had warned Facebook to be on alert for an imminent dump of Russian propaganda, a warning that led the company to throttle reporting on what appeared to be Hunter Biden's laptop. As he recounted:
"The background here is that the FBI came to us - some folks on our team - and was like 'hey, just so you know, you should be on high alert. We thought there was a lot of Russian propaganda in the 2016 election, we have it on notice that basically there's about to be some kind of dump that's similar to that.'"
Zuckerberg added that the FBI did not come to Facebook about the Biden laptop story specifically; rather, his team concluded that the story "fit that pattern."
Then came the recent revelation that Twitter's now-former head of "trust and safety," Yoel Roth, received the exact same FBI treatment regarding the same story. As he stated in a declaration responding to a complaint filed against Twitter by the Tea Party Patriots Foundation:
"I was told in these meetings that the intelligence community expected that individuals associated with political campaigns would be subject to hacking attacks and that material obtained through those hacking attacks would likely be disseminated over social media platforms, including Twitter."
This included, Roth stated, "rumors that a hack-and-leak operation would involve Hunter Biden."
Yet on COVID-related state-action claims, at least, courts have greatly disappointed when it comes to reining in the state's attempts to control online speech. Despite numerous White House and federal agency statements aimed at social media platforms on the issue, including outright threats, three cases have seen such claims dismissed (including one brought by journalist and author Alex Berenson), and only one, the suit brought by Missouri and Louisiana, has reached discovery requiring the Biden administration to turn over records of its communications with the platforms. Clearly, free public dialogue is too important to be left in the hands of the judicial system.
The dissemination of news and the facilitation of public discourse are central to any democracy that allows genuine participation by its citizens. Chinese protesters holding up blank sheets of paper to express their anger at strict speech controls offer a poignant illustration.
An older illustration, closer to home, is America's second Congress deciding in 1792 to subsidize the postage of newspapers and other information sources, a practice that persists to some degree today, seen, for instance, in the mailers members of Congress send to their constituents. Although the term was not around at the time, Congress clearly recognized that open public dialogue was a "public good": something that, like clean air, benefits everyone and from which no one can be excluded.
Common Carriers
Providers of public goods have generally been regulated under common carriage law. The Communications Act of 1934, for instance, allowed AT&T to enjoy monopoly power over the public good it provided: interconnecting the American people through a unified, national standard for telephone communication.
In exchange for that monopoly power, and to ensure that public goods truly remain beneficial to the public, special duties or restraints are generally imposed on such companies. Before a national telephone network existed, for instance, Americans needed separate phones to reach friends and family who used different carriers (AT&T, easily the dominant player, had the biggest network). In return for the DOJ dropping its antitrust claims against AT&T, the company agreed to let smaller carriers connect to its network, creating a universal line of communication for citizens across the country; again, all for the public good.
As Michigan State University law professor and former Commerce Department telecom official Adam Candeub has written, this is the kind of "carrot" and "stick" bargain at the heart of common carriage law. A powerful monopoly status can be conferred where a public good is beneficial enough, but to ensure that such status is not abused (and the public good stays a public good), some sort of duty or restraint must be established.
Unfortunately, Candeub notes, for today's dominant communications companies, such as social media platforms and search engines, "it is all carrot and no stick." Companies such as Facebook and Google are allowed complete monopoly power to provide what are undoubted (though diminishing) public goods, yet with zero restraints or obligations on their part.
Section 230
Back in 1996, Congress crafted an amendment to the 1934 Communications Act to deal with the then-burgeoning industry of "interactive computer services," which at the time meant mostly message boards.
With companies such as CompuServe and AOL in mind, Congress sought to hand out special liability relief with the aim of promoting two public goods: an internet characterized by a wide dissemination and diversity of ideas, and an incentive for platforms to create family-friendly environments.
The amendment they came up with, Section 230 of the 1996 Communications Decency Act, has two key subsections. The first, Section 230(c)(1), relieves internet platforms of liability for statements made by third parties. This gives companies such as Facebook protection from liability for a defamatory post or other unlawful content created by one of their users. While relevant to what conservatives criticize most, the politicized removal of messages and whole accounts, it is the next subsection that matters most here.
Section 230(c)(2) immunizes a platform's own efforts to discriminate against certain content. In its words, it allows platforms to:
"restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
In doing so, however, it does mandate one qualification: In order to "restrict access" or remove material deemed "obscene", and so on, platforms have to act in "good faith." In other words, Congress here created a "Good Samaritan" clause intended to incentivize positive, pro-social behavior.
Unfortunately, in the case law that has since built up around Section 230, two giant problems have emerged, both involving a misreading of a landmark court decision, Zeran v. AOL.
The first problem is that the protection Congress intended, shielding social media companies from liability for defamatory messages posted by their users, has been greatly expanded and now encompasses virtually any and all "content moderation" decisions, such as removing the accounts of epidemiologists with whom Dr. Anthony Fauci, the FBI, the CIA, and possibly other federal agencies might disagree.
The second problem is that the "good faith" condition Congress imposed on these companies to guard against arbitrary or biased content-removal decisions has been effectively erased; courts no longer apply it to social media companies at all.
Both problems can be traced to a misunderstanding and incomplete reading of Zeran v. AOL.
Zeran v. AOL
The landmark 1997 case of Zeran v. AOL involved AOL's refusal to remove defamatory posts on one of its bulletin boards falsely claiming that the plaintiff was selling T-shirts making fun of the 1995 Oklahoma City bombing. The posts were clearly defamatory, since they were simply made up, but in finding for AOL the court appeared to rule more broadly than necessary.
In its widely quoted ruling, the court stated in part that all lawsuits are "barred" where they
"seek to hold a service provider liable for its exercise of a publisher's traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content..." (author's emphasis).
But this language would seem to apply to more than third-party defamatory content, covering a platform's full panoply of editorial functions, such as removing user accounts or blocking content its own moderators simply do not like.
The problem is that the court did not actually go that far.
According to Candeub, what later judges have consistently left out in applying Zeran is the line preceding the above quotation, in which the court expressly limits the barring of lawsuits to editorial decisions regarding third-party content. As the court wrote:
"section 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service." (Author's emphasis).
Contrary to what courts have said, writes Candeub, Zeran does not actually shield a platform from liability for its own speech and behavior. And yet, through misquotation, as shown below, numerous courts have acted as if it does.
An example of Zeran's misapplication took place in a case involving Yelp, in which the company was sued for allegedly manipulating its review pages, removing some listings, changing their order of appearance, and so on. The court, incorrectly, found that Yelp was immunized under subsection 230(c)(1), quoting Zeran's language about a publisher's "traditional editorial functions" while leaving out the preceding limitation to third-party content. But Yelp's impugned conduct did not relate to any content or statements in a third party's post; it related directly and completely to the company's own internal, proactive changes.
Again, companies such as Yelp can, under Section 230, make their own editorial changes and removal decisions, but only under subsection (c)(2), which contains the "good faith" standard mentioned above. The Yelp court, however, never addressed this and never considered the motives behind the alterations. Instead, it conflated subsection (c)(2)'s removal decisions with subsection (c)(1)'s blanket liability relief for third-party statements. In other words, both here and in cases since, subsection (c)(2) has been rendered pointless. As a result, writes Candeub, "social media platforms are now treated like they're above the law."
Thankfully, this can be easily changed, even at the regulatory level.
In 2015, the Federal Communications Commission (FCC) brought major internet service providers within its definition of a common carrier, a definition that entails a non-discrimination requirement designed to keep companies from blocking service to non-partner firms, say, Comcast cutting off a video-streaming competitor such as Netflix. On its own or in response to judicial action, the FCC could simply bring social media platforms and search engines within this same definition. As Candeub writes:
"the question of whether one is discriminatorily terminated from a [social media] network is not a deep technical issue... Rather, it is akin to the discrimination question in civil rights and employment law that courts routinely answer."
With some adjustment, this could strip these companies of their claimed right to exclude and discriminate against users and material (outside of genuinely "obscene" posts, such as those containing pornography) and, more broadly, to distort the public dialogue. For instance, genuine questions raised by COVID-management critics such as Berenson or signatories of the Great Barrington Declaration could be made available to the public instead of being suppressed by authorities such as the National Institutes of Health.
Meanwhile, enforcement could come through the FCC, in response to user complaints, or through the courts via civil litigation.
As for demonstrating racial, religious, or political-opinion discrimination (from either the right or the left) on social media, a complainant could do so by showing, for instance, that a platform allows one group to say or do things that an analogous group cannot.
More easily, a complainant might point to explicitly discriminatory policies on the part of such companies. In 2020, for instance, Facebook began policing anti-black speech more heavily than anti-white speech.
Non-discrimination policies need not create a "wild west" scenario. To a large extent, people really do not need moderators to curate what they see on social media. They are free to do that themselves.
Companies such as Facebook should welcome such a requirement: it would make the kind of "state action" censorship pushed by the White House or any federal agency something they would be legally prohibited from entertaining.
The same applies to Elon Musk, who has so far kowtowed to groups such as the Anti-Defamation League and to the federal government, possibly out of concern that they and others would pressure advertisers to cut ties with Twitter. If Twitter were actually prohibited from applying disparate, censorious treatment to, say, "MAGA Republicans," Musk could simply tell the ADL and advertisers that being covered by the FCC's common carriage definition means he cannot pick and choose among political viewpoints. "Look," he might respond, "I am legally required to keep the public dialogue on my platform free and open."
Keeping people exposed to opposing views would seem essential to curbing the three interrelated evils infecting our politics today: echo chambers, polarization, and the death of debate. If actually given the opportunity to hear and understand the other side on an unfiltered, non-skewed basis, many, if not most, people will at least glance at opposing views, or will at least have to make the conscious, and childish, decision to block the opinions with which they disagree.
Removing the distortive "curators," editors, "fact-checkers," and middlemen from the information process, and reaching people who have previously been sheltered from diverse opinions, will likely not tear people apart. It might even help to bridge misunderstandings and fill in a few gaps. That, perhaps, is the ultimate public good.
John Kline, an attorney, also contributes to Heterodox Academy Blog, American Spectator, Chronicles, American Greatness, and LewRockwell.com, among others. (Follow John Kline on Twitter)