It’s hard to remember right now, given its deserved reputation as a cesspool of hate, spam, and conspiracy theories, but people used to feel optimistic about the Internet. Thirty years ago, companies and customers could envision a world in which people communicated across vast distances at low cost, free of the space limitations imposed by newsprint or radio waves.

But if companies had to fear litigation over the massive volume of content that their platforms might host, no one would ever start such a platform; the costs and risks would be far too great. To operate in this world, companies would either have to assess the potential liability for each and every post that users created (horribly labor-intensive) or budget for lawsuits when aggrieved parties inevitably filed suit (incredibly expensive).

A range of early rulings in state and federal courts led Congress to enact Section 230 in 1996. One case out of New York state court, Stratton Oakmont v. Prodigy Services, exposed an online service provider to liability for the content of one of its users. The user had posted allegedly defamatory statements about the investment firm Stratton Oakmont (later made famous in The Wolf of Wall Street) on one of Prodigy’s bulletin boards; the firm sued both the user and Prodigy for defamation. Because Prodigy moderated its boards, the trial court ruled that it could be treated as the publisher of the user’s post, and thus secondarily liable for defamation. That holding imperiled the promise of the nascent Internet: it effectively punished platforms for trying to moderate at all.

Enter Section 230, which states that a platform cannot be treated as the publisher or speaker of content created by users, and that a platform cannot be held liable for content moderation decisions made in “good faith.” Platforms therefore operate with a great deal of insulation from liability both for their users’ content and for the moderation choices they make about that content. If I post a potentially defamatory statement about Stephen Breyer on Facebook, Section 230 means that Facebook likely faces no liability for publishing it. And if Facebook takes down the post, I likely have no legal claim against Facebook either.

If platforms could face liability for their users’ content, they would likely grow much more cautious. Every TikTok video, Instagram photo, or Tweet might need to be reviewed before the platform could make it publicly available. And if Facebook needs to make sure that all my Stephen Breyer posts don’t implicate the company in a defamation lawsuit, it’s probably going to institute many more restrictions on what people can talk about on the site. That might have some marginal benefits—fringe theories like the healing powers of hydroxychloroquine probably wouldn’t become prominent without Section 230—but there would be a lot of downsides as well, particularly for marginalized communities or discussions of sensitive topics. It’s hard to imagine #MeToo taking off in a world without Section 230, given the potential for defamation liability; topics like Black Lives Matter might seem too controversial for some platforms to host, given the false, opportunistic claims leveled against “violent” BLM activists.

While some cases and congressional reforms have limited the scope of Section 230—especially as Internet companies have received less deference in recent years—the core of the law remains intact. Section 230 is not perfect by any means—I’ve advocated for further statutory reforms myself—but careless judicial changes to the regime would transform the Internet, probably not for the better.

It’s against this complex backdrop that the Supreme Court, like a loud guest late to the content moderation party, has decided to enter the discourse for the first time in the quarter century that Section 230 has existed. This week’s arguments in Gonzalez v. Google and Taamneh v. Twitter will provide the first real insight into what the Court thinks about the culture war over Internet platforms and online speech. 

But the sloppy dynamics of these two cases, coupled with rampant misinformation about the meaning and scope of Section 230, mean that some justices’ appetite for rewriting federal law could make a real mess. Unsurprisingly, given the stakes, multiple amici have lined up to warn the Court about the pitfalls of a careless ruling, including nearly fifty amicus briefs in support of Google in Gonzalez, one of which I signed on to.

Both Gonzalez and Taamneh stem from content posted in support of terrorists and terrorist groups. The cases were brought by the families of two individuals killed in separate ISIS attacks; the families allege that the platforms can be held liable under the Anti-Terrorism Act for recommending ISIS videos to users. These events are objectively horrible, but they are also extremely atypical of Section 230 cases. The mantra “bad facts make bad law” may prove especially resonant here, as some justices love to latch on to particularly shocking or sympathetic facts to distract us from their jurisprudential moves.

No circuit split exists, so why did the Court take these cases? In my view, the Court granted review given the increased attention to the statute, particularly from disgruntled politicians and angry online trolls. Section 230 has become a punching bag for conservatives alleging that platforms unfairly treat right-wing incendiaries, and for liberals who perceive the platforms as indifferent to the racist, anti-queer rhetoric that proliferates nonetheless. I believe concern about the speech of ultra-conservative users piqued this ultra-conservative Court’s interest in “fixing” Section 230, akin to some justices’ fears that the real subordinated group in America today is religious people.

Of course, some Supreme Court justices can’t resist upending things. Justice Clarence Thomas has set his sights on Section 230 for years. “Without the benefit of briefing on the merits, we need not decide… the correct interpretation of §230,” he wrote in a statement respecting the Court’s denial of certiorari in a 2020 case. “But in an appropriate case, it behooves us to do so.” These two cases don’t qualify as appropriate given their procedural stickiness and unusual facts, but that likely won’t stop Thomas from having a good time.

It seems that Congress’ failure to find any consensus around “reforming” Section 230 has triggered some justices’ love for mucking around in federal statutes. (This Supreme Court hates to pass up an opportunity to clumsily legislate.) But the Court’s lack of experience in interpreting Section 230 could lead to an even worse outcome: an uninformed interpretation of the law that sloppily destabilizes the status quo, potentially leading to less speech online. Congress has its own dysfunctions, but any reforms that tinker with a complex statute should come from legislators, not the Court. Decades ago, Congress weighed its options, created a general rule, crafted exceptions, and set forth a regulatory structure after extensive negotiations and fact-finding. The Court can’t do any of that; whenever it tries, it fails.

But of course, a majority of the justices disagree; why should we question their ability to super-legislate? Oral argument this week will better indicate whether the justices will heed the flashing warning signs. Sadly for all of us who love to post, I fear the opportunity to punish the supposedly anti-conservative technology companies will seem too appealing to resist.