Two Supreme Court Cases That Could Break the Internet

Isaac Chotiner / The New Yorker
"We should be prepared for the Court to change a lot about how the Internet functions," Daphne Keller says. "It's very hard to predict the nature of the change." (photo: Yui Mok/Getty)

A cornerstone of life online has been that platforms are not responsible for content posted by users. What happens if that immunity goes away?

In February, the Supreme Court will hear two cases—Twitter v. Taamneh and Gonzalez v. Google—that could alter how the Internet is regulated, with potentially vast consequences. Both cases concern Section 230 of the 1996 Communications Decency Act, which grants legal immunity to Internet platforms for content posted by users. The plaintiffs in each case argue that platforms have violated federal antiterrorism statutes by allowing content to remain online. (There is a carve-out in Section 230 for content that breaks federal law.) Meanwhile, the Justices are deciding whether to hear two more cases—concerning laws in Texas and in Florida—about whether Internet providers can censor political content that they deem offensive or dangerous. The laws emerged from claims that providers were suppressing conservative voices.

To talk about how these cases could change the Internet, I recently spoke by phone with Daphne Keller, who teaches at Stanford Law School and directs the program on platform regulation at Stanford’s Cyber Policy Center. (Until 2015, she worked as an associate general counsel at Google.) During our conversation, which has been edited for length and clarity, we discussed what Section 230 actually does, different approaches the Court may take in interpreting the law, and why every form of regulation by platforms comes with unintended consequences.

How much should people be prepared for the Supreme Court to substantively change the way the Internet functions?

We should be prepared for the Court to change a lot about how the Internet functions, but I think they could go in so many different directions that it’s very hard to predict the nature of the change, or what anybody should do in anticipation of it.

Until now, Internet platforms could allow users to share speech pretty freely, for better or for worse, and they had immunity from liability for a lot of things that their users said. This is the law colloquially known as Section 230, which is probably the most misunderstood, misreported, and hated law on the Internet. It provides immunity from some kinds of claims for platform liability based on user speech.

These two cases, Taamneh and Gonzalez, could both change that immunity in a number of ways. If you just look at Gonzalez, which is the case that’s squarely about Section 230, the plaintiff is asking for the Court to say that there’s no immunity once a platform has made recommendations and done personalized targeting of content. If the Court felt constrained only to answer the question that was asked, we could be looking at a world where suddenly platforms do face liability for everything that’s in a ranked news feed, for example, on Facebook or Twitter, or for everything that’s recommended on YouTube, which is what the Gonzalez case is about.

If they lost the immunity that they have for those features, we would suddenly find that the most used parts of Internet platforms, the places where people actually go and see other users’ speech, are very locked down, or very constrained to only the very safest content. Maybe we would not get things like a #MeToo movement. Maybe we would not get police-shooting videos being really visible and spreading like wildfire, because people are sharing them and they’re appearing in ranked news feeds and as recommendations. We could see a very big change in the kinds of online speech that are available on what is basically the front page of the Internet.

The upside is that there is really terrible, awful, dangerous speech at issue in these cases. The cases are about plaintiffs who had family members killed in ISIS attacks. They are seeking to get that kind of content to disappear from these feeds and recommendations. But a whole lot of other content would also disappear in ways that affect speech rights and would have different impacts on marginalized groups.

So the plaintiffs’ arguments come down to this idea that Internet platforms or social-media companies are not just passively letting people post things. They are packaging them and using algorithms and putting them forward in specific ways. And so they can’t just wash their hands and say they have no responsibility here. Is that accurate?

Yeah, I mean, their argument has changed dramatically even from one brief to the next. It’s a little bit hard to pin it down, but it’s something close to what you just said. Both sets of plaintiffs lost family members in ISIS attacks. Gonzalez goes up to the Supreme Court as a question about immunity under Section 230. And the other one, Taamneh, goes up to the Supreme Court as a question along the lines of: If there were not immunity, would the platforms be liable under the underlying law, which is the Antiterrorism Act?

It sounds like you really have some concerns about these companies being liable for anything posted on their sites.

Absolutely. And also about them having liability for anything that is a ranked and amplified or algorithmically shaped part of the platform, because that’s basically everything.

The consequences seem potentially harmful, but, as a theoretical idea, it doesn’t seem crazy to me that these companies should be responsible for what is on their platforms. Do you feel that way, or do you feel that actually it’s too simplistic to say these companies are responsible?

I think it is reasonable to put legal responsibility on companies if it’s something they can do a good job of responding to. If we think that legal responsibility can cause them to accurately identify illegal content and take it down, that’s the moment when putting that responsibility on them makes sense. And there are some situations under U.S. law where we do put that responsibility on platforms, and I think rightly so. For example, for child-sexual-abuse materials, there’s no immunity under federal law or under Section 230 from federal criminal claims. The idea is that this content is so incredibly harmful that we want to put responsibility on platforms. And it’s extremely identifiable. We’re not worried that they are going to accidentally take down a whole bunch of other important speech. Similarly, we as a country choose to prioritize copyright as a harm that the law responds to, but the law puts a bunch of processes in place to try to keep platforms from just willy-nilly taking down anything that is risky, or where someone makes an accusation.

So there are situations where we put the liability on platforms, but there’s no good reason to think that they would do a good job of identifying and removing terrorist content in a situation where the immunity just goes away. I think we would have every reason to expect, in that situation, that a bunch of lawful speech about things like U.S. military intervention in the Middle East, or Syrian immigration policy, would disappear, because platforms would worry that it might create liability. And the speech that disappears would disproportionately come from people who are speaking Arabic or talking about Islam. There’s this very foreseeable set of problems from putting this particular set of legal responsibilities onto platforms, given the capacities that they have right now. Maybe there’s some future world where there’s better technology or better involvement of courts in deciding what comes down, or something such that the worry about the unintended consequences reduces, and then we do want to put the obligations on platforms. But we’re not there now.

How has Europe dealt with these issues? It seems like they are putting pressure on tech companies to be transparent.

Europe recently had the legal situation these plaintiffs are asking for. Europe had one big piece of legislation that governed platform liability, which was enacted in 2000. It’s called the E-Commerce Directive. And it had this very blunt idea that if platforms “know” about illegal content, then they have to take it down in order to preserve immunity. And what they discovered, unsurprisingly, is that the law led to a lot of bad-faith accusations by people trying to silence their competitors or people they disagree with online. It led to platforms being willing to take down way too much stuff to avoid risk and inconvenience. And so the European lawmakers overhauled that in a law called the Digital Services Act, to get rid of, or at least try to get rid of, the risks of a system that tells platforms they can make themselves safe by silencing their users.

What they have now looks more like a system where platforms do take things down, but the users get notified about it and then they have an opportunity to challenge the takedown if they think that it was wrong, and there’s some transparency. There are just all of these procedural protections around it. That still leaves Europe with way more regulation in place than we have, but it’s actually regulation that was created to get away from the kind of knowledge-based liability that a lot of the amicus briefs are telling the Court they should adopt in these cases.

To your point about transparency, this new law has tremendous new transparency obligations. In addition to notifying users when their content has been taken down, which is an important piece of transparency, it also has regular, aggregate reporting on what content was taken down and why. And it has provisions for researchers to get access to internal company data so they can understand what’s going on and figure out if the platforms are, for example, acting in a biased manner in deciding what they take down. It just has a lot in there to help the public and lawmakers understand much better what platforms are doing.

And does transparency seem like a good middle ground here, to you? Are you optimistic about what’s going on in Europe, or does it still seem like just a small step?

I think it is a really important interim measure. Lawmakers can’t possibly make smart laws unless they understand what’s going on. I mean, of course, the European civil servants who drafted the D.S.A. have been looking closely at these questions since 2011. They’ve been looking closely at the real-world mechanics of content moderation and what platforms do when they face liability for user content, and they drafted something very careful, accordingly. If our lawmakers had the amount of information that their European analogues have gathered over the years, they could come up with smarter laws than the ones that we’ve seen proposed here. But there’s still a lot more that people don’t understand, and that, without transparency mandates, we won’t come to understand. We will get better laws in the long run with transparency, but it’s not a cure-all. It’s not that once there’s transparency, we won’t need any laws.

To go back to child sex abuse and copyright, is there a sense that attempts to regulate these categories have been successful, and have had the intended effect?

You would get very different answers from different people. In the text of Section 230, as it was enacted, it spells out that no federal crimes are immunized. And so it’s not just child-sex-abuse content. It’s terrorism, too. If the Justice Department decided that platforms were violating criminal laws and materially supporting terrorism, they could prosecute them just fine. There’s no immunity from that, or from drug charges, or from any other federal criminal laws. That’s where these claims come in—because they’re federal crimes, they’re not immunized.

But there was a new carve-out enacted in 2018 for prostitution and trafficking-related claims. I think I should come clean that I am one of the outside counsel in a First Amendment case challenging that one, because I think they just did a really bad job. Nobody is happy with what that law actually accomplished. As for copyright, lawyers all hate it. And I certainly have my quibbles with it, but it is much more successful, I think, than what we would get if courts just took away Section 230 immunity and unleashed the forces of tort litigation to try to shape platform obligations.

We’re much better off with a law that tries to take into account the rights and interests of Internet users who are going to be affected by it. If you think about what I described, there are these provisions that are intended to protect users who will never be in court to defend their interests. If somebody is injured by online content, be it a copyright owner or the plaintiffs in this case, they go to court and they sue a platform. The court is hearing from the injured person and it’s hearing from the platform, but it’s not hearing from all of the other Internet users whose rights and interests will be affected by the outcome. So there just isn’t a mechanism for a court to put in place the sorts of procedural protections that you see in the new European law.

Let’s turn to Texas and Florida. Assuming the Court takes up these cases in some form, how potentially radical could the outcomes be?

Let me just start with the big picture by saying that these terrorism cases are about wanting platforms to take down more content, to step in more to prevent bad things from happening. And I think everybody sympathizes with that goal, whatever we think is the right set of rules to achieve it without a bunch of collateral damage. The Texas and Florida cases are about the opposite. They’re about wanting platforms to step in less and to tolerate more offensive or hateful or harmful speech than they do now. And that’s something that is seen as a politically conservative position.

I think most people can sympathize with the idea that we don’t want a very small handful of giant corporations to have the kind of gatekeeper power over public discourse that they have now. The starting-point goal of the legislators in Texas and Florida is something I’m sympathetic with, even though the laws that they enacted are crazy and also very sloppy. These are both very long pieces of legislation with all kinds of details that nobody thought through, because I think they were just having fun and being performative about it. I don’t think they really envisioned a world where platforms would try to comply with these laws.

You said you were sympathetic with the goals, but it seems that the goals might have been just to stop companies from restricting far-right content.

Yes, I do think that’s the goal. But the first time that I saw litigation on claims like this, it came from more traditionally left sources. In Brazil, Facebook took down an image of a native Amazonian woman who was topless. And [the Ministry of Culture said] this was a violation of cultural diversity.

That’s hilarious.

The other one’s even crazier. I don’t know if you know the French “L’Origine du Monde,” which is a Gustave Courbet painting? It hangs in the Musée d’Orsay. Its credentials are impeccable, but it’s also a very close-up depiction of female genitalia. Facebook took it down. And the Frenchman who had posted it was, like, “But this is art. I have a right to post art.”

Both of these state laws require platforms to carry speech that the platforms don’t want to carry. And both of them imposed transparency obligations somewhat similar to the ones in the Digital Services Act in the E.U. The platforms challenged both of those laws in both aspects, the transparency and the so-called must-carry provisions, on a couple of different legal grounds. But the ground that the Supreme Court would look at, if it took the cases, is whether the platforms’ own First Amendment right to set editorial policy has been violated.

The Florida one says that, if an online speaker counts as a journalistic enterprise, which is defined very broadly and strangely, or if they’re a political candidate or they’re talking about a political candidate, then the platform can’t take down anything they say, with almost no exceptions. There’s a weird obscenity exception. Basically, that means if you’re talking about a political candidate or you are a political candidate, you can share electoral disinformation or COVID disinformation or racist biological theories. All kinds of things that I think most people would consider pretty horrific. Platforms would have to leave it up in Florida.

The Texas law is also motivated by a concern about conservative voices being silenced, but it comes at it a little bit differently. It says that platforms can engage in content moderation under their own discretionary terms, but they have to do so in a way that is viewpoint-neutral. And there’s a lot of disagreement and uncertainty about what it means to be viewpoint-neutral. I think, and a lot of people think, that it means that if you take down posts celebrating the Holocaust, you also have to take down posts condemning it. If you leave up posts that are anti-gun violence, you also have to leave up posts that are pro-gun violence.

Sorry, these examples are very dark. But that is what we’re talking about here: horrific things that people say on the Internet, that, effectively, platforms such as Facebook or YouTube would have to leave up under this Texas law, unless they want to take down a whole lot of user speech. They could not let anybody ever talk about racism at all, because they have to be viewpoint-neutral on the topic, or not let people talk about abortion at all, because they have to be viewpoint-neutral on the topic, etc.

Ruling in favor of these laws would significantly affect how these corporations operate, no?

Well, there are some clever lawyers on the Supreme Court. I think they can “logic” their way through to an outcome that says, Oh, it’s only for big tech platforms that this is the rule. This is only for companies that have such a crucial role in public discourse and that hold more than a certain market share. Both the Texas and the Florida laws have size-based limitations. I think it’s probably about twenty platforms by size that are subject to these laws. Between being good lawyers and potentially just being unprincipled about it, it would not be hard for the Court to arrive at a ruling that says [that] just platforms have to do this—it does not apply to employers or to schools, or whoever else they have in mind.

But you’re right that it would still be a very big upheaval, because the case law that we have so far in this area generally supports the platform. We have case law saying that cable companies, for example, have a First Amendment right to decline to carry certain speech. We also have laws saying that, in some narrow cases, Congress can override that and force them to carry local broadcast stations, for example. So it’s not perfectly crisp that Congress can never do this, but it is very rare that the Court has allowed Congress to do this. And, in those cable cases, we saw Clarence Thomas speaking as a traditional, business-friendly conservative, taking the position that of course these are private companies, and the government can’t possibly come along and tell them what speech they have to carry. That would be outrageous. It’s a violation, maybe, of their property rights as well as their speech rights. That’s the traditionally conservative perspective—defending the rights of private companies to pick and choose what speech they want to carry.

But we’ve entered this strange, new, through-the-looking-glass world where many conservatives, particularly politicians, now have precisely the opposite viewpoint about platforms and say “Well, for platforms, of course, lawmakers can come in, the government can come in and tell them what they have to do with their property and tell them what speech they have to carry.” Clarence Thomas now has effectively changed sides on this question and has written that this should be possible.

Kavanaugh, however, in a big opinion he wrote on the D.C. Circuit, staked out a position that is much more traditionally pro-private-property rights, pro-business. In a case about I.S.P.s and net neutrality, he specifically said that platforms such as Facebook and YouTube have a First Amendment right to exclude the speech that they don’t want to carry. The exception that he indicated might exist is if the government could make out a competition claim or say that there’s market failure here, that there’s so much economic concentration in the hands of a small number of platforms that that becomes a trigger for the state intervention to be permissible.

Right, and, after all the things we’ve talked about, it does seem like there are better and worse ways to regulate this stuff, but, fundamentally, if we have companies that are this big and play this much of a role in the economy and our discourse, there just aren’t great solutions in terms of regulation.

The problem is that we have tremendous concentrations of power over discourse in private hands. One response might be to try to break up that concentration.

But look at what the lawmakers in Texas and Florida did. They looked at that concentration of power, and they were, like, “Oh, that’s terrible. Facebook and Google: you should not have that concentration of power. But, instead of reducing it, and devolving power to more people, we’re just going to take it over and tell you how to use it.” Instead of treating the concentration of power as the problem to be solved, they are accepting it and putting the state in charge of how it gets used. They will just tell the platforms how to exercise it, which is not, I think, in anyone’s interest.
