Explaining Section 230 (and its connection to recommendation algorithms)

Editor’s note: This is one of the occasional posts we do speaking with someone who has something to say on topics we find interesting. If you are seeing this and are not yet signed up to Deepnews, click the button below to start receiving our blog posts every week, and a Digest of quality news on an important subject every Friday.



By Christopher Brennan


If you haven’t heard of Section 230 before, get ready.


Last night U.S. President Donald Trump said he was vetoing a defense spending bill until Congress had “completely terminated” the section, which, in short, provides entities such as social media websites a legal shield against being sued for content posted by their users.


Trump has repeatedly claimed the legislation, a bedrock of the modern internet, is biased, though dislikes and defenses of Section 230 are not neatly divided along America’s normal partisan lines. President-elect Joe Biden said in an interview at the beginning of this year that it should be “revoked, immediately” and cited misinformation on Facebook.


With further calls for change likely, it is worth examining the issues at play. There is also a Deepnews angle, as some of the criticism has focused not just on companies taking down (or declining to take down) offending posts, but on their recommending posts to users.


I had actually started working on this blog post earlier in the week before the defense bill tweets, and had reached out to Professor Eric Goldman at Santa Clara University School of Law, a widely cited expert on Section 230 who has been working in internet law for decades and has written about many of its cases over at his blog. Below is a look at the section in question with some thoughts from Goldman about what it really means.


OK. So what is Section 230? 


Section 230 is part of the Communications Decency Act of 1996, which was passed during the relative beginnings of the internet and aimed at regulating pornography online. The anti-indecency parts of that law were struck down by the U.S. Supreme Court in 1997 for violating the First Amendment. Section 230, however, remains.

“Section 230 is actually a pretty simple concept. It says that websites aren’t liable for third party content. Meaning someone who submits content is liable for it, but people who are otherwise part of the distribution or publication chain aren’t,” Goldman said.

“It applies to providers and users of ‘interactive computer services,’ which, as I teach in my internet law class, is basically everyone who’s online. As long as they’re online, Section 230 applies.”



Proponents of Section 230 say that protecting companies has allowed innovation on the internet to flourish because they are not afraid of being sued for user-generated content.


Well, that sounds relatively reasonable. What are the objections to it?



Current criticism of Section 230 is wide ranging, though the complaints drawing the most attention from politicians are those targeting the biggest platforms, including Facebook and Twitter. To oversimplify the distinction: certain Republican lawmakers believe that these networks are biased against conservative speech and blame Section 230 for protecting companies as they take down content that should remain up. This is one of the motivations behind a recent proposal from the Trump administration. Certain Democrats believe that these same networks have not done enough to take down problematic content such as misinformation and hateful posts.


There are several other proposed measures that would change Section 230. One of the big ones, the EARN IT Act, passed out of a Senate committee this summer with bipartisan support. It originally said that companies had to “earn” the Section 230 liability protection by following rules created by a government committee. That quid pro quo has been eliminated in the latest version, though the bill would still diminish Section 230. While the EARN IT Act is offered as an anti-child-abuse bill, other legislative ideas address consumer protection concerns or misinformation.


You mentioned recommendations up above. What does Section 230 have to do with that?


In addition to political leaders criticizing the section itself, there have been attempts to get around it. These include the argument that while Section 230 protects websites posting third-party content from users, some of what websites currently do goes beyond that: they recommend posts to users through algorithms that (unlike Deepnews, which is focused on quality) are often aimed at generating engagement, and through engagement, profit.
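To make that distinction concrete, here is a minimal sketch, not drawn from any platform’s actual code, of two ways to rank the same feed of user posts: one ordered by a predicted-engagement signal, the other by a quality estimate of the kind Deepnews computes. The field names and scores are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # hypothetical engagement signal
    quality_score: float     # hypothetical editorial-quality signal

def rank_by_engagement(posts):
    """Order the feed by expected clicks, the objective critics focus on."""
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

def rank_by_quality(posts):
    """Order the feed by an estimate of editorial quality instead."""
    return sorted(posts, key=lambda p: p.quality_score, reverse=True)

feed = [
    Post("Outrage bait", predicted_clicks=0.9, quality_score=0.2),
    Post("In-depth report", predicted_clicks=0.3, quality_score=0.9),
]
print([p.text for p in rank_by_engagement(feed)])  # ['Outrage bait', 'In-depth report']
print([p.text for p in rank_by_quality(feed)])     # ['In-depth report', 'Outrage bait']
```

Both functions are “publication decisions” in the sense discussed below; the legal question is whether the choice of objective changes the protection.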


“Facebook takes that posting, and they make a business decision. They use an algorithm, and they decide how to disseminate it, to monetize it,” Rep. David Cicilline of Rhode Island told the WSJ last week. “One could argue they shouldn’t be protected from liability in making that business decision.”


Arguments about recommendation and Section 230 have been offered in court, though Goldman says that he has yet to see a case where the courts agreed with them. One prominent example was against Ultimate Software, which was sued by the mother of a Florida man who died after connecting to a drug dealer through a website called the Experience Project, where users anonymously shared their experiences. The court rejected the mother’s claim that the Experience Project should lose immunity because the recommendation function was “specifically designed to make subjective, editorial decisions about users based on their posts.”


“You use the term recommendations, and that can mean a lot of different things. So, I tend to use a different vernacular and the semantics here matter quite a bit. I tend to think about Section 230 protecting ‘publication decisions,’” Goldman said.


“There are things you’ll promote internally within a publication, to say ‘This is a teaser, go find the rest of it somewhere else.’ These are all publication decisions that we recognize in the offline world pretty intuitively, and internet services do the same thing when it comes to third-party content. They make publication decisions. Not just the go, no-go, but all the other associated editorial decisions. And Section 230 protects all of those.”


Though companies like Facebook have resisted being “arbiters of truth,” an increased focus on misinformation in the last few years has given rise to discussion of social networks making “editorial decisions” through their actions. Much as the Deepnews Scoring Model highlights articles based on editorial standards, those “editorial decisions” include algorithmic choices, such as choosing not to crack down on divisive content. But those algorithmic choices about what to display to whom also fit Goldman’s idea of the “publication decisions” that Section 230 protects.


“The platform establishes a taxonomy, and then the user content populates that taxonomy. Just like if you go back to the old newspapers, there was a metro section or a sports section or a business section or an entertainment section,” he said. 


“These were all nodes in the taxonomy of how to organize the content. Same kind of thing with internet services. They may build a taxonomy, and then users come in and populate it with their content.”
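Goldman’s taxonomy analogy can be pictured as a simple data structure: the service defines the sections, and third parties fill them. This is a hypothetical sketch with invented section names, not any service’s real schema.

```python
# The platform defines the taxonomy (the "sections"), empty of content.
taxonomy = {"sports": [], "business": [], "entertainment": []}

def publish(section, post_text):
    """A user files their own post under one of the platform's sections."""
    if section not in taxonomy:
        raise ValueError("unknown section: " + section)
    taxonomy[section].append(post_text)

# Third parties, not the platform, supply the content that populates it.
publish("sports", "Local team wins the derby")
publish("business", "Startup announces a new funding round")
```

The platform’s only contribution here is the structure itself, which is the kind of organizational choice Goldman says courts have treated as a protected publication decision.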


Hmmm. It seems pretty difficult, with Section 230 in place, to regulate companies based on the speech on their platforms. Is that true?


Yes, though it goes beyond being a Section 230 issue and can be a First Amendment, freedom-of-speech issue. That is, specifically, American free-speech law, as opposed to the countries where many of our European readers live, whose governments can regulate speech more. Even without Section 230, a law trying to make companies liable for political disinformation from users on their platforms, for example, could be challenged on First Amendment grounds.


But a law like the EARN IT Act as originally written, a quid pro quo where companies and their algorithms follow certain standards in order to receive Section 230 immunity, could also run into First Amendment problems, according to Goldman. He said that a case from the world of trademarks struck down a restriction in which a government benefit, a trademark, was denied to those using certain speech, in that case something “immoral” or “scandalous,” because the denial violated freedom of speech.


One place where the professor said companies may not be protected under Section 230 is where the algorithm itself creates the harm. “Imagine that an advertiser runs a job listing saying, ‘I want to reach everyone in Palo Alto, California.’ But Facebook’s algorithms, for whatever reason, whether it was by design or through the unintended consequences of machine learning, only show the job listing to users under 25,” he said.


“So now, the ad has had a potentially discriminatory impact on the employment pool, and the advertiser didn’t do anything wrong. Could Section 230 fall away under that circumstance? I don’t know. But that would be, I think, a better example where the algorithm creates the potential harm. When the third party didn’t create that at all.”
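A rough sketch of that hypothetical, with invented names and numbers, shows how a gap can open up between the audience the advertiser requested and the audience the delivery model actually reaches:

```python
from dataclasses import dataclass

@dataclass
class User:
    city: str
    age: int
    predicted_click_rate: float  # learned by the platform; may correlate with age

def advertiser_audience(users):
    """What the advertiser asked for: everyone in Palo Alto."""
    return [u for u in users if u.city == "Palo Alto"]

def delivery_algorithm(audience, slots):
    """What the platform delivers: only the users its model expects to click,
    which can skew young even though the advertiser never targeted by age."""
    ranked = sorted(audience, key=lambda u: u.predicted_click_rate, reverse=True)
    return ranked[:slots]

users = [
    User("Palo Alto", 22, 0.40),
    User("Palo Alto", 24, 0.35),
    User("Palo Alto", 51, 0.05),
    User("Palo Alto", 63, 0.04),
]
shown = delivery_algorithm(advertiser_audience(users), slots=2)
print([u.age for u in shown])  # [22, 24]: the skew came from the model, not the ad buy
```

In this sketch the discriminatory filtering is introduced entirely by the platform’s own model, which is why Goldman suggests Section 230 might not reach it.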


So, do all of these worries about the First Amendment and freedom of expression online mean that there won’t be attempts to change Section 230 in some way?


Not at all. In fact, Goldman is worried about the future of the section and what changes to it could mean for the internet.


“I’ll be candid, I’m scared for Section 230. I’m worried that Congress is going to do something really stupid in 2021, and I think we have to have a society-wide conversation about the things we love on the internet and the things we don’t want Congress to screw up,” he said.


His imagined future could include the “Netflixization” of the internet, where the web is dominated by large, commercial databases of professionally produced content. It could mean not just increased checks and authentication for users to post online (already being talked about because of AI-generated content) but the exclusion of those who don’t bring the benefit of their own audiences to the platforms, leaving mostly celebrities and brands.


At the same time, Goldman takes concerns such as misinformation seriously, but holds that Section 230 could be part of the solution.

“The idea is that if we want internet services to combat misinformation, which Congress may not be able to regulate, or the states may not be able to regulate, under the First Amendment, Section 230 is the tool that allows them to do this, in that it allows the internet services to decide what steps they are going to take to combat disinformation,” he said.


“Exactly. Section 230 enables those solutions to be developed and rolled out as soon as the technology permits, as opposed to legal code, which takes years to develop and then takes additional time for the community to respond to it.”

Leaving companies the ability to try things out and take different approaches also means that users can choose what sort of platform they want to be part of. If users are more aware of platforms making editorial choices, just like with magazines or newspapers, they can choose the sort of online information world they live in.