Google warns US Supreme Court that messing up Section 230 of the Communications Decency Act could bankrupt the internet and cause devastating consequences


Last Thursday, Google filed a key defense brief in a US Supreme Court case that could reshape the legal landscape for publishers and online services. Google told the court that changing Section 230 of the Communications Decency Act, which protects companies from liability for content their users post, would “undermine a central element of the internet”. The case, Gonzalez v. Google, which the Supreme Court will hear next month, will decide whether Section 230’s protections apply to the algorithms that YouTube and other platforms use to select the content they show users. An unfavorable decision in this case involving YouTube’s recommendation engine could have unintended consequences for much of the internet, the search giant said.

Section 230 of the Communications Decency Act shields companies from liability for content posted by their users and allows online platforms to moderate content in good faith without being held responsible for their users’ messages. Tech platforms argue that this is an essential safeguard, especially for smaller platforms that could otherwise face costly legal battles, since the scale of social media makes it difficult to address every harmful message immediately.

But the law has been hotly debated in the US Congress, with lawmakers from both parties saying liability protections should be severely limited. While many Republicans believe the law’s content moderation provisions should be watered down to reduce what they see as censorship of conservative voices, many Democrats question how the law can protect platforms that host misinformation and hate speech.

The Supreme Court case, known as Gonzalez v. Google, was brought by the family of Nohemi Gonzalez, an American citizen killed in the 2015 terrorist attacks in Paris, for which ISIS claimed responsibility. The complaint alleges that Google-owned YouTube failed to prevent ISIS from spreading content on the video-sharing site in support of its propaganda and recruitment efforts. The plaintiffs sued Google under the Anti-Terrorism Act of 1990, which allows US citizens injured by terrorism to seek damages. The law was updated in 2016 to add secondary civil liability for anyone who aids and abets an act of international terrorism by knowingly providing substantial assistance.

Today, the Gonzalez family hopes the high court will agree that Section 230’s protections, designed to shield websites from liability for hosting third-party content, should not extend to platforms’ recommendations of harmful content. But Google thinks that is exactly how liability protection should work. In the lawsuit, Google argued that Section 230 protects YouTube’s recommendation engine as a legitimate means of facilitating access to other people’s communications and content.

Section 230 broadly protects technology platforms from lawsuits over their content moderation decisions. However, a Supreme Court ruling that algorithm-based recommendations lack these protections could threaten essential internet functions, Google wrote in its filing. Websites like Google and Etsy depend on algorithms to sift through mountains of user-generated content and display what is likely to be relevant to each user. If plaintiffs could circumvent Section 230 by targeting the way websites classify content, or by trying to hold users liable for liking or sharing articles, “the Internet would become a disorganized mess and a minefield for litigation”, the company writes.

Faced with such a decision, websites might have to choose between deliberately over-moderating their pages, removing anything that could be perceived as objectionable, or doing no moderation at all to avoid any risk of liability, Google argued. In its petition, Google said that YouTube abhors terrorism and cited its increasingly effective actions to limit the spread of terrorist content on its platform, before insisting that the company cannot be sued over recommended videos because of the liability protection provided by Section 230.

Gonzalez v. Google is considered a landmark case for content moderation and one of the first Supreme Court cases to consider Section 230 since it was enacted in 1996. Several Supreme Court justices have expressed a desire to rule on the law, which has been interpreted broadly by the courts, supported by the technology industry and harshly criticized by politicians of both parties.

Google argues that it is Congress, not the Supreme Court, that should decide whether to reform Section 230. In a legal brief filed last month, the Biden administration argued that Section 230’s protections should not extend to recommendation algorithms. President Joe Biden has long called for changes to Section 230, saying tech platforms should take more responsibility for the content that appears on their websites. As recently as Tuesday, Biden published an op-ed in The Wall Street Journal urging Congress to change Section 230.

But in a blog post on Thursday, Halimah DeLaine Prado, Google’s general counsel, argued that narrowing Section 230 would increase the threat of lawsuits against online services and small businesses, and would curb free expression and economic activity on the internet. Services could become less useful and less reliable as efforts to eliminate scams, fraud, conspiracies, malware, violence, harassment and more are stifled, DeLaine Prado wrote.

Source: US Supreme Court

And you?

What is your opinion on this topic?

See also:

Supreme Court blocks Texas social media moderation ban, legal battle over HB 20 continues

New algorithm bill could force Facebook to change how News Feed works without changing Section 230

Experts tell US senators that social media algorithms threaten democracy, but Facebook, YouTube and Twitter challenge that claim
