Should social media be held responsible for spreading harmful content?
The Internet sector accounts for about 10 percent of U.S. GDP, contributing $2.1 trillion to the nation’s economy and supporting more than 18 million jobs. This vibrant sector has boomed in the United States, far more than in European nations, primarily because of the legal protections granted by Section 230 of the Communications Decency Act (CDA).
Put simply, Section 230 states that online service providers and intermediaries, such as YouTube, Facebook, Twitter, or any online publisher, cannot be held legally responsible for what users say or do on their platforms. Without Section 230, platforms could be sued for what users publish on their websites and would therefore be pushed to heavily moderate or censor content to avoid legal trouble.
By removing the burden of legal responsibility for users’ content, the CDA helped online service providers innovate and expand rapidly. In recent years, however, with misinformation and harmful content spreading rapidly across the Internet, the CDA has become a hot topic among politicians on both sides of the aisle. Republican representatives advocate eliminating the protections granted by Section 230, arguing that online platforms are biased toward censoring conservatives, while their Democratic counterparts claim that platforms are not doing enough to moderate harmful content.
Reexamining Section 230 is essential to bringing the law up to date with technology. Any change to this legislation, however, will carry significant implications. If websites are held legally responsible, they will inevitably grow more cautious about publishing users’ content without verification or modification. Reviewing and moderating content at scale is extremely resource-intensive and requires advanced artificial intelligence (AI) algorithms to automatically detect and act on illegal content. Developing such accurate algorithms, however, requires enormous amounts of data, which only a handful of giant tech companies possess. The consequence would be perverse: even more power for already dominant companies.
On the one hand, holding platforms responsible for spreading misinformation and harmful content is imperative to maintain a safe and respectful cyberspace. On the other, encouraging content moderation may lead to the violation of free speech rights and would almost certainly widen the gap between big players and new entrants.
A reexamination of Section 230 is necessary; before any major modification to the CDA, however, more fundamental issues must be addressed:
1- Stronger Antitrust Legislation:
Concentration of power is one of the main reasons Big Tech companies have so much control over what users see, access, and hear as news. Stronger antitrust laws would support innovation and lower barriers to entry in the Internet sector, reducing the power online platforms hold over content.
2- A More Definite Interpretation of Free Speech:
In the Internet age, what falls under the umbrella of free speech? An open invitation to mass murder or genocide? Online child exploitation? Hate speech and bigotry? Exercising the right to free speech in parks and streets is far less damaging than broadcasting harmful content that is instantly available to tens of millions of people. Online content therefore demands a stricter interpretation of freedom of speech.
Repealing Section 230 of the Communications Decency Act would impose severe consequences on the growth and innovation of the Internet sector. Instead, Congress must consider a careful revision of Section 230 to ensure it fulfills its original goal of promoting growth and competition in the online business sector while simultaneously securing public safety and safeguarding our First Amendment right to freedom of speech.