#3 - Section 230 (content moderation)
Welcome to the new subscribers, and thank you all for the comments and feedback on my last newsletter — please keep them coming!
As I mentioned in my introductory post, my goal for this experiment is to sharpen my writing and thinking, to meet like-minded people, and to promote healthier discourse. If you know anyone who would be interested in the discussion, please forward this along or have them subscribe.
📰 1 topic: Section 230
Joe Biden recently sat down with The New York Times for an interview. In it, he said: “Section 230 should be revoked, immediately should be revoked, number one. For [Facebook] and other platforms.”
Section 230 is a provision of the Communications Decency Act of 1996. It essentially states that an online platform cannot be held liable for (1) content posted by its users; or (2) any good-faith attempt to remove material it considers objectionable. For instance, thanks to Section 230, Facebook is not liable for failing to take down 100% of hate speech.
One important note about Section 230: it applies not only to online tech platforms like Facebook, but also to other players in the online services stack, including Internet service providers (like Comcast), Internet hosting companies (like GoDaddy), infrastructure / web security companies (like Cloudflare), and search engines (like Google).
The arguments in favor of Section 230 as it stands:
The government can always impose legislation on top of Section 230 to police particular types of content. For example, in 2018 Congress passed the Stop Enabling Sex Traffickers Act (SESTA) and the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), which carved out an exception making platforms liable for content that facilitates sex trafficking. In response, platforms removed all content remotely connected to sex work. In essence, the argument goes: Section 230 acts as the base principle broadly governing content moderation on the Internet, and additional legislation can target specific types of content.
The dangers of excessive filtering. If the law imposed broad liability on platforms both for what their users say and for how the platform moderates it, platforms might err on the side of over-removal. Facebook would have to either institute widespread filtering tuned to overcompensate, delivering a massive number of false positives, or stop hosting user-generated content altogether (see the toy sketch after these arguments).
The remedy for bad speech is more speech. U.S. Supreme Court Justice Anthony Kennedy put it best in United States v. Alvarez: “The remedy for speech that is false is speech that is true. This is the ordinary course in a free society. The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth . . . The theory of our Constitution is ‘that the best test of truth is the power of the thought to get itself accepted in the competition of the market’ . . . Only a weak society needs government protection or intervention before it pursues its resolve to preserve the truth. Truth needs neither handcuffs nor a badge for its vindication.” The argument here is that by refusing to impose liability, Section 230 allows more speech to proliferate. Of course, some take this even further and argue that Section 230 should prohibit companies from filtering at all (see below).
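To make the over-filtering argument concrete, here's a toy sketch in Python. Everything in it is hypothetical (the posts, scores, and thresholds are invented for illustration; this is not any platform's actual system). It just shows how a liability regime that punishes misses pushes a moderation threshold downward, sweeping up legitimate speech as false positives:

```python
# Toy illustration: how liability pressure pushes a moderation threshold
# toward over-blocking. All posts, scores, and thresholds are invented.

# (text, harmfulness score from a hypothetical classifier, actually harmful?)
posts = [
    ("news report quoting a slur", 0.62, False),    # legitimate in context
    ("satire of extremist rhetoric", 0.55, False),  # legitimate in context
    ("actual hate speech", 0.71, True),
    ("vacation photo caption", 0.05, False),
    ("borderline harassment", 0.58, True),
]

def moderate(threshold):
    """Remove every post scoring at or above the threshold; report outcomes."""
    removed = [(text, harmful) for text, score, harmful in posts if score >= threshold]
    false_positives = sum(1 for _, harmful in removed if not harmful)
    missed = sum(1 for _, score, harmful in posts if harmful and score < threshold)
    return len(removed), false_positives, missed

# Without broad liability: tolerate an occasional miss to avoid silencing speech.
print(moderate(0.70))  # (1, 0, 1): one removal, no false positives, one miss

# With liability for every miss: overcompensate until nothing slips through,
# taking the news report and the satire down along with the harassment.
print(moderate(0.50))  # (4, 2, 0): four removals, two of them false positives
```

The numbers here are arbitrary, but the direction of the pressure is the point: when a miss costs the platform more than a false positive does, the rational platform over-blocks.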
The arguments against Section 230 as it stands (either that the law should impose more requirements on content moderation, or that it should impose fewer):
Ethical responsibility. The law should hold platforms responsible for radically divisive speech that makes it past their content filters. For instance, 8chan should be held responsible for allowing the El Paso shooter to post his manifesto on its platform. Proponents of this argument applauded GoDaddy and Google for refusing to serve the Daily Stormer, a neo-Nazi site.
Sufficient deterrence. Imposing more liability incentivizes platforms and services to build out more infrastructure to catch and ban problematic speech.
Arbitrary exercise of power. Some argue that platforms should be even more neutral, or perhaps completely neutral, because platforms with power can otherwise exercise it in arbitrary ways. For instance, Ted Cruz has criticized Facebook for silencing conservative viewpoints. As Cloudflare CEO Matthew Prince candidly admitted after choosing to terminate protection of the Daily Stormer: “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” Platforms and services that exercise their own discretion to moderate content may ultimately discriminate against and disadvantage particular groups.
Different principles for different parts of the stack. Internet infrastructure providers at the bottom of the stack (like Comcast and Cloudflare) should not be making decisions about whether to moderate content; user-facing content platforms at the top of the stack (like Facebook) should. There are two reasons for this: (1) infrastructure providers, relative to content platforms, lack transparency and are not accountable to end users; (2) innovation depends on the bottom of the stack, so allowing infrastructure providers to decide what to moderate carries a higher risk of stifling it.
I am personally in favor of keeping Section 230. If we do want to ban certain types of speech online, I think we should be narrow and crisp in our definitions of harmful content, and we should go through the democratic process. If I had to move Section 230 in any direction, I'd move it toward less regulation: platforms should not be incentivized to exercise power and discretion in regulating content. Above all, I'm mindful that such incredible power seems great when it's working in your favor and awful when it flips to the other side. I'd rather remove this chokehold of power in the first place and let the people exercise it democratically.
What are your thoughts? What arguments am I missing?
📚 5 articles
Goldman Sachs won’t take a company public without a diverse director. We need more diversity in new companies building new things. Without minority perspectives, we only build things that serve the majority.
Clearview AI and facial recognition. A really provocative company, certainly worth thinking and writing about. One thing I'm wondering about: how Peter Thiel's decision to invest squares with the libertarian view of privacy.
Uber lets drivers set rates. Very pertinent to the last issue.
Clay Christensen passed away. Christensen was the father of disruption theory (see The Innovator's Dilemma), a great business thinker, and one of the most popular professors at HBS. Aside from being a business guru, he was also well known for his holistic, integrated approach to life. A great quote from him: "Don't worry about the level of individual prominence you have achieved; worry about the individuals you have helped become better people . . . Think about the metric by which your life will be judged, and make a resolution to live every day so that in the end, your life will be judged a success."
Seattle residents can now vote using their smartphones. While I'm largely in favor of reducing barriers to voting to boost turnout, one thing I'll be keeping my eye on here is cybersecurity.
Like this post? Please share or subscribe to get an issue of this newsletter delivered to your inbox once every other week. Have questions or comments? Feel free to email me.