#9 - The Facebook naysayer conundrum
Welcome to the new subscribers, and thank you all for the comments and feedback on my last newsletter — please keep them coming!
As I mentioned in my introductory post, my goal for this experiment is to sharpen my writing and thinking, to meet like-minded people, and to promote healthier discourse. If you know anyone who would be interested in the discussion, please forward this along or have them subscribe.
📰 1 topic: The Facebook naysayer conundrum
Many of the criticisms leveled against Facebook are logically incoherent. Yes, it’s been pretty fashionable over the past five years to dump on Facebook, and Facebook certainly isn’t without its flaws. That said, we can’t have a healthy conversation about Facebook without clear principles about the company and the social media landscape.
Critics have been lobbing three simultaneous attacks at the company: (1) Facebook is too big, so break it up; (2) Facebook needs to take much better care of people’s data; (3) Facebook needs to take more control over problematic speech on its platform. Point (1) is fundamentally in tension with points (2) and (3). That is, tighter privacy and tighter control over problematic speech both favor larger company size.
Let’s start with privacy and company size. The obvious Facebook gaffe is the Cambridge Analytica scandal, which was an outgrowth of Facebook’s lax data practices in its early years: back in the early 2010s, Facebook shared a great deal of user data with third-party platforms and developers. The resulting backlash called for Facebook to take better care of user data. However, an emphasis on privacy and the non-leakage of data means that the platforms that already hold all the user data become increasingly entrenched. That is, a blanket lockdown on privacy only raises the walls of the walled gardens. Here’s the tension: calling on companies to build higher walled gardens for privacy impedes competition, a direct contradiction of the call for smaller company size. If you want to break up Facebook and promote competition in the social media market, you probably want data interoperability (ensuring data can be shared among competing and similar networks). That interoperability seems anathema to privacy buffs but is essential to those calling for breakups.
Now, controlling problematic speech and company size. First, let’s acknowledge how difficult a problem ‘problematic speech’ is. How do you define it? How do you detect it? Facebook and other social media companies have billions of users spread across the globe. Tackling this problem requires a high degree of collaboration with governments and other stakeholders, plus heavy investment in engineering talent and fact checkers. That said, this nearly intractable problem is not one Facebook has shied away from (at least since 2016). On the contrary, Facebook has thrown an incredible amount of money at it. One year ago, Mark Zuckerberg said, “the amount of our budget that goes toward our safety systems is greater than Twitter’s whole revenue this year. We’re able to do things that I think are just not possible for other folks to do.” Alex Stamos, the former chief security officer at Facebook who left over Facebook’s handling of misinformation, has repeatedly said that breaking up Facebook doesn’t solve the misinformation or hate speech problem, and that asking Facebook to exercise more control over speech actually gives it more power. Indeed, if you want more control over speech, a dominant Facebook should be exactly what you want.
In short, if you’re concerned about Facebook’s dominance, then you should logically be advocating for a competitive marketplace in which platforms differentiate on factors like privacy and the speech they accept. If not, then perhaps you’re advocating for a single standard of privacy and speech that all companies in a competitive market must adhere to? If so, you’re effectively asking the government to step in and regulate privacy and speech. Although that position opens a nasty can of worms with respect to our constitutional rights, at least it’s a bit more nuanced and logically coherent. If that’s your position, though, at least be up front about it.
Note that I wrote generally about breaking up big tech here and about speech on platforms here.
📚 5 articles
How Chinese media shapes the conversation about coronavirus.
American higher education is a Ponzi scheme. A contrarian, somewhat inflammatory take, but a very interesting read. I disagree with it in many respects, but I do agree that American higher education is becoming bloated with administrative fat. The cost of higher education is rising, and for many the returns no longer justify it.
COVID-19 and forced experiments. Ben Evans is a role model for me, and his writing is a pleasure to read. Last week, he expounded on the idea that COVID-19 is forcing us to experiment with new technologies, and asked whether, and how much, these changes will stick. I wrote about something similar a couple of weeks ago.
A startup built entirely on top of Zoom. Grain enables users to cut and string together snippets of video from a Zoom call into a highlight reel / summary. It just raised $4m. I’m thinking about all the other types of businesses that will be started on Zoom. Apparently, haircuts are starting to happen via video chat, too.
Verifiability in AI Development. An interesting set of recommendations from OpenAI about how various stakeholders can verify and investigate the performance of a machine learning model. One observation: many of the recommendations require collaboration between industry, academia, and government to develop shared standards. This need for collaboration has been a recurring sub-theme throughout this newsletter.