#15 - Fighting fake news: A tale of two platforms (Facebook, pt. 2)

Welcome to the new subscribers, and thank you all for the comments and feedback on my last newsletter — please keep them coming!

As I mentioned in my introductory post, my goal for this experiment is to sharpen my writing and thinking, to meet like-minded people, and to promote healthier discourse. If you know anyone who would be interested in the discussion, please forward this along or have them subscribe.

A tale of two platforms (Facebook, pt. 2)

Last week, I wrote about Twitter’s decision to fact-check President Trump and how that action is consistent with the marketplace of ideas, so long as Twitter applies its fact-checking labels even-handedly. This week is about Facebook’s decision not to fact-check Trump and Facebook’s approach to fake news more broadly.

Facebook’s fact-checking policies

Although Facebook chose not to fact-check Trump, it actually does have a fact-checking program whereby third-party fact-checking organizations like Snopes and PolitiFact review news articles on Facebook. Facebook simply chooses not to fact-check politicians. Its reasoning is as follows:

Our approach is grounded in Facebook's fundamental belief in free expression, respect for the democratic process, and the belief that, especially in mature democracies with a free press, political speech is the most scrutinized speech there is. Just as critically, by limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words.

If we couch this in the language of the marketplace of ideas, Facebook is essentially saying that the marketplace for political ideas is the freest, best-functioning marketplace there is. Political speech is already scrutinized so heavily by the press and the public that Facebook need not play a role in fact-checking it.

I think this is fair. But I disagree that slapping a fact-check label on political figures would “limit political speech,” “leave people less informed,” and “leave politicians less accountable for their words.” At worst, a fact-check label has no negative effect on free speech (after all, people can simply ignore it), and at best, it leaves people more informed and politicians more accountable. From the perspective of the marketplace of ideas, politicians are among the major proxies people use to get reliable news. If politicians become the sole proxies for truth, they become de facto arbiters of truth, and we need them to be fact-checked.

The problem, though, is that Facebook does not merely apply a label to posts that fail a fact-check. It also demotes those posts in News Feed. In my opinion, this action is inconsistent with the marketplace of ideas. While a fact-check label shines light on a piece of news by adding contextual information about what fact-checking organizations think of it, demotion in News Feed ensures that a piece of news never sees the light of day. In other words, a fact-check label allows users to be their own arbiters of truth, but demotion deprives people of that opportunity in the first place. Demotion hampers the ability of news to compete in the marketplace of ideas.

Context everywhere

One of the big products I worked on while I was at Facebook was the Context Button. Essentially, for every news article posted on Facebook, we’d surface a host of contextual information about that article, such as publisher information, where the article was shared, and related articles. We were enabling users to assess this contextual metadata and come to their own conclusions about the credibility and integrity of a piece of news. For instance, was the article only shared in Russia? Might raise some red flags. Or, did the publisher create its Facebook page ten years ago? Probably more credible than if it created its Facebook page ten days ago. The idea behind the Context Button grew, and we began to add contextual information to other things on Facebook, not just news articles. For instance, we began rolling out the Context Button for branded content and political ads. When thinking about what to internally call these inter-related projects, one idea that came to mind was “Context Everywhere.”
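To make the idea concrete, here is a minimal sketch of how contextual signals like these could feed simple credibility heuristics. This is purely illustrative: the metadata fields, thresholds, and red-flag rules are my own invention, not Facebook’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ArticleContext:
    """Hypothetical contextual metadata for a news article (illustrative only)."""
    publisher: str
    page_created: date                  # when the publisher's Facebook page was created
    countries_shared_in: set = field(default_factory=set)
    related_articles: list = field(default_factory=list)


def credibility_flags(ctx: ArticleContext, today: date) -> list:
    """Return human-readable red flags a reader could weigh for themselves.

    The thresholds here are invented for illustration.
    """
    flags = []
    page_age_days = (today - ctx.page_created).days
    if page_age_days < 30:
        flags.append(f"Publisher page is only {page_age_days} days old")
    if len(ctx.countries_shared_in) == 1:
        (only_country,) = ctx.countries_shared_in
        flags.append(f"Article shared only in {only_country}")
    if not ctx.related_articles:
        flags.append("No related coverage from other outlets")
    return flags


# A page created ten days ago, shared only in one country, with no related coverage
ctx = ArticleContext(
    publisher="Example Daily",
    page_created=date(2020, 5, 25),
    countries_shared_in={"RU"},
)
print(credibility_flags(ctx, today=date(2020, 6, 10)))
```

The point of the sketch is that none of these signals decides truth for the reader; they surface context so the reader can decide.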

Of course, this name is cheesy, and we ended up calling it something else, but the idea still resonates with me. The cost to produce and re-produce content is lower now than ever before. We went from papyrus to printing press to printer to Google Docs, from painting to daguerreotype to digital camera to smartphone. Thanks to technology, we are inundated with content, and as the amount of content has increased, so must the time we devote to digesting it. We need a way to sift the truthful wheat from the informational chaff. The problem is that this sifting takes work. As I wrote in A tale of two platforms (Twitter, pt. 1):

In practice, in order to decide what’s true, we often shift the responsibility of sifting through the informational deluge to external actors. In other words, instead of digesting information to be our own arbiters of truth, we (sometimes blindly) rely on external proxies to evaluate “truth” . . . It’s a mental heuristic to cope with the reality that we’re bombarded with information. Often, it’s impossible for us to perform the intellectual due diligence to verify every piece of information we come across . . . The more proxies we rely on, though, the better and more effective is the competition in the marketplace of ideas.

In my mind, the way to empower people to become arbiters of truth is to add proxies, or “context,” everywhere. Behind each piece of content is its own wealth of contextual information. So, no matter how quickly the amount of content grows, the amount of contextual information only grows faster. We just aren’t shown this valuable context. Indeed, it doesn’t even seem like Facebook shows the Context Button on news articles anymore [1].

I envision a world where people consume the news with a dose of skepticism. The Internet has massively reduced the friction to produce, re-produce, and consume content from anywhere and everywhere. As a result, we’ll come across fake news pretty often. But perhaps the best way to combat this is to introduce friction into the way we consume news, by making it a habit to understand any contextual information about the news we see. Perhaps friction isn’t always a bad thing.


  1. At least not for me. Although my News Feed is filled with news articles, I see not a single Context Button on any of them, although I used to see them, even for a while after I left the company. I’m not sure if the feature has been killed or if they’re running a new experiment; maybe my Facebook-employee friends can enlighten me here.

📚 5 articles

The results are in for remote learning: It didn’t work. Many technologists nowadays seem to think that all industries will and must succumb to disruption. Technology has obviously transformed many industries, but some have been resistant to change. Must their resistance ultimately crumble and give way to the incredible force of technology? Education is one of those industries that has remained largely untouched by technological disruption. Some (including me) had originally thought that COVID-19 would accelerate the adoption of online classes. After going through a quarter of online learning, though, I’ve found that it sucks. In-person learning might be coupled with online classes and with different cost structures, but I don’t see learning institutions abolishing it entirely.

IBM bars law enforcement from using its facial recognition technology. Other companies (like Microsoft and Amazon) made similar announcements. I largely agree with this policy, but to play devil’s advocate, here are three reasons against: (1) We’re putting people in jail on the basis of wildly inaccurate eyewitness testimony. Facial recognition technology can provide a helpful supplement in making important identification decisions. (2) Police alone are probably more racist than IBM’s algorithm. (3) If we want algorithms to improve, they need to be used in the wild.

To adapt to tech, we’re heading into the shadows. Fascinating and thought-provoking. Essentially, in order to learn, use, and develop new technologies, we need room to experiment. Unfortunately, red tape and surveillance are choking these opportunities for experimentation, so we’re resorting to increasingly inappropriate and difficult-to-observe methods.

Snapchat announced quite a few changes to its app. And I’m impressed. On one hand, Snapchat is becoming a platform for apps. On the other, Snapchat is also increasingly opening up its API for other apps to use.

Studying China’s ‘re-education’ program for Uighurs. Difficult read. The policy recommendations revolve around more stringent sanctions, increased reporting, and bans on surveillance technology.