One of Meta’s biggest-ever changes to how it fact-checks information on its platforms begins next week. Meta’s version of Community Notes will start rolling out to Facebook, Instagram, and Threads users in the United States on March 18. The program is based on a crowdsourced fact-checking system that Twitter introduced in 2021; after Elon Musk rebranded the platform as X, Community Notes became its primary means of correcting false information.
Meta executives say they are focused on refining Community Notes in the United States before expanding the feature to other countries. It’s a high-stakes testing ground, given that the U.S. is Meta’s most lucrative market, but Meta may be hesitant to roll out Community Notes in other regions such as the European Union, where the European Commission is currently investigating X over the effectiveness of its Community Notes feature.
The move could also signal Meta CEO Mark Zuckerberg’s eagerness to appease the Trump administration, which has previously criticized Meta for censoring conservative viewpoints.
Zuckerberg first announced these changes in January, framing them as an effort to promote a wider range of viewpoints on Meta’s platforms.
Meta has used third-party fact-checkers to verify information on its platforms since 2016, but Neil Potts, Meta’s VP of Public Policy, told reporters at a briefing on Wednesday that those systems were too biased, made too many mistakes, and could not scale. As an example, Potts said Meta had wrongly applied fact-checking labels to opinion pieces on climate change published by Fox News and the Wall Street Journal.
In another instance, Zuckerberg said recently on Joe Rogan’s podcast that Meta was wrong to dismiss concerns about COVID-19 vaccines. Meta hopes Community Notes will counter the public’s perception of bias, make fewer errors, and scale well enough to address more misinformation.
Meta notes, however, that this system does not affect its Community Standards, the company’s guidelines for determining whether posts constitute hate speech, scams, or other prohibited content. The overhaul of Meta’s fact-checking procedures comes at a time when numerous tech companies are trying to address what they perceive as historical biases against conservatives.
X has led that industry shift, with Elon Musk positioning his social platform around “free speech.” OpenAI, meanwhile, recently announced it was changing how it trains AI models to embrace “intellectual freedom” and said it would work not to censor certain viewpoints.
At the Wednesday briefing, Rachel Lambert, Meta’s Director of Product Management, said Meta is using the open-source algorithms behind X’s Community Notes as the foundation for its new fact-checking system. Meta opened applications for Community Notes contributors in February.
Contributors will be able to suggest notes that directly fact-check claims in posts on Facebook, Instagram, or Threads. Other contributors will then rate each note as helpful or unhelpful, which influences how visible it becomes. Like X’s system, Meta’s Community Notes tracks which contributors typically disagree with one another. Using that information, Meta will display a note only if contributors from sides that usually oppose each other agree it is helpful.
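The cross-viewpoint requirement described above can be sketched in a few lines. This is a deliberately simplified toy illustration, not Meta’s or X’s actual algorithm (the open-source Community Notes ranker models contributors and notes with matrix factorization rather than fixed clusters); the two-cluster labels and the average-rating threshold here are assumptions for the sake of the example:

```python
# Toy sketch of "bridging-based" consensus: a note is shown only if
# contributors from viewpoint clusters that typically disagree with each
# other BOTH rate it helpful on average. Not the real implementation.

def note_status(ratings, threshold=0.5):
    """ratings: list of (cluster, helpful) pairs, where cluster labels
    contributors who typically disagree (e.g. "A" vs. "B") and helpful
    is a bool. Returns "show", "hide", or "needs more ratings"."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)

    # Without ratings from at least two opposing clusters, there is
    # no cross-viewpoint signal, so the note stays unpublished.
    if len(by_cluster) < 2:
        return "needs more ratings"

    # Publish only if EVERY cluster's average rating clears the bar.
    if all(sum(votes) / len(votes) >= threshold for votes in by_cluster.values()):
        return "show"
    return "hide"


print(note_status([("A", True), ("A", True), ("B", True), ("B", False)]))  # show
print(note_status([("A", True), ("A", True), ("B", False), ("B", False)]))  # hide
```

The key design point the sketch captures is that raw majority support is not enough: a note rated helpful by many contributors on one side but unhelpful by the other side is hidden, which is why (as noted below) relatively few suggested notes ever get displayed.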
Even a majority of contributors agreeing that a Community Note is needed does not guarantee one will be displayed. Meta also says its algorithms will not lower the ranking of a post or account that carries a community note. Community Notes and other crowdsourced systems have long been viewed as a promising approach to combating social media misinformation, but they come with drawbacks. On the positive side, a study published in the journal Science found that people tend to view Community Notes as more trustworthy than flags from third-party fact checkers.
In another extensive study of X’s fact-checking system, researchers from the University of Luxembourg found that Community Notes slowed the spread of misleading posts by an average of 61%. However, notes often arrive too late, or too few posts receive them at all.
Because X, and soon Meta, require Community Notes to reach consensus among contributors with opposing viewpoints, fact-checks are typically added only after a post has already reached thousands or millions of people. The same University of Luxembourg study also found that Community Notes may be too slow to intervene in the earliest, most viral stage of a post’s lifespan.
A recent study by the Center for Countering Digital Hate highlights the problem. Examining a sample of X posts containing election-related misinformation, researchers found that contributors’ suggested notes provided accurate and pertinent information 81% of the time. However, only 9% of the posts with suggested notes ever reached contributor consensus, meaning the vast majority received no visible fact check at all.