Facebook’s anti-fake news measures are being tested during the COVID-19 outbreak. Social media has become a hotbed of mis- and disinformation as the virus has spread worldwide, and since this misinformation relates to matters of public health and has the potential to influence a person’s decision to seek treatment, platforms have been forced to alter their approaches to prevent the spread from getting worse.
With 641 million daily active users in Asia-Pacific, Facebook dwarfs every other platform, and certainly every news provider, as one of the largest sources of information in the region, and therefore arguably bears the biggest responsibility for ensuring the information on its platform is accurate. It added 14 million users in the fourth quarter of 2019 alone.
Always the distributor and never the editor, Facebook has rarely removed content from its platform unless it is in clear violation of its rules, an approach that has drawn much criticism, particularly over its handling of political advertising. But during the novel coronavirus outbreak, removal has become commonplace. Facebook only removes misinformation “that may contribute to physical harm”; other types of misinformation simply have their distribution reduced. It has been focusing on claims that have been debunked by the World Health Organisation or other credible health experts and are most likely to result in someone getting sick or not seeking treatment.
Facebook has also introduced new tools during the COVID-19 outbreak, including a pop-up that links to credible health information that will be surfaced when people search for information related to the virus on Facebook or tap a related hashtag on Instagram. The module has been launched in the Philippines, Macau, Thailand, Vietnam, Singapore, Hong Kong, Japan, Malaysia and Taiwan.
Facebook has also been sharing aggregated and anonymised mobility data and high-resolution population density maps with researchers at Harvard University’s School of Public Health and National Tsing Hua University in Taiwan, to help inform their forecasting models for the spread of the virus.
But the company is mainly reliant on its fact-checking program to stymie the spread of fake news related to COVID-19 across its platforms. In Asia-Pacific, Facebook has 27 fact-checking partners across 11 countries and territories. AFP is a fact-checking partner in 10 of the 11 (all except Taiwan), and is the sole partner in five: Hong Kong, Malaysia, Pakistan, Singapore and Thailand. That is slightly more than the 26 fact-checking partners Facebook has across 16 countries in Europe; it has seven partners in the US.
So how effective is Facebook’s fact-checking program in identifying and reducing misinformation? Campaign Asia-Pacific has interviewed AFP Fact Check and infiltrated a number of coronavirus Facebook groups to paint a picture of its process, and its flaws.
The fact-check partner view
AFP Fact Check was established in 2018 in response to a “growing tide of online disinformation”, specifically on social media, says Rachel Blundy, Hong Kong editor for AFP Fact Check. While it has worked through various spikes in misinformation before, during the Hong Kong protests for example, the COVID-19 outbreak is likely the biggest story it has handled so far.
The news service has seen a “wave of misinformation” in Asia about COVID-19 since the middle of January, Blundy says, ranging from false prevention tips to bogus cures to xenophobic claims.
“Initially, we saw a lot of misinformation about the origins of the virus, and how it was affecting people in the Chinese city of Wuhan,” Blundy says. “Then, as the virus started to spread to other countries around the world, we have seen misleading social media posts about how people can prevent themselves from becoming infected, as well as alternative 'cures'. We've seen images and videos being taken out of context or misrepresented throughout the outbreak. A lot of posts have been xenophobic in tone, suggesting the virus has a specific connection to people of Chinese ethnicity, which it obviously doesn't.”
Coping with the surge in demand “has certainly been a challenge,” Blundy says. In the majority of its 10 APAC territories, AFP has only one fact-check reporter. With a few satellite editors and a team of five editors in Hong Kong, it counts 18 staff in total in its APAC fact-check division. During the COVID-19 outbreak, it has been leaning on support from Fact Check’s 20 bureaus outside APAC, as well as journalists from the wider AFP network, which includes more than 1,700 journalists in 201 bureaus across 151 countries.
“That’s allowed us to maintain a steady flow of reports on misleading posts on the virus from multiple datelines around the world,” Blundy says.
AFP Fact Check’s Asia-Pacific team has published around 60 fact-checks on coronavirus-related content since mid-January, while bureaus in the rest of the world have fact-checked around 100 additional pieces of content. Blundy estimates that around half of the claims AFP has fact-checked on the coronavirus have involved a misleading Facebook post.
But AFP Fact Check reporters only find “about 30-40%” of their stories from Facebook’s dedicated fact-check feed, Blundy says. Reporters have to balance their time across all platforms and information sources. When they are not looking through Facebook’s feed, they sift through other platforms such as Twitter, YouTube and Weibo, as well as web pages and articles. Keyword searches are an effective tool, allowing reporters to trace claims that have appeared on multiple social networks, such as this video AFP debunked claiming to show a murder of crows in Wuhan, which had been shared on Facebook, Instagram, Twitter and YouTube. Reporters also curate lists on CrowdTangle, the Facebook-owned social monitoring platform. Publishers usually use it to keep an eye on trending news; AFP uses it to track “trending disinformation”, Blundy says. It allows the fact-checkers to keep tabs on repeat offenders who share the same misinformation across multiple pages and groups.
If Facebook’s biggest fact-checking partner in the APAC region is dedicating less than half of its time to the platform, is that enough? To answer that, it helps to understand how the system works.
How Facebook’s fact-check system works
Facebook says the system it has to identify fake news is a “hybrid between people and technology”. In the first instance, it relies on its community of users to report content they see as ‘false news’. Content that can be fact-checked includes ads (except political ones), articles, photos or videos on Facebook and Instagram, as long as the content is public.
A machine-learning algorithm then sifts through user-flagged posts—and the wider Facebook ecosystem—to scan for links to dodgy websites, in order to prioritise the posts that are to be sent to third-party fact-checkers.
Content flagged for review is collated in a ‘Claim Check Feed’, which fact-checkers can filter by geography and language. The feed is refreshed weekly. But beyond that, there isn’t much organisation to it: content appears in no particular order, and the feed offers no suggestions on which posts should be prioritised; fact-checkers are left to decide for themselves. This respects the journalistic independence of the fact-checkers, but it also means the same claim or post is often fact-checked more than once, by different partners or in different languages. When there is so much fake news to get through, that seems a waste of resources.
The algorithm that populates the feed is “not perfect”, according to Blundy. She says “quite a lot” of the feed is populated by the wrong kind of content: content that is incorrectly tagged, that can’t be fact-checked because it is conspiracy or opinion, along with legitimate news stories from respected publishers.
“Sometimes the moderation system is pulling in violent content or sexual content that people don’t want on Facebook. We also see conspiracy theories start to emerge relatively early on because there is so much panic and anxiety, but we can’t fact-check a lot of this because there is no evidence to support it and no way to verify it. And a whole range of media outlets have popped up in there,” Blundy says. “The understanding of what is fact-checkable is not quite there yet.”
This is problematic, because while Facebook waits for fact-checkers to work through the feed, posts flagged as potentially false, whether by users or by algorithms, have their distribution reduced, which could mean suppressing news stories containing vital information about COVID-19, for example.
Once a fact-checker has reviewed a post in the feed, they can give it one of nine ratings: false, partly false, true, false headline, not eligible, satire, opinion, prank generator, and not rated. Any content flagged under one of the three ‘false’ ratings is demoted in the News Feed to reduce its distribution, and users are notified of the rating when they click to share it. Repeat offenders are commonplace on Facebook, and the platform will sometimes take down pages that have been flagged multiple times, or at least remove their ability to monetise.
The fact-checker ratings help to train the machine-learning algorithm to spot potentially false content, to reduce the reliance on user flagging. The machine learning model can also identify duplicates of debunked stories.
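Facebook has not published how its duplicate detection actually works, but a common technique for the same problem is near-duplicate image matching with a perceptual “difference hash”. The sketch below is purely illustrative (it assumes images have already been reduced to small grayscale pixel grids, and the threshold is invented), not Facebook’s algorithm:

```python
# Illustrative sketch only: matching a re-shared copy of a debunked
# image via a difference hash (dHash). Not Facebook's actual system.

def dhash(pixels):
    """Build a hash from a grayscale pixel grid: each bit records
    whether a pixel is brighter than its right-hand neighbour, so
    visually similar images yield similar bit patterns."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits; a small distance suggests a duplicate."""
    return bin(a ^ b).count("1")

# A debunked image and a lightly re-compressed copy of it
# (tiny 3x4 grids stand in for real downscaled images)
original = [[52, 60, 71, 80], [90, 85, 70, 66], [30, 35, 40, 45]]
recompressed = [[53, 60, 70, 81], [91, 85, 71, 66], [30, 36, 40, 44]]

d = hamming_distance(dhash(original), dhash(recompressed))
# A distance below some small threshold (say, 3) would flag a match
is_duplicate = d <= 3
```

In practice, production systems use far more robust embeddings, but the principle, comparing compact fingerprints rather than raw files, is the same.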
Fact-checkers are asked to focus on “the worst of the worst”, that is, clear misinformation and fake news intended to harm and mislead. Facebook has four criteria it asks fact-checkers to consider when prioritising what content to check: verifiability (claims based on facts rather than opinion), importance, relevance (to news or current events) and virality.
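Since Facebook leaves the actual prioritisation to the fact-checkers, one can imagine a team triaging the feed by scoring claims against those four criteria. The sketch below is hypothetical; the 0-to-1 scores and equal weights are invented for illustration:

```python
# Hypothetical triage of a claim feed using Facebook's four stated
# criteria. Scores and weights are invented; this is not how the
# Claim Check Feed itself behaves (it presents content unordered).

CRITERIA = ("verifiability", "importance", "relevance", "virality")

def priority_score(claim, weights=None):
    """Weighted sum of per-criterion scores in the range 0-1."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    return sum(claim[c] * weights[c] for c in CRITERIA)

claims = [
    {"id": "garlic-cure-hoax", "verifiability": 1.0, "importance": 0.9,
     "relevance": 1.0, "virality": 0.8},
    {"id": "opinion-post", "verifiability": 0.1, "importance": 0.4,
     "relevance": 0.6, "virality": 0.9},
]

# Work through the feed highest-priority first: a verifiable, viral
# health hoax outranks an unverifiable opinion post
queue = sorted(claims, key=priority_score, reverse=True)
```

A real newsroom’s judgment is of course far less mechanical, but the exercise shows why a feed with no ordering at all pushes this burden entirely onto the fact-checkers.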
Then comes the fact-checking itself, a cumbersome process. Algorithms and digital tools may help surface potentially fake news, but the actual fact-checking is manual and resource-heavy. AFP Fact Check reporters scrape videos and images for metadata, scan videos frame-by-frame for insignia or dialects that may give away a location, conduct reverse image searches, and combine all this with regular journalistic practices: research, contacting original sources, and obtaining official statements and police reports.
So what's the solution?
It’s clear that Facebook’s fake-news fight has significant flaws, although the proportion of content it catches and removes has likely increased (the company has not provided official data). User flags are proving problematic because the ‘fake news’ phenomenon is still so new: many users do not understand what fake news is, while others abuse the tag to discredit publishers they don’t like (much like a certain politician). It’s why both platforms like Facebook and news organisations like AFP are focusing on improving the general public’s news literacy.
“Misinformation cannot be eradicated, but media literacy can be boosted to help people avoid being misled. The work being done now will leave the next generation much better equipped to identify misinformation/disinformation online,” says Blundy.
Improving media literacy, especially in developing countries, will help to “cultivate a free and fair media environment” more than legislation, she believes.
“Different countries in Asia are taking different approaches. Anti-fake news laws can have an impact on the amount of misinformation online in a particular country, but they can also be used to stifle dissenting voices. Some countries have acknowledged that these types of laws don't work particularly well in practice,” she says.
While Facebook continually announces fresh investments in its anti-fake-news measures, Blundy says her job over the past year “hasn’t gotten any easier”. A large part of this comes down to resources: journalists are grossly outnumbered by peddlers of fake news. Media literacy is a long-term solution, but hiring more fact-checkers is the more pressing need. Facebook would not disclose how much it pays its fact-check partners, but a recent Popular investigation found that it paid a top US fact-check partner US$359,000 in 2019. Perhaps a bit more of the US$18.5 billion net income Facebook made in 2019 could help.