Jessica Goodfellow
Jan 30, 2020

Misinformation and hate infect social-media users alongside Coronavirus

OPINION: Misinformation was a threat to democracy before, but now it's a public health concern, and social-media platforms are woefully unprepared.


In 2016, social media's disinformation crisis was terrifyingly exposed following revelations of meddling in the US election (and possibly the UK's EU referendum, though that has never been conclusively established). Four years later, in the midst of a global health emergency, the situation remains totally out of control.

Scour social-media platforms for information about the novel coronavirus—a disease that originated in Wuhan, China, in December and has since spread to 20 other countries and territories—and it doesn't take long to see that disinformation (the deliberate kind), misinformation (the inadvertent kind) and xenophobia are rife. Crisis really brings out the worst of social media.

One of the more popular false claims is that the virus was spread via bat soup, a so-called local delicacy. A video of a woman eating bat soup 'in Wuhan' went viral and was even picked up by several news outlets around the world.


But it turns out the video wasn't filmed in Wuhan at all—where bat soup isn't a delicacy anyway. In fact, it wasn't even filmed in China. And bat soup has been categorically ruled out as the cause of the outbreak.

Regardless, claims like this have fuelled a dangerous and harmful anti-China rhetoric on social media. Where most countries in the throes of a crisis that has so far killed more than 200 people attract sympathy and well-wishes online, Chinese people have been subjected to racism and xenophobia.

Shopkeepers in Japan and Korea have reportedly placed "No Chinese allowed" signs in their storefronts, petitions calling for governments to place a total ban on Chinese travellers are gaining traction, and schools are being pressured to shut their doors to Chinese students. Some have been quick to try to thwart the spread of xenophobia. For instance, the Boarding Schools' Association in the UK called for schools to "stay alert for any signs of xenophobia".

There's also a concerning number of false medical claims gaining traction online, including claims that a salt water rinse and a concoction of herbal medicines will cure those infected. The Press Information Bureau, an agency of the Indian government, has been guiding people on Twitter to use homeopathy and Unani medicines for the prevention and “symptomatic management” of the coronavirus, in a post that has been widely panned.

Claims like this have the potential to influence a person's decision to get treated or not, so curbing their spread is a matter of public health.

There are also plenty of conspiracy theories, including a popular one that a vaccine for the novel coronavirus already exists, and that the new virus was created on purpose, with some suggesting it was planted by the US.

To be clear, there’s a constant flow of mis- and disinformation on social media at all times. Sometimes it gets popular enough to be noticed, taken down or flagged, sometimes it slips through the cracks. In parallel, hiding in the dark corners of social media, there are pockets of extremism and hate. These communities swell and ebb. Some are removed or de-monetised, but they never go away for long. During a crisis, misinformation and extremism collide, feeding off each other to inform or drive an agenda. Suddenly a fringe ideology becomes a mainstream one, misinformation overshadows truth, and a second crisis is born. A fake, hysterical, and hateful one.

The speed at which social-media users are willing to get behind damaging and bogus claims is due in part to the fact that trust in authority, the media, and experts has been plunging over recent years. News of election meddling, opaque government practices and coverups has only deepened that distrust. This is how the social-media echo chamber takes hold.

Who’s at fault, and what is being done to stop—or at least stifle—the spread of misinformation?

This isn’t a black-and-white, right-and-wrong scenario. Many factors are contributing to the spread of dis- and misinformation, and the types of misinformation being spread.

But if we’re going to point the finger—and boy, do we love pointing the finger in this industry—it should be on governments, regulators, and social-media platforms.

The chief villain in this scenario is always the platforms—and rightfully so. They created these free information platforms in the first place without considering (or while ignoring) the possibility that people lie, and that there should probably be a screening process, or at least extra weight given to credible sources.

The latter approach is what all the main platforms—Facebook, Google, Twitter—have been scrambling to quickly implement during the coronavirus crisis. Posts that have been flagged as fake, or that contain words likely to signal fakery, are being pushed down the feed, with Twitter and YouTube now steering people who search for the coronavirus and related stories to credible sources first. It's a band-aid solution to a much deeper problem.


While Twitter appears to have made some progress in removing posts that contain fake news, Facebook instead opts for a fake-news label rather than a removal. That's because it has to tread a careful line between giving people the right information and becoming an “arbiter of truth”, one of CEO Mark Zuckerberg’s oft-cited concerns (or excuses, to cynical observers). Yet while Facebook has been pouring resources into hiring fact checkers and forming alliances with media outlets and organisations that can scour its platform for fake news, the volume of misinformation relating to the coronavirus, at least at the beginning of the month, shows it has really just scratched the surface of the problem. And when fake news overshadows credible news, a tag won't do much good.

But it’s also worth noting that social-media platforms have been allowed to develop this way, unrestricted. Realistically, in a capitalist society, no for-profit is ever going to be too concerned with the negative impact it may be having on society when it is bringing in billions of dollars of revenue each quarter. Which leads us to regulators. 

They should have been quicker at reacting to the meteoric growth of social media, and the threat this poses to democracy and the media. Now they are attempting to create order and establish rules in what is by its nature an environment of disorder. And even their biggest threats have left the tech giants unscathed. The FTC hit Facebook with its biggest fine on record last year, but the social-media giant basically wrote off the $5 billion within one quarter.

Because it’s such a gargantuan task, the onus is increasingly being placed on the general public to do their due diligence and verify the veracity of the information they see on social media before sharing it. It could certainly help. If everyone took a minute to do a quick search—using one of the growing number of fact-checking services available, from the likes of AFP to the BBC—rather than rashly jumping on a bandwagon, then social media would be a much better place. But it’s sad that we are at the point of having to fact-check information ourselves, especially when it concerns matters of public health.

Governments also have a role to play in enforcing rules on social media. But they are also responsible for handling, and sometimes incubating, crises such as the one caused by the coronavirus. Communication is key, and herein lies the third challenge: the information vacuum created by a not-fully-transparent government.

While the World Health Organisation has praised the Chinese government for being transparent in sharing data on the virus in order to manage its spread, there are whispers of an attempt by the government to suppress information at the beginning. According to reports, police in Wuhan arrested at least eight people for "spreading rumors” about the coronavirus in early January, although it's thought those people were not detained. Some Chinese people have even drawn parallels with the HBO show Chernobyl, which details how the Soviet Union suppressed information about the disaster from its citizens and the world, resulting in many avoidable deaths.

That said, politicians from Germany to the US have lauded China for acting quicker and providing more information than it did during the 2003 SARS outbreak, which caused 774 deaths worldwide. The ghost of SARS is causing much of the coronavirus hysteria online.

Social media is a dark, dangerous place. A place that rewards unfounded clickbait, where passion shouts louder than reason, and racism and xenophobia are normalised. It is an unburdened beast that will take many years and many resources to wrangle. But it can also be a force for good, when it comes to issues like censorship. It allows the public to share videos, photos, and views that might not otherwise see the light of day. And in dark times, it can also be a place of positivity and laughter. There is hope: a model citizen dropping off masks at a police station, a youth giving away his last box of masks, and, even in the darkest of times, comedy.

Source:
Campaign Asia
