Last October the Protection from Online Falsehoods and Manipulation Act (POFMA) came into effect, more than two years after the Government said it was reviewing laws to fight the scourge of fake news. The law was met with a mixture of anticipation and concern in Singapore. Users, advertisers and the social media firms themselves were asking how these powers would affect the platforms, the content shared on them, and how we interact with them every day. There is still no clear answer to these questions, but we have started to see the impact of the Singaporean Government’s regulatory approach. This week, after Facebook was ordered to disable access for Singapore-based users to the States Times Review’s Facebook page, people are asking what the impact of the Singaporean government’s regulatory powers over social media might be for users, advertisers and public safety.
Sharing the responsibility
Over the last few years we have seen social media firms take significant steps to eliminate bad actors, manipulative political content, and hate speech from their platforms. From Twitter experimenting with new solutions to remove toxicity from the platform, to Instagram and Facebook’s experiments to hide likes, the platforms have shown how serious they are about making their environments healthier and happier for their users.
Recently, Facebook stated that it has an army of ‘digital police’, made up of algorithms and AI, working alongside humans to create a safer and more transparent online environment. While this all sounds like a lot, it’s clearly not enough. That’s why, by passing POFMA, the Singaporean government has opted to take a more powerful role in preventing the spread of falsehood online.
On the surface, having governments take a more active role in the prevention of fake news seems like a good thing. Just take, for example, the steady wave of fake news we have seen on social media about the COVID-19 virus, from posts suggesting that face masks can be reused after steaming to the bogus news that the Fitness First gym chain had closed its clubs because of the virus. Over the last few years we have seen how dangerous and divisive fake news can be online. It therefore makes sense for the Singaporean government to work hand in hand with the social media platforms themselves to eliminate fake news as part of a shared-responsibility model.
Education over censorship
Many politicians and business owners have been asking how big a role regulators should have in determining what content is fake and what is merely controversial. While it’s hard to draw clear-cut lines on whether a piece of content is harmful, it is possible to educate users and the public as a whole about responsible behaviour on social media. Technology and regulation must play a role, but it’s only by teaching people to use social media responsibly that we stand a chance of limiting harmful content in the online world for good.
The platforms themselves could be the place to start. Gently educating users, particularly younger ones, about how to behave on social media is a step in the right direction. If the platforms put their money where their mouths are and launched a global campaign on this, it could be the big push the industry badly needs. But the onus isn’t on the platforms alone. Education initiatives already exist: Safer Internet Day (SID), a European initiative celebrated annually, aims to teach users about topics from cyber bullying to social media. Beyond commemorating SID, education systems, if they aren’t already, should be including social media behaviour in their curricula, and governments should be encouraging this. In short, collaboration between the platforms, the educators and the governments is the right way forward.
Don’t forget the past. Learn from it
Unlike radio, TV and print, the internet in its early days largely resisted attempts at regulation. Even today, innocent searches can expose users to content that leaves them upset and confused, or has even worse effects.
Given the scale and open nature of the internet, cracking down on harmful content is an uphill battle for regulators and for users. However, where we have seen some success is in the implementation of greater user controls. By giving users control over the content they see, whether via ad blockers, parental controls or URL filtering, the internet has become a safer, healthier place for users.
The same could easily be true for social media. If users were given more control over the content they and their children see, the social world would feel like a much safer place to inhabit. Nor should this negatively impact brands; rather, it would encourage them to be more careful and inclusive with the content they share. Harmful stereotyping in creative advertising is one example of what this level of user control could stamp out. That can only be a good thing.
For most people the risks of social media are still outweighed by the huge benefits of the digital world. And while most users are in favour of tighter rules in some areas, particularly social media, people also recognise the importance of protecting free speech—which is one of the internet’s great strengths.
The move by the Singaporean government to take a more active role in managing harmful content on the internet has the potential to see governments and platforms working hand in hand to remove harmful content and toxicity from the online world. We work with many of the largest brands in the world, and we know they get value from social media for reaching and engaging with their audiences. It has a positive impact on their business in countless ways, but no brand wants this to come at the cost of its reputation, customer loyalty, or worse. Brands want to be sure that they are investing their ad budgets into safe and trustworthy platforms that are free from harm and toxicity.
While we shouldn’t expect anything to change overnight, the actions of the Singaporean government look like a step in the right direction for both users and advertisers. Anything that makes social media platforms safer and more engaging is a win-win, both for the people using them and for businesses advertising on them.
Charles Tidswell is the VP for JAPAC at Socialbakers.