Australia’s parliament today passed new legislation introducing much tougher penalties for social-media platforms that fail to deal with violent content.
Under the new law, social-media companies are liable for fines of up to 10% of their annual turnover if violent content is not taken down within an “expeditious” timeframe. The content in question is defined as “the playing or streaming of terrorism, murder, attempted murder, torture, rape and kidnapping on social media”.
Furthermore, individuals at these companies could face fines and/or prison sentences of up to three years for infringing the Sharing of Abhorrent Violent Material bill. The bill was proposed by prime minister Scott Morrison on Saturday, in the wake of the Christchurch attacks and the role social media played in spreading the perpetrator’s livestream of the mass murder.
“Big social-media companies have a responsibility to take every possible action to ensure their technology products are not exploited by murderous terrorists,” Morrison said in a statement. “This is about keeping Australians safe by forcing social-media companies to step up and do what the community expects of them to stop terrorists and criminals spreading their hate.”
During debate on the bill in parliament on Thursday, attorney general Christian Porter said the legislation was “likely a world first”. In the days since its proposal, the bill has sparked a fierce debate over free speech and the accountability of social-media companies. There is also significant legal debate over what constitutes “reasonable time” for taking down content when the penalties are so severe.
“If the material in question is uploaded and you don’t take it down ‘expeditiously’, you can go to jail. What is expeditiously? Not defined! ‘Who’ in a company? Not defined!” Scott Farquhar (@scottfarkas) tweeted on 3 April 2019.
A spokesperson for Google said in a statement: "We have zero tolerance for terrorist content on our platforms. Over the last few years we have invested heavily in human review teams and smart technology that helps us quickly detect, review, and remove this type of content. We are committed to leading the way in developing new technologies and standards for identifying and removing terrorist content. We are working with government agencies, law enforcement and across industry, including as a founding member of the Global Internet Forum To Counter Terrorism, to keep this type of content off our platforms. We will continue to engage on this crucial issue.”
Industry group DIGI, which represents Google, Facebook, Twitter and others in Australia, said in a statement that the law “was conceived and passed in five days” and without “meaningful consultation”.
“[The law] does nothing to address hate speech, which was the fundamental motivation for the tragic Christchurch terrorist attacks,” said Sunita Bose, DIGI managing director.
“With the vast volumes of content uploaded to the internet every second, this is a highly complex problem that requires discussion with the technology industry, legal experts, the media and civil society to get the solution right—that didn’t happen this week.”
Bose added that Australia’s new law was out of step with notice-and-takedown regimes in Europe and the US. “[The law] threatens employees within any company that has user-generated content to be potentially jailed for the misuse of their services—even if they are unaware of it. This is not how legislation should be made in a democracy like Australia.”