Staff Reporters
Apr 19, 2021

Over 80% of content removed across tech platforms from three key categories: GARM

The vast majority of removed content comprises spam, adult & explicit content, and hate speech & acts of aggression, a new report reveals.

Over 80% of the 3.3 billion pieces of content removed across the technology platforms participating in the Global Alliance for Responsible Media (GARM) falls into three categories – spam, adult & explicit content, and hate speech & acts of aggression. This was revealed in GARM's first report tracking brand-safety performance across seven platforms, including Facebook, Instagram, Twitter and YouTube, the next step in its mission to improve the safety, trustworthiness and sustainability of media.

GARM is a cross-industry initiative founded and led by the World Federation of Advertisers (WFA) and supported by other trade bodies, including the Association of National Advertisers (ANA), Incorporated Society of British Advertisers (ISBA) and the American Association of Advertising Agencies (4A’s). According to a statement, by aggregating existing platform transparency reports and adding in policy-level granularity, the new document creates a common framework that enables advertisers to assess progress against brand safety for each platform member of GARM. 

The GARM Aggregated Measurement Report is built around four key questions marketers can use to assess progress over time. It applies GARM's common framework for defining harmful content not suitable for advertising and introduces aggregated reporting. "There's no place for harmful online content in media that's monetised by advertising, and we need to understand the size of the problem and track progress over time," said Marc Pritchard, P&G chief brand officer. "The GARM Aggregated Measurement report is an important step forward in helping brands advertise in safe and suitable places—a critical element for consumer trust."

The report follows nine months of collaborative workshops between advertisers, agencies and key global platforms working together as one of GARM's working groups. For the first time, it brings together in a single, agreed location data on four core questions and eight authorised metrics agreed as critical to tracking progress on brand safety.

The report includes self-reported data from Facebook, Instagram, Pinterest, Snap, TikTok, Twitter and YouTube. Twitch, which joined GARM in March, will join the reporting process for the next report, due later this year.

GARM platforms have reported increases in enforcement activity and its impact, with significant progress by YouTube in account removals, Facebook in reducing the prevalence of harmful content, and Twitter in removing pieces of content. These initial improvements have occurred amid an increased reliance on automated content moderation to help manage blocking and reinstatements, as COVID-19 disruptions left moderation teams working at limited capacity.

"This report establishes common and collective benchmarks that reinforce our goals and help brand leaders, organizations and agencies make sure we keep media environments safe and secure," says Raja Rajamannar, chief marketing and communications officer, Mastercard and WFA President. 

