Indian adtech firm SilverPush has launched a brand-suitability platform that uses artificial intelligence to scan videos for red flags such as violence, adult content, and extremist material. Its software found that roughly 8% to 9% of videos coming from Southeast Asia are not safe for brands.
The platform, called Mirrors Safe, uses a computer-vision algorithm to scan the contents of a video, including faces, objects, logos, actions, and scenes. The algorithm has been trained to detect unsafe content across a wide set of brand-unsafe categories, including violence (killing, death, injury, conflict), guns and arms, smoking, and nudity and other obscene content.
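SilverPush has not published Mirrors Safe's internals, but the general shape of the approach described above, per-frame detection aggregated into video-level category flags, can be sketched as follows. The category names and the stub detector are assumptions for illustration only; a real system would run face, object, logo, action, and scene models in place of the stub.

```python
# Illustrative sketch only, not SilverPush's implementation.
# Frames are modelled as dicts of pre-computed labels; the stub
# detector stands in for real computer-vision models.

UNSAFE_CATEGORIES = {"violence", "guns_and_arms", "smoking", "nudity"}

def detect_categories(frame):
    """Stub per-frame classifier: intersect the frame's labels with
    the unsafe-category set. A real system would run CV models here."""
    return frame.get("labels", set()) & UNSAFE_CATEGORIES

def flag_video(frames):
    """A video is brand-unsafe if any frame triggers any unsafe
    category; the returned set lists every category hit."""
    hits = set()
    for frame in frames:
        hits |= detect_categories(frame)
    return hits  # empty set => brand safe

video = [
    {"labels": {"person", "car"}},
    {"labels": {"guns_and_arms", "person"}},
]
print(flag_video(video))  # {'guns_and_arms'}
```

Flagging at the video level (any frame, any category) matches the article's framing that a single unsafe context is enough to deem a video brand-unsafe.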
In a test, the platform churned through approximately 15 million videos from the largest video hosting and sharing platforms in Southeast Asia, and deemed roughly 8% to 9% of the analysed content brand-unsafe, meaning each flagged video featured one or more unsafe contexts.
Among the flagged content, the top unsafe categories included:
The company further found that parsing videos for brand-safety flags uncovered, in some cases, 300% more unsafe videos than keyword lists, a more common brand-safety tool.
"What sets Mirrors Safe apart is its ability to custom define the scope of harmful contexts that are unique to every brand. Thus, helping brands move beyond just brand safety to a truly brand-suitable environment," said Kartik Mehta, the chief revenue officer at SilverPush. "This is limited with existing keyword and natural language processing (NLP) based blanket exclusion technologies, as these often fail to understand the complex undertones and various contexts words can be used for."
The company also noted that keyword blocking of topics such as COVID-19 can limit a brand's reach. It said its platform can use AI to distinguish between serious and precautionary coronavirus-related content, and to identify content featuring public figures who may be on a brand's blocklist.
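The over-blocking problem the company describes can be illustrated with a minimal sketch. The blocklist terms, the `tone` signal, and both functions are hypothetical stand-ins: blanket keyword exclusion blocks any mention of a term, while a context-aware check (here trivially simulated) can keep precautionary public-health content.

```python
# Hypothetical illustration of keyword over-blocking, not SilverPush's
# actual logic. A blanket keyword filter excludes any clip mentioning
# a blocked term; the context-aware stand-in also considers tone.

BLOCKLIST = {"covid-19", "pandemic"}

def keyword_blocked(transcript):
    """Blanket exclusion: block on any occurrence of a listed term."""
    words = transcript.lower().split()
    return any(term in words for term in BLOCKLIST)

def context_blocked(transcript, tone):
    """Stand-in for AI context analysis: only block when the tone is
    graphic or alarming, not when it is precautionary."""
    return keyword_blocked(transcript) and tone != "precautionary"

clip = "wash your hands to help stop covid-19"
print(keyword_blocked(clip))                   # True  (over-blocked)
print(context_blocked(clip, "precautionary"))  # False (kept)
```

The gap between the two functions is the reach that blanket exclusion costs a campaign: the precautionary clip is perfectly brand-safe, yet a keyword list excludes it anyway.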
"Mirrors Safe further addresses one of the most pressing brand safety challenges: content over-blocking, a result of blanket exclusion measures offered today. This significantly limits campaign performance and often forces marketers to switch off controls in favour of reach," Mehta said.
The Mirrors Safe algorithm uses five parameters to calculate a 'brand suitability score', which measures not only the safety and suitability of the content, but also of the page and the channel. The parameters include:
- Engagement: likes, dislikes and participation that the content generates
- Safety: exclusion through in-video context detection, on-screen text, and audio sentiment analysis
- Influence: the organic influence that the channel/page/content creates
- Relevance: how relevant the content is relative to its peer channel/page category
- Momentum: whether the channel/page maintains or grows its engagement over time
Mirrors Safe is an extension of SilverPush's Mirrors platform, launched in late 2018, which used AI to match the context of a video with a brand's ad (see "SilverPush finds fuel in fight against misplaced ads").
SilverPush was founded in India in 2012 and has undergone several changes of direction over the years. It previously developed 'audio beacon' technology that used "ultrasonic inaudible sounds" to track users across devices and record when they had seen an ad. It moved away from this technology in 2016 after privacy concerns were raised, and has since been focusing on AI-driven ad tech.