LOS ANGELES – After a month in which advertisers, agencies and publishing platforms have been pilloried in the press for allowing ads to fund shady content like terrorist recruiting videos, the world’s largest media agency is asking platforms to make a simple change it says could solve the problem.
“We think we can catch 99.9% of inappropriate content before it goes out if we’re allowed to tag our ads on those properties,” GroupM brand safety EVP John Montgomery tells Beet.TV in this video interview.
“Our ad (would) read the metadata on the site, read the URL; if there’s anything suspicious, it blocks the ad from appearing – but only if we can place our tags on those ads.”
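The check Montgomery describes can be sketched in a few lines of JavaScript: before the creative renders, the tag inspects the page URL and meta keywords against a blocklist. The term list and function names here are purely illustrative, not GroupM's actual system.

```javascript
// Illustrative blocklist; a real tag would use a vendor-maintained taxonomy.
const BLOCKED_TERMS = ["terror", "extremist", "hate"];

// Decide whether an ad may render, given the page URL and its meta keywords.
function isBrandSafe(url, metaKeywords) {
  const haystack = (url + " " + metaKeywords.join(" ")).toLowerCase();
  return !BLOCKED_TERMS.some(term => haystack.includes(term));
}

// The tag would run this check before injecting the creative into the page.
isBrandSafe("https://news.example.com/article", ["politics"]); // safe: true
isBrandSafe("https://example.com/extremist-videos", []);       // blocked: false
```

In practice the tag would also fall back to blocking the ad when the page context cannot be read at all, which is exactly what Montgomery says platform data policies currently prevent.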
In recent months, ad placement horror stories have hit the press, with The Times reporting that YouTube had shown A-list brands’ ads against terrorist content, and some observers claiming extremists may have made $318,000 from YouTube ads. Dozens of top brands have pulled spending.
It is open season on brand safety, and Montgomery says journalists are “scouring inappropriate websites”, “pushing refresh” for what “makes a great headline”.
Montgomery thinks ad tags are the solution. But GroupM isn’t yet implementing the safety measure on ads bought for its clients, and Montgomery, who was appointed to the newly created brand safety role last year, blames the platforms.
“We can’t do that in a lot of the major social platforms like Google, Facebook, Snapchat and Twitter because of their data policies,” he says.
“That makes it more difficult for us to manage our own brand safety destinies. We are having discussions with them right now. Google has shown a little bit more flexibility in dealing with outside vendors as a result of this contextual brand safety crisis.”
Despite the recent intense attention, Montgomery says not much has changed – brand safety issues are as old as online advertising. What is different now is the sheer volume of content ads are placed against, and the attention of the media.
“Not a great deal of money is being paid to these inappropriate sites,” he says, adding that ad-tech vendors are contractually obliged to stop the practice: “We have a contractual commitment with vendors specifying that they won’t place our ads near inappropriate content.”