Analysing Google's advertising gaffe
The recent press about how Google protects its advertisers sheds light on an important area of concern we always consider when running paid media campaigns for clients. Is our activity reaching the right audience in the right places at the right time?
In our opinion, M&S, Havas, RBS, The Guardian, etc. are quite right that Google is not performing well in terms of content blocking and brand protection. One of Google’s core beliefs is that anyone can be a content creator and make themselves heard, but the default way they have been monetising content is to approve it as eligible to show ads, taking it down only if it is later flagged as inappropriate.
So what’s the problem?
This is a gap in their duty of care: we set restrictions around where our ads can be shown and against what content, but they wait for issues to be flagged by users or recognised by the system over time rather than having proactive safeguards in place. They consider the market to be totally different to print, TV, etc., with Google continually saying they are a technology platform rather than a media company. This means they see the rules of who should be policing the content differently, and Google believe they are already doing what they should to protect advertisers.
We, like a lot of other agencies and advertisers within the industry, believe they could be doing more.
However, it’s important to keep in mind that this is not a recent change, but something that has been building for a while. In terms of how it affects the campaigns we run, there is no change, as we have always had brand protection as part of our processes when conducting any activity.
Just like ad blocking, click fraud, viewability, etc., this is a hot topic just now, and while there are gaps in the ideal processes, nothing has changed overnight. This has been an issue for a while, but it’s a bigger problem for those who have wider/more blanket visibility. The amount of content this applies to is growing, as more users like to upload questionable content to make money, make themselves famous, or troll for a reaction – but it’s still a very, very small proportion of the available inventory.
What should advertisers be doing?
We have always done as much as we can to protect the brands we promote by taking the following steps:
- Blacklisting sites that continually produce undesirable content. This is difficult if the site exists for clickbait/bot reasons, as by nature these sites capture traffic, get shut down, and then respawn.
- Applying content exclusions to all our campaigns. As default at Ambergreen, we always exclude categories of sites, content, or placements related to crime, police, and emergency; death and tragedy; and military and international conflict. This is the main issue in the recent news stories, though, as Google isn’t capturing bad content within these areas quickly enough.
- Excluding parked domains, error pages, etc. from placements, as these sites attract lots of bots and don’t generally deliver good-quality traffic.
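The layered exclusions above can be sketched as a simple placement filter. This is a conceptual illustration only, not Google’s actual interface or API – the site names, category labels, and function names are hypothetical stand-ins for the real campaign settings.

```python
# Conceptual sketch of layered placement exclusions.
# All names here are hypothetical, not real Google Ads settings.

# Layer 1: sites blacklisted for continually producing undesirable content.
BLACKLISTED_SITES = {"clickbait-example.com", "respawned-example.net"}

# Layer 2: content categories excluded by default on every campaign.
DEFAULT_EXCLUDED_CATEGORIES = {
    "crime_police_emergency",
    "death_and_tragedy",
    "military_and_conflict",
}

# Layer 3: placement types that attract bots and low-quality traffic.
EXCLUDED_PLACEMENT_TYPES = {"parked_domain", "error_page"}


def placement_allowed(site, categories, placement_type, extra_exclusions=frozenset()):
    """Return True only if a placement passes every exclusion layer."""
    if site in BLACKLISTED_SITES:
        return False
    # Merge the default category exclusions with any brand-specific ones.
    excluded = DEFAULT_EXCLUDED_CATEGORIES | set(extra_exclusions)
    if excluded & set(categories):
        return False
    if placement_type in EXCLUDED_PLACEMENT_TYPES:
        return False
    return True
```

The brand-specific restrictions discussed below would map onto `extra_exclusions` – for example, a brand avoiding edgier content might pass `{"juvenile_gross_bizarre", "profanity"}` on top of the defaults.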
But beyond this, we put more content restrictions in place depending on the brand we’re working with. Different brands want to reach different customers, who in turn engage with different forms of content. So while appearing next to content that is labelled for mature audiences, is juvenile/gross/bizarre, or contains profanity and rough language may be unacceptable for one brand, it may be a space another brand is happy to engage with.
As well as not categorising content quickly enough, Google often miscategorises it, so if restrictions are set too harshly, valuable placements and audiences can be missed. An example of this is gaming video content, where trailers can be labelled as violent and disapproved for advertising, only to be approved later despite the content staying the same.
Overall, we support the decision to put pressure on Google to raise its standards, and we believe they should be doing more to protect advertisers. However, we do not believe the situation is so bad that brands need to pull back: with the restrictions we have in place to mitigate the risk, the overwhelming majority of placements will not put a brand at risk. There may be instances where this happens, as there are on any platform, but there’s a lot of good inventory out there where we can reach our desired audience, enrich their online experience, and raise our brands’ visibility.
It will be interesting to see what Google does next. Manually checking content will be expensive and possibly unworkable given the amount of new content published daily. And if they change their processes (e.g. requiring content to be live for a period of time before it can carry ads, so that bad content can be downvoted by users on YouTube, etc.), those who manipulate the system will just find a new way to bend the rules.
Google has already begun to apologise while promising tighter restrictions on content and greater visibility in placement reporting, which is a step in the right direction. However, classifying content is not so straightforward, with one prominent YouTuber already reporting that their content was restricted purely because it covered LGBTQ topics, rather than because it broke content guidelines.
We’ll keep you updated if anything changes, but for us, it’s business as usual, as we’re doing everything we can to protect brands. If you have any questions, our Paid Media team are more than happy to address your concerns, so give us a call.