Over the past few days, a flurry of high-profile brands and organisations, including French advertising group Havas, M&S, the Guardian, the BBC, J Sainsbury Plc, Audi, Transport for London and the UK Government, have pulled ads from Google and its video-sharing site YouTube amid growing concerns that their ads were appearing alongside offensive and potentially illegal content. Most recently the boycott has spread to the US, with Verizon and AT&T also halting their Google advertising. The worries were prompted by a Times of London investigation, which revealed that ads from many large companies and the UK government were appearing alongside content from supporters of extremist or racist groups.
Google’s European chief, Matthew Brittin, has since apologised and promised to review the company’s policies and the safety controls already in place, but many within the industry are unsure whether Google is doing enough. Havas, for instance (which spends around €175 million annually on digital advertising for its UK clients), claims it failed to get the assurances it needed from Google that its UK clients’ ads wouldn’t appear next to offensive or hateful material.
As BBC technology correspondent Rory Cellan-Jones rightly points out, “there are two difficult issues for Google here – spotting videos that are illegal and should be removed from YouTube, and determining which are legal but not suitable for advertising”. The latter, it seems, is proving the trickier of the two.
Google is currently relying on a combination of software controls and user alerts to identify content that could be harmful or illegal. Ronan Harris, Google’s UK managing director, said in a blog post that Google removed nearly two billion offensive ads from its platforms last year, blacklisted 100,000 publishers from its AdSense program, and prevented ads from serving on over 300 million YouTube videos. Despite this, Harris wrote, “we don’t always get it right”. He explained: “in a very small percentage of cases, ads appear against content that violates our monetisation policies. We promptly remove the ads in those instances, but we know we can and must do more.”
Harris then went on to say that Google had “heard from our advertisers loud and clear that we can provide simpler, more robust ways to stop their ads from showing against controversial content”.
What are Google’s obligations?
Google has always insisted that it’s a technology platform and not a media business, with the mission “to make information universally accessible and useful”. It has believed “strongly in the freedom of speech and expression on the web—even when that means we don’t agree with the views expressed,” Harris explains. An admirable position to hold, but arguably one that is no longer sustainable in this day and age, and certainly not at the expense of the brands from which Google profits so heavily.
While Google began life as a tech company, today the majority of its revenue comes from advertising; in 2016, its ad revenue amounted to almost $79.4 billion. It would be fair to argue that Google’s business has evolved to such an extent that it should now face the same tight regulation as other media companies.
Martin Sorrell, the founder and CEO of global advertising firm WPP, said in a statement that Google should have “the same responsibilities as any media company” and could no longer “masquerade” as a technology platform, “particularly when [it] places advertisements.”
Google’s position within this debacle is quite a complex, multifaceted one, as it incorporates the Google Display Network as well as its AdX ad exchange, which uses programmatic trading, placing ads alongside videos on YouTube as well as other third-party sites. The boycott indicates a growing resistance to programmatic trading, which has become increasingly controversial over the past year owing to the lack of brand control that it grants to advertisers. The most important of Google’s revenue sources, AdWords, is unaffected for now.
What options are available to Google?
Google has already said it plans to review its policies and will be making changes “in the coming weeks”, but how radical will these changes be? The gathering consensus of industry opinion seems to be that Google is unlikely to go far enough.
One option is for Google to hire a team of human moderators to actively detect offensive and extremist content. But the scale of that job could prove immense, and Google is likely to continue down the technology route instead. Philipp Schindler, Google’s chief business officer, recently said that AI will continue to be used to review questionable content moving forward.
Phil Smith, director general of Isba, the voice of British advertisers (which has some 450 members), has suggested that Google should stop placing ads immediately against newly uploaded YouTube content, before it has been classified, and instead quarantine it first. This ‘pig pen’ idea seems a good one, but may clash with Google’s support of a free internet. Furthermore, some claim Google is consciously allowing extremist content to remain on YouTube. The Times has pointed to the example of Wagdi Ghoneim, an Egyptian-Qatari cleric banned from entering the UK, whose videos on his Wagdy0000 YouTube channel have been watched over 31 million times, with Google handing over an estimated £250,000 in ad revenue.
Other industry experts have called for Google to pay back revenues earned through advertising that was placed alongside offensive content, but so far this is looking extremely unlikely.
How long can advertisers stay away from Google?
In two words: not long! Google knows this, which probably explains why it isn’t taking a more radical position on the situation. Google is fundamental to any big-brand advertiser, which is ultimately the problem, and probably why very little will change as a result.