Fake news. X-rated content. Breitbart. Clickbait. If you’re a digital marketer, these terms are part of your everyday lingo by now. Content alignment – working to ensure brand advertising is adjacent to “good” content – has always been a part of the digital media conversation, but the 2016 United States election really brought it to a head. With the intense focus on the candidates and their platforms, many media outlets came under fire for either promoting, or allowing sites to promote, “fake” news and other incendiary, highly polarising editorial content. This type of content is essentially clickbait – content created to attract attention and drive traffic to a web page, often relying on hyperbole or sensationalist headlines and pictures to accomplish that goal.
This sparked a backlash in the digital media buying industry as well, when some advertisers’ ads, unfortunately, appeared on these sites or next to this content – completely at odds with their core brand values. This didn’t happen because of malicious intent or a purposeful plan to buy those spots, but because the digital advertising inventory universe is vast, and buying methodologies vary in such a way that not every media impression purchased can be traced back to its exact placement. Facebook – the behemoth itself – was one of the hardest-hit platforms and, in response, has created new algorithms specifically to address the issue. Google, too, has been dealt a blow, with major advertisers pulling out of Google and YouTube buys entirely due to concerns about the content.
Due to the subjective nature of what “bad” or “fake” content looks like and the speed at which it can be produced on new and different sites, no reputable agencies, media networks or publishers will be able to guarantee with 100% certainty that they will never show an ad near this type of content (not even Facebook can say that). There are, however, many steps that advertisers can take to help prevent their ads from showing up alongside unwanted content. Here are our top four tactics.
- Create and continually update a master “blacklist” of known sites on which we do not want ads to appear. Ensure this blacklist is applied to every buy.
- When applicable – for example, when advertising on the Google Display Network – we may also exclude sites and content at the category level, such as:
- Crime, police, and emergency: Police blotters, news stories on fires, emergency services resources, etc.
- Death and tragedy: Obituaries, bereavement services, accounts of natural disasters, accidents, etc.
- Military and international conflict: News about war, terrorism, sensitive international relations, etc.
- Juvenile, gross, and bizarre content: Jokes, weird pictures, videos of stunts, etc.
- Profanity and rough language: Moderate or heavy use of profane language, etc.
- Sexually suggestive content: Provocative pictures, text, etc.
- Require transparency from media partners; the ability to see where ads are placed after they are purchased can be a key “make or break” factor in the media RFP process.
- When possible, leverage a verification tool or partner that can assess placement quality and flag areas of weakness, such as Integral Ad Science, Trust Metrics, DoubleVerify, MOAT, Sizmek or Amobee.
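To make the first two tactics concrete, here is a minimal sketch of how a blacklist plus category-level exclusions might be enforced as a pre-buy placement check. The domains, category labels, and the `is_placement_allowed` helper are all illustrative assumptions, not part of any specific ad platform’s API:

```python
# Sketch of a brand-safety pre-check: a domain blacklist plus
# category-level exclusions. All names and data here are hypothetical.
from urllib.parse import urlparse

# Master blacklist of domains we never want ads to appear on (hypothetical).
BLACKLIST = {"example-clickbait.com", "fake-news-site.net"}

# Category-level exclusions mirroring the list above (labels are our own).
EXCLUDED_CATEGORIES = {
    "crime_police_emergency",
    "death_tragedy",
    "military_conflict",
    "juvenile_gross_bizarre",
    "profanity",
    "sexually_suggestive",
}

def normalize_domain(url: str) -> str:
    """Lower-case the host and strip a leading 'www.' so list lookups match."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def is_placement_allowed(url: str, page_categories: set) -> bool:
    """Reject a placement if its domain is blacklisted or the page
    carries any excluded content category."""
    if normalize_domain(url) in BLACKLIST:
        return False
    return not (page_categories & EXCLUDED_CATEGORIES)
```

For example, `is_placement_allowed("https://www.example-clickbait.com/story", set())` returns `False` because the domain is blacklisted, while a page classified only as `"finance"` on an unlisted domain passes. In practice these checks run inside the buying platform itself; the point of the sketch is that both lists must be machine-readable and applied on every buy, not kept in a slide deck.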
Given the vastness of the internet and the speed of content proliferation, there will always be new content designed for sensationalism or clickbait. Staying on top of your blacklists and keeping a continued focus on transparency in your digital media buys can seriously improve the odds that your ads will only appear alongside content that is safely aligned with your brand and that delivers your message to the right audiences.
Truth in Advertising is a three-part series developed by Jenna Watson, DAC’s VP, Digital Media, and Michael Jurik, Director of Display at DAC. The collaboration examines some of the most pressing concerns in digital advertising today and provides brands with strategies to safeguard their reputation and maximise their investment in digital advertising.