I recently watched the footage of Dylann Roof’s police interview in the New York Times. Roof killed nine African Americans in their church.
I found the whole thing fascinating. But what really struck me was the seemingly banal role of Google Search in the story. As the Times reports:
He said his “racial awareness” had been inspired by a Google search of the phrase “black on white crime” after the reaction to the 2012 shooting of Trayvon Martin, a black 17-year-old, by George Zimmerman, a neighborhood watch volunteer in Sanford, Fla. “That was it,” he said.
Later he talks about how he came to see things in racial terms, and I wonder whether and how Google supported this. Did it confirm what he already thought, or give Roof the impression that his beliefs were fact because Google said so? After all, Google doesn’t lie. They aren’t evil.
Let’s ask a counterfactual question:
What if, when Roof searched, he’d found some different results? Would it have changed things? We know that Google matches results to a user’s interests. So, if someone demonstrates latent “fascist potential” (as it’s called in the Authoritarian Personality studies), what if Google used this to restrict their access to provocative material? I’m not saying they should, but things might be different. Doesn’t that mean Google Search has some active role in this crime?
As regular readers (all zero of them) will know, the logic behind this matching of search results is largely driven by an appeal to advertisers: Google’s business model needs the myth of matching to apply across Google Search. I don’t want this to fall into yet another “aren’t algorithms evil” post, but I think it’d be interesting to consider Google Search’s role in radicalisation. As I understand it, in the UK it’s a crime to encourage terrorism.
I was looking at the 2018 European Marketing Academy Conference theme. It’s ‘People Make Marketing’. I’m not saying they have stolen the theme of this blog but it’s remarkably similar! I hope to be there.
It’s a few years old now, but I just came across this series of posts by Adblock Plus in which they surveyed users of their ad blocker about the service. One of the questions asked why people used an ad blocker. The results are quite interesting.
They gave respondents 7 possible reasons and forced a choice through a four-point scale (i.e. there was no ‘neutral’ option). Forcing choice in this way can distort results as it, obviously, forces people to express an opinion on a matter they might not care about.
I think we can group 3 items as ‘content issues’ (distracting animations and sounds, offensive or inappropriate content, and missing separation of ad and content); 3 items as ‘provider issues’ (security concerns, privacy concerns and page load times); and one as a personal issue (ideological reasons). If this was done more robustly we might separate each of these items out into multiple dimensions and see how they inter-relate. But it wasn’t.
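To make the grouping concrete, here’s a minimal sketch of how you might score each respondent on the three proposed dimensions. The item names and the example responses are hypothetical (the posts don’t publish item-level data); I’m only assuming the four-point forced-choice scale described above, coded 1–4.

```python
# Group the seven hypothetical survey items into the three dimensions
# discussed above, then average each respondent's ratings per dimension.
GROUPS = {
    "content":  ["distracting_animations", "offensive_content", "no_ad_separation"],
    "provider": ["security_concerns", "privacy_concerns", "page_load_times"],
    "personal": ["ideological_reasons"],
}

def subscale_means(response):
    """Mean rating within each dimension for one respondent (1-4 scale)."""
    return {
        group: sum(response[item] for item in items) / len(items)
        for group, items in GROUPS.items()
    }

# One invented respondent: high on provider issues, low on ideology.
resp = {
    "distracting_animations": 4, "offensive_content": 2, "no_ad_separation": 3,
    "security_concerns": 4, "privacy_concerns": 4, "page_load_times": 3,
    "ideological_reasons": 1,
}
print(subscale_means(resp))
```

A more robust analysis would test whether the items actually cluster this way (e.g. factor analysis) rather than assuming the grouping, which is the point made above.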
Just eyeballing it, it seems that most of the motivations for ad blocking relate to a lack of trust – provider issues. This is followed by content issues. Although ideological reasons motivated about half the sample (and given the selection bias you’d expect, this is an overestimate), that leaves about one-third of the sample who block ads not because they are “anti-branding” but just because they don’t trust advertisers to act responsibly and because their ads are kind of annoying.
If I were a brand I’d find this very hopeful, as these problems are much easier to fix than overcoming ideological opposition to ads. In fact, the same problem has already been solved on other media through regulation initiatives (see my other blog on advertising governance).
Here’s a short article I wrote for The Conversation on Google’s current battles with brands… More on this to come I think.
I happened to be looking at my own paper in the European Journal of Marketing and saw that it’s in the most-read papers for the last week!
For several years the DNT (Do Not Track) initiative has been trying to formalise a standard feature for the worldwide web to allow users to tell a website whether they are happy to have their activities tracked by the website and its partners. These initiatives have been opposed by a group of organisations with clear interests in the digital marketing market (the IAB in particular). They have confused, obfuscated and in some cases intimidated participants in the project. This led the New York Times to describe DNT as a ‘slow death’.
But I came across this interesting study by Goldfarb and Tucker. Put very simply, they find that consumers respond best to ads which are either contextually targeted or highly visible on screen, but that ads which are both contextually targeted and highly visible perform relatively badly. The paper speculates that consumers’ privacy concerns might be the explanation for this effect. An ad that is both targeted and visible reminds consumers that the site is tracking them and makes them more critical of persuasive communication.
So, just imagine the power of knowing which consumers cared about privacy and “detargeting” them – perhaps using highly visible ads instead. In the paper, Goldfarb and Tucker estimate that 5% of digital ad spending is wasted targeting people who are turned off by targeting. That’s a spicy meatball.
A little while ago, I did some empirical research on outdoor advertising. We travelled round Nottingham photographing every outdoor ad we came across. One thing which we noticed when collecting the data was that many outdoor ads are out of date.
Time and time again we saw adverts for movies which had opened months before and special offers that had run out. I’ve been thinking about these and I think the best way to describe them is zombie ads. My guess is that outdoor media owners have some low-value inventory where it simply doesn’t make sense to remove ads because people don’t want to use the space that much. So once they’ve put ads up, they get left in place.
This seems like a sneaky way for advertisers to get a lot more exposure for their ads than they pay for.