I recently watched the footage of Dylann Roof’s police interview on the New York Times website. Roof killed nine African Americans in their church in Charleston in 2015.
I found the whole thing fascinating. But what really struck me was the seemingly banal role of Google Search in the story. As the Times reports:
He said his “racial awareness” had been inspired by a Google search of the phrase “black on white crime” after the reaction to the 2012 shooting of Trayvon Martin, a black 17-year-old, by George Zimmerman, a neighborhood watch volunteer in Sanford, Fla. “That was it,” he said.
Later he talks about how he came to see things in racial terms, and I wonder whether and how Google supported this. Did it confirm what he already thought, or did it give Roof the impression that his beliefs were fact because Google said so? And Google doesn’t lie. They aren’t evil.
Let’s ask a counterfactual question:
What if, when Roof searched, he had found different results? Would it have changed things? We know that Google matches results to a user’s interests. So what if Google used signs of latent “fascist potential” (as it’s called in the Authoritarian Personality studies) to restrict a user’s access to provocative material? I’m not saying it should, but things might have been different. Doesn’t that mean Google Search played some active role in this crime?
As regular readers (all zero of them) will know, the logic behind this matching of search results is largely driven by an appeal to advertisers: Google’s business model depends on the myth of matching being applied across Google Search. I don’t want this to fall into yet another “aren’t algorithms evil” post, but I think it would be interesting to consider Google Search’s role in radicalisation. As I understand it, in the UK it is a crime to encourage terrorism.
It’s a few years old now, but I just came across this series of posts by Adblock Plus in which they surveyed users of their ad blocker about the service. One of the questions asked why people used an ad blocker. The results are quite interesting.
They gave respondents seven possible reasons and forced a choice through a four-point scale (i.e. there was no ‘neutral’ option). Forcing choice in this way can distort results because, obviously, it makes people express an opinion on a matter they might not care about.
I think we can group three items as ‘content issues’ (distracting animations and sounds, offensive or inappropriate content, and missing separation of ads and content); three items as ‘provider issues’ (security concerns, privacy concerns, and page load times); and one as a personal issue (ideological reasons). If this were done more robustly, we might separate each of these items out into multiple dimensions and see how they interrelate. But it wasn’t.
Just eyeballing it, it seems that most of the motivations for ad blocking relate to a lack of trust – provider issues. This is followed by content issues. Although ideological reasons motivated about half the sample (and given the selection bias, you’d expect this to be an overestimate), that leaves about one-third of the sample who block ads not because they are “anti-branding” but simply because they don’t trust advertisers to act responsibly and because their ads are kind of annoying.
If I were a brand, I’d find this very hopeful, as these problems are much easier to fix than ideological opposition to ads. In fact, the same problems have already been solved in other media through regulatory initiatives (see my other blog on advertising governance).
Are ad blockers breaking the internet? According to this report, they are.
But I think it is wrong.
First, if a website does not work when an ad blocker is installed, that could be because the web designers have put in a guard against ad blockers that stops the website from working. Think of it like this: on some DVDs you can’t fast-forward through the commercials, and if you try, they just start again. No one would say that trying to skip them “breaks” your DVD player. So the findings of this report might simply show that some websites are designed not to work if an ad blocker is installed.
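For the curious, this is roughly how such guards tend to work. A minimal sketch, assuming the common “bait element” technique (the names here are illustrative, not taken from any real site or from the report): a site inserts an element with an ad-like class name, and if a filter list hides it, the site concludes an ad blocker is present and can refuse to show content.

```typescript
// Hypothetical sketch of a "bait element" ad-blocker guard.
// Filter lists typically hide elements whose class names look
// ad-related (e.g. "adsbox"), collapsing their rendered height to 0.

// Pure decision logic: was the bait element actually rendered?
function isLikelyBlocked(renderedBaitHeight: number): boolean {
  // A hidden bait element reports a height of 0.
  return renderedBaitHeight === 0;
}

// In a browser, the height would come from something like:
//   const bait = document.createElement("div");
//   bait.className = "adsbox";       // name filter lists tend to hide
//   document.body.appendChild(bait);
//   const renderedBaitHeight = bait.offsetHeight;
// Here we just exercise the decision logic directly:
console.log(isLikelyBlocked(0));  // bait hidden: blocker suspected
console.log(isLikelyBlocked(12)); // bait rendered normally
```

The point of the analogy above is that the site, not the ad blocker, makes the page stop working: the guard is an explicit design choice, like a DVD that restarts the commercials when you try to skip them.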
Second, a website not working is not the same as the internet breaking. That’s like saying “my computer is broken” when Microsoft Word shuts down unexpectedly.
Third, an ad blocker may stop a website from appearing in a user’s browser in the way its designers intended, but that does not mean it is broken from the perspective of the user. If a user has made an informed choice to install an ad blocker, it is most likely because they want their browser to filter out some content. This might mean that some useful content is also filtered out, but that is the choice they have made. I’m sure many users consider this a minor inconvenience that is more than outweighed by the benefits of blocking ads, or they wouldn’t use them.
Fourth, and a bit more technically, I take issue with this conclusion: “publishers whose content we access have the right to protect the Integrity and Delivery of their web content from any form of manipulation, change or censorship”. Really? Publishers have the right to deliver web content without any form of manipulation, change or censorship? Do you really mean that? So cyberbullying is okay? ISIS videos? Child pornography? Good luck arguing that with the Chinese authorities. What if the content includes malware? Presumably you don’t mean this. You mean publishers have the right to deliver ads without interference.
Finally, it is a bit disingenuous to say that a webpage should be “delivered to a user as intended by a publisher”. To my knowledge, most ad blockers try to block third-party content that is provided not by the publisher but by one of its partners, often without any knowledge or intention on the part of the publisher. I doubt that journalists are losing sleep because their articles aren’t read alongside ads served through Google.
Perhaps if digital marketers understood the difference between what they want to show audiences and what audiences want to watch, there would be no need for ad blockers in the first place.