Google made me do it

I recently watched the footage of Dylann Roof’s police interview on the New York Times website. Roof killed nine African Americans in their church in Charleston in 2015.

I found the whole thing fascinating. But what really struck me was the seemingly banal role of Google Search in the story. As the Times reports:

He said his “racial awareness” had been inspired by a Google search of the phrase “black on white crime” after the reaction to the 2012 shooting of Trayvon Martin, a black 17-year-old, by George Zimmerman, a neighborhood watch volunteer in Sanford, Fla. “That was it,” he said.

Later he talks about how he came to see things in racial terms, and I wonder whether and how Google supported this. Did it confirm what he already thought, or did it give Roof the impression that his beliefs were fact because Google said so? And Google doesn’t lie. They aren’t evil.

Let’s ask a counterfactual question:

What if, when Roof searched, he’d found some different results? Would it have changed things? We know that Google matches results to a user’s interests. So, if someone demonstrates latent “fascist potential” (as it’s called in the Authoritarian Personality studies), what if Google used this to restrict their access to provocative material? I’m not saying they should, but things might have been different. Doesn’t that mean Google Search had some active role in this crime?

As regular readers (all zero of them) will know, this matching of search results is largely driven by an appeal to advertisers: Google’s business model needs the myth of matching to hold across Google Search. I don’t want this to become yet another “aren’t algorithms evil” post, but I think it would be interesting to consider Google Search’s role in radicalisation. As I understand it, in the UK it is a crime to encourage terrorism.


Ad blocking

It’s a few years old now, but I just came across this series of posts by Adblock Plus in which they surveyed users of their ad blocker about the service. One of the questions asked why people use an ad blocker. The results are quite interesting.

They gave respondents seven possible reasons and forced a choice through a four-point scale (i.e. there was no ‘neutral’ option). Forcing choice in this way can distort results because it, obviously, makes people express an opinion on a matter they might not care about.

I think we can group three items as ‘content issues’ (distracting animations and sounds, offensive or inappropriate content, and missing separation of ads from content); three items as ‘provider issues’ (security concerns, privacy concerns, and page load times); and one as a personal issue (ideological reasons). If this were done more robustly we might separate each of these items out into multiple dimensions and see how they inter-relate – something like the sketch below. But it wasn’t.
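To illustrate what that more robust analysis might look like, here’s a minimal sketch. The item names and the 1–4 ratings are invented for the example – this is not Adblock Plus’s actual data – and the three-factor structure is an assumption baked into the simulation:

```python
# Hypothetical sketch: invented items and ratings, not the real survey data.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
items = [
    "animations_sounds", "offensive_content", "ad_content_separation",  # content
    "security", "privacy", "page_load",                                 # provider
    "ideology",                                                         # personal
]

# Simulate forced-choice responses on a four-point scale (1 = disagree, 4 = agree),
# assuming three underlying dimensions drive the seven items.
n = 500
latent = rng.normal(size=(n, 3))
loadings = np.array([
    [1, 0, 0], [1, 0, 0], [1, 0, 0],   # content items load on factor 1
    [0, 1, 0], [0, 1, 0], [0, 1, 0],   # provider items load on factor 2
    [0, 0, 1],                         # ideology loads on factor 3
])
raw = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 7))
df = pd.DataFrame(np.clip(np.round(raw * 0.8 + 2.5), 1, 4), columns=items)

# Do items within a proposed group correlate more with each other
# than with items from other groups?
print(df.corr().round(2))

# Fit a rotated three-factor solution: do the loadings recover the groupings?
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0).fit(df)
print(pd.DataFrame(fa.components_.T, index=items).round(2))
```

With real responses you’d also want a reliability check within each proposed group (Cronbach’s alpha, say), but the shape of the analysis is the same.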

Just eyeballing it, it seems that most of the motivation for ad blocking relates to a lack of trust – provider issues. This is followed by content issues. Although ideological reasons motivated about half the sample (and given the selection bias you’d expect, this is probably an overestimate), that leaves about one-third of the sample who block ads not because they are “anti-branding” but simply because they don’t trust advertisers to act responsibly and because their ads are kind of annoying.

If I were a brand I’d find this very hopeful, as these problems are much easier to fix than ideological opposition to ads. In fact, the same problems have already been addressed in other media through regulatory initiatives (see my other blog post on advertising governance).