I recently watched the footage of Dylann Roof’s police interview, published by the New York Times. Roof killed nine African Americans in their church in Charleston in 2015.
I found the whole thing fascinating. But what really struck me was the seemingly banal role of Google Search in the story. As the Times reports:
He said his “racial awareness” had been inspired by a Google search of the phrase “black on white crime” after the reaction to the 2012 shooting of Trayvon Martin, a black 17-year-old, by George Zimmerman, a neighborhood watch volunteer in Sanford, Fla. “That was it,” he said.
Later he talks about how he came to see things in racial terms, and I wonder whether, and how, Google supported this. Did the results confirm what he already thought, or did they give Roof the impression that his beliefs were fact because Google said so? Google doesn’t lie, after all. They aren’t evil.
Let’s ask a counterfactual question:
What if, when Roof searched, he’d found different results? Would it have changed things? We know that Google matches results to a user’s interests. So if someone demonstrates latent “fascist potential” (as it’s called in the Authoritarian Personality studies), what if Google used this to restrict their access to provocative material? I’m not saying they should, but things might have turned out differently. Doesn’t that mean Google Search played some active role in this crime?
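To make the counterfactual concrete, here is a minimal sketch of what interest-based matching amounts to, assuming a toy relevance-plus-affinity score. Every name, weight, and structure here is a hypothetical illustration, not anything from Google’s actual ranking system:

```python
# Toy sketch of interest-based result matching. Hypothetical throughout:
# nothing here reflects Google's actual ranking system.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float   # how well the page matches the query text
    topics: set[str]   # topics the page covers

def personalised_rank(results: list[Result],
                      interests: dict[str, float],
                      boost: float = 0.5) -> list[Result]:
    """Re-rank results, boosting pages whose topics match the user's profile."""
    def score(r: Result) -> float:
        affinity = sum(interests.get(t, 0.0) for t in r.topics)
        return r.relevance + boost * affinity
    return sorted(results, key=score, reverse=True)

results = [
    Result("Crime statistics, in context", 0.8, {"crime", "statistics"}),
    Result("Inflammatory 'race and crime' page", 0.6, {"crime", "race-baiting"}),
]

# Two users, same query, different orderings: the profile decides what surfaces first.
neutral_user = {"statistics": 1.0}
primed_user = {"race-baiting": 2.0}
print([r.title for r in personalised_rank(results, neutral_user)])
print([r.title for r in personalised_rank(results, primed_user)])
```

The counterfactual lives in those last two lines: change the inferred profile and the same query surfaces different material.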
As regular readers (all zero of them) will know, the logic behind this matching of search results is largely driven by an appeal to advertisers: Google’s business model needs the myth of matching to hold across Google Search. I don’t want this to become yet another “aren’t algorithms evil?” post, but I think it would be interesting to consider Google Search’s role in radicalisation. As I understand it, encouraging terrorism is a crime in the UK.