NEW PAPER: The construction of marketing measures

This paper reveals the fundamental lie of contemporary marketing – particularly marketing analytics and digital marketing. The data they analyse are stupid!

It details how marketers ignored this fact until powerful actors (primarily Google) realised they could make more money selling their products if marketers were convinced to care about a new type of data.

They called it viewability. It was meant to represent that an advert has been viewed by a consumer. But guess what? It doesn’t measure this at all.

The paper explores how and why marketers can be so stupid!


One for the academics: EMAC 2018 “People make marketing”

I was looking at the 2018 European Marketing Academy Conference theme. It’s ‘People Make Marketing’. I’m not saying they have stolen the theme of this blog, but it’s remarkably similar! I hope to be there.

One for the academics: The rise of the strawmen (and women).

Last week the Association of Business Schools published its latest “journal quality guide”. I’ve long been making public complaints about this guide… to no effect.

Fortunately, the ABS has put the new guide behind a registration wall – making it much harder to consult the guide and, as a result, people don’t seem that bothered by it. As my old music teacher said, don’t put the instrument in the case. It’s one barrier to getting it out and practising. (Of course it doesn’t help that the ABS Guide was not very accurate in predicting the REF outcome – which, despite what the ABS says, is pretty much the only reason it gets used by anyone.)

In place of all this, there’s a growing suspicion that citation metrics will be the way forward. Google’s search engine is built on citation metrics, so it might be useful to look at “search studies” to think about the possible consequences of moving to citations as the key indicator of academic success.

The media theorist Lovink makes an important point about Google: it assumes that any mention (link) is positive. The same is true of most citation metrics. To understand the implications of this consider these two examples:

“Cluley (2000) has produced an authoritative theory of everything. Everyone should read it”.

“Cluley (2000) is wrong in every way. No one should read it”.

Each sentence has one citation to the same paper (Cluley, 2000), but one is saying that the paper is amazing while the other says it is bullshit – or, in the language of the UK academic and university administrator, the first is 4* and the second 0*.

Now, I would assume that any right-thinking human would say these are not equal evaluations of Cluley (2000). One says the paper has made a positive contribution to knowledge; the other says it hasn’t. If you were looking to allocate research funding on the basis of these two statements, the first suggests the paper should be funded/supported/rewarded and the second doesn’t.

Yet, most citation metrics treat these two sentences the same. Both add one citation to your total. On Google’s search engine the result of this has been the meme-ification of the internet. Everyone wants to produce what I call “sharebait”. That is to say, the goal is not to say anything important or interesting but to say something in a way that will encourage people to share it. In YouTube videos this usually involves flame wars and simple, short clips, preferably with nude celebrities.
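The equal-weighting problem can be sketched in a few lines of code. This is a toy illustration, not a real bibliometric tool: the two citing sentences echo the examples above, and the keyword lists stand in for a proper sentiment model.

```python
# Toy sketch of the argument: a naive citation metric counts every mention
# equally, while a sentiment-weighted metric distinguishes praise from attack.
# The keyword lexicons below are hypothetical stand-ins for a real sentiment model.

POSITIVE = {"authoritative", "everyone should read"}
NEGATIVE = {"wrong", "no one should read"}

citing_sentences = [
    "Cluley (2000) has produced an authoritative theory of everything.",
    "Cluley (2000) is wrong in every way.",
]

def naive_count(sentences):
    # Every mention adds +1, regardless of what the sentence says.
    return sum("Cluley (2000)" in s for s in sentences)

def sentiment_weighted(sentences):
    # +1 for a positive mention, -1 for a negative one, 0 otherwise.
    score = 0
    for s in sentences:
        text = s.lower()
        if any(w in text for w in POSITIVE):
            score += 1
        elif any(w in text for w in NEGATIVE):
            score -= 1
    return score

print(naive_count(citing_sentences))         # → 2: both mentions count
print(sentiment_weighted(citing_sentences))  # → 0: praise and attack cancel out
```

On the naive metric, being called wrong is worth exactly as much as being called authoritative; the weighted version at least lets the two evaluations cancel.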

Academic research can’t really do this – yet. But what I think could happen is that citation metrics will encourage people to produce work that will be shared by academics. That is to say, strong arguments that are wrong! Remember, it’s much easier to say something stupid than it is to say something ground-breaking and intelligent – this is why we have research evaluation exercises in the first place. Each mention will add to your worth by increasing your citations.

In the age of big data and sentiment analysis, is it not possible to distinguish these?

If not, get ready for the rise of the strawmen (and women).