Scholarly Roadkill

Mitch’s Blog

Citation Wars Heat Up Again, but Who’s Counting?

Friday, December 16, 2016

Everyone in university social science and humanities departments complains about the journal impact factor (JIF). With good reason. The journals that have one are like the 1%, getting all the goodies while the rest of us resent their unfair advantage.

In this case, it’s more like the elite 11%. For those not familiar, the impact factor is a number created by a private firm called Thomson Reuters, which maintains an index of 11,000 academic journals called the Web of Science (one source indicates that there are about 100,000 academic journals in all, but that’s a slippery, constantly changing number) and counts how many times each article in those journals is cited in the other journals in the index over a two-year span (the rough arithmetic is sketched below). It's designed to show the, er, impact of your work. The higher the number attached to the journal, the more important your work is. That’s fine if you’re writing for a cancer journal, since the database is heavily skewed toward science journals, toward older journals, toward journals owned by the four or five largest journal publishers, and toward journals with larger subscription bases. And two years is longer than the active lifespan of most cancer research articles, so the window catches most of their citations. But if you’re writing for a journal about new materialism or slam poetry criticism or indigenous archaeology, one in the humanities or social sciences, from a smaller journal publisher (I used to be one), or on an emerging topic that won’t get traction in the university until the 2020s, you are not likely to be included in the T-R database at all and therefore don’t get an impact factor score for articles that appear in your journal.
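For the arithmetically inclined, the two-year calculation works roughly like this (my sketch of the standard recipe, not T-R’s official wording):

% A journal's 2016 impact factor: citations received in 2016 to what it
% published in the previous two years, divided by how much it published.
\[
  \mathrm{JIF}_{2016} \;=\;
  \frac{\text{citations received in 2016 to items published in 2014 and 2015}}
       {\text{citable items published in 2014 and 2015}}
\]

Note that the number belongs to the journal as a whole, not to any particular article in it. Keep that in mind for what follows.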

You care a great deal about that. In the audit culture of today’s universities, their governmental overseers, and granting agencies, the impact factor is everything. At least it seems to be. Don’t publish in a highly ranked journal and your chances at tenure or a federal grant plummet. Your state legislature might decide that the JIF is not high enough for your campus and cut its budget. No matter that the number measures only the importance of the journal and not of your article, it gets handed around like a photo of a prize marlin caught in the Caribbean or a trophy kill in the Serengeti. You may have published a raft of articles this year and be on the front edge of an important new wave of scholarship. You don’t have a JIF, therefore your research sucks.

For a publisher, no JIF = fewer subscriptions and therefore fewer readers. And less money to support new publishing in cutting-edge fields. Scholarship is rewarded for being conservative, corporate, mainstream, boring. I’ve personally felt the consequences of that impact of the impact factor.

Besides the litany of complaints about the unfairness and inconsistencies of the system, of which there are many (both inconsistencies and complaints about them), there have been a variety of other measures proposed as alternatives to the JIF. These use different criteria, come from sources other than T-R, and measure things other than the citations the journal receives. If you want to read up on them, look at the h-index, Google Scholar, SNIP, SCImago (which sounds more like the name of an Italian bandit than a journal ranking system), and others.

T-R has been trying to minimize criticism to keep their JIF as the gold standard. They created a social sciences citation index within the Web of Science to parallel the original science index. They’ve added an arts and humanities database, but don’t give impact factors to those journals. Last year, they added an Emerging Sources Citation Index (ESCI) that picks up more journals that are “regional” rather than international (e.g. Hungarian Journal of Psychology) or “emerging,” whatever that means. But these don’t get impact factors either. Sort of a halfway house for lesser journals.

Now, with great fanfare, Elsevier, one of the other humongous journal publishers, has announced the creation of a competitor to the JIF, called CiteScore (CS). It uses their own proprietary database, Scopus, which houses 22,500 journals to T-R’s 11,000, and tweaks the counting mechanism in several minor ways. The resulting score is open access and readily available, rather than behind a paywall like the Web of Science scores. And, unlike T-R, whose method of selecting journals is deliberately opaque, the Elsevier methodology is clearly outlined, even if Scopus itself sits behind their paywall. Sounds like a good thing, huh? Maybe a little competition to shake up the JIF and democratize it for the rest of us.
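From what I can tell, the “minor” tweaks boil down to a wider window and a bigger denominator. Roughly (my reading of Elsevier’s launch description, so treat the details as approximate):

% CiteScore: a three-year window, and every document type in the denominator.
\[
  \mathrm{CS}_{2016} \;=\;
  \frac{\text{citations received in 2016 to documents published in 2013--2015}}
       {\text{documents published in 2013--2015}}
\]

Three years instead of two, and the denominator counts every document Scopus indexes (editorials, letters, news items and all) rather than just research articles and reviews. That alone is enough to make the same journal score quite differently on the two measures.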

Weeeelllll, maybe and maybe not.

First of all, Elsevier owns many of the journals in its new index and therefore would directly benefit from higher rankings. And they do. In the first analyses, just summarized in Inside Higher Ed, Elsevier journals tend to score higher on their own index than in the T-R one, by as much as 25%. Curious, huh? Or that the rankings of key journals from one of Elsevier’s chief competitors, Springer, plummet in the new index. Doubly curious.

So, we’re back to titans of the 1% dueling each other for market share rather than helping the rest of us. Sound familiar? Call it the trickle-down effect in citation indexes. It’s bound to help us eventually, isn’t it?  That’s what they tell us.

Don’t hold your breath.
