Fake Journal Impact Factors and how to spot them

In recent weeks, two articles have been published drawing attention to the proliferation of fake, spurious and counterfeit journal metrics. The first paper, published in the Wiley journal Bioessays, attributes this phenomenon to the rise of ‘predatory’ journals with questionable scientific practices, which nevertheless require strong rankings in order to appear reputable. The authors identify a total of 21 websites that provide such metrics – websites that, they claim, thereby ‘exploit the desperation of some publishers and authors to show some kind of scholarly metric’ (Gutierrez, Beall, & Forero, 2015, p. 2). Independent analyst and publishing consultant Phil Davis likewise acknowledges the problem in a recent post on The Scholarly Kitchen, and argues that the use of such metrics erodes faith in all metrics, legitimate and spurious alike. Both articles outline the potential causes and consequences of spurious metrics within the academic publishing industry – but how do you spot them? On receiving a letter (such as the above), how do you know whether it’s time to celebrate, or whether you have been targeted by a scam?

For those of you affiliated with a publishing house (such as Wiley), your first step should be to refer the letter to your editorial team. Most publishers will have procedures in place to handle data feeds to the many abstracting and indexing websites, and will have experience in identifying fraudulent services. However, there are also some steps you can take to sense-check your invitation:

1. Research the company

A simple web search will often pull up questions raised by other editors or analysts regarding a particular metric source. Failing that, explore their website. What is their history? Do they give an office address, or a contact for data corrections?

2. Are they asking for an upfront payment?

Very few reputable metrics providers will ask you for a fee in return for indexing (and providing metrics for) your journal; after all, indexing quality content enhances the value of their database, and publicizing the metric on your website will direct traffic back to them. If they’re asking for a payment, be wary.

3. Do they outline the mathematical formula for their metric?

It’s all very well to know that your Impacting Factoid is 3.662, but without knowing the calculation of the metric (or what that means in the context of other indexed journals), it is a pretty useless number. So, do they tell you what the calculation is – and if so, does the calculation make sense?
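For comparison, the widely used two-year Impact Factor has a simple, published formula: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items published in those two years. A minimal sketch, using invented figures for a hypothetical journal’s 2014 score:

```python
# Standard two-year Impact Factor calculation.
# All figures below are hypothetical, for illustration only.

citations_2014_to_2012_13 = 450  # citations in 2014 to items published in 2012-2013
citable_items_2012_13 = 200      # articles and reviews published in 2012-2013

impact_factor = citations_2014_to_2012_13 / citable_items_2012_13
print(round(impact_factor, 3))  # 2.25
```

A provider that cannot show you a formula this concrete – and the counts feeding into it – is asking you to take the number on faith.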

4. What is their data source?

In Harry Potter and the Chamber of Secrets, J.K. Rowling reminds us never to ‘trust anything that can think for itself if you can't see where it keeps its brain’. In the context of metrics, you should never trust a metric if you can’t identify (and validate) the underlying dataset. A citation metric in particular must draw data from a robust citation database. Unlike a normal abstracting and indexing database, a citation database needs to index the article reference lists. It then counts citations by forming links between the references and other indexed articles. Accurate citation metrics rely on a citation database that is carefully curated, avoiding issues such as the inaccurate or duplicate indexing of articles. The most reliable citation databases are Thomson Reuters’ Web of Science and Elsevier’s Scopus database, which power metrics such as the Impact Factor, the SCImago Journal Rank (SJR) and the Source Normalised Impact per Publication (SNIP). Google Scholar is another popular (and free) database, although the data quality is less robust.
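The linking step described above can be sketched in a few lines: the database matches each entry in a paper’s reference list against the items it has indexed, and a citation is counted only when a match is found. The DOIs and titles below are invented for illustration; note how a reference to a paper outside the database is simply lost.

```python
# Minimal sketch of how a citation database forms citation links.
# All DOIs and titles are hypothetical.

indexed_articles = {
    "10.1000/aaa": {"title": "Paper A", "citations": 0},
    "10.1000/bbb": {"title": "Paper B", "citations": 0},
}

# Reference lists of two newly indexed papers (the DOIs they cite).
reference_lists = [
    ["10.1000/aaa", "10.1000/ccc"],  # cites Paper A and an un-indexed paper
    ["10.1000/aaa", "10.1000/bbb"],  # cites Papers A and B
]

for refs in reference_lists:
    for doi in refs:
        if doi in indexed_articles:  # links can only form to indexed items
            indexed_articles[doi]["citations"] += 1

print(indexed_articles["10.1000/aaa"]["citations"])  # 2
print(indexed_articles["10.1000/bbb"]["citations"])  # 1
```

This is also why curation matters: a duplicated record would split one paper’s citations across two entries, and the citation to ‘10.1000/ccc’ vanishes because that paper was never indexed – which leads directly to the next question.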

5. If they cite their own datasource – how has it been compiled, and what is its scope?

So, the website tells you that they calculate their citation data from their own in-house database of indexed journals. That’s all well and good. However, remember that a citation database can only count citations to and from the papers indexed in the database. Therefore, it is important to know:

  • How many journals are indexed?  
  • What are their indexing criteria? 
  • Can you search the database (at the article level) – i.e., can you validate their data?

This list is, of course, far from exhaustive, and some valid services will not meet all points – however, if you are suspicious of a service, and they fall short on these criteria, my advice is to avoid them. The encroachment of fake metrics into the journals market, as Phil Davis notes, does damage to faith in all journal metrics, and it is firmly in our best interests to limit their growth.

References

Davis, P. (2015, March 10). Knockoffs Erode Trust in Metrics Market. Retrieved January 31, 2023, from Scholarly Kitchen.

Gutierrez, F., Beall, J., & Forero, D. (2015). Spurious alternative impact factors: The scale of the problem from an academic perspective. Bioessays.
