Saturday 11 September 2010

The World Cup of Economics Journals

Just this week I received an email informing me about the latest impact factors (IFs) of some journals. What's in a number?

The IF of a journal for a given year is calculated as the number of citations received in that year by the papers published in the preceding two years, divided by the number of papers published in those two years. This measure was originally introduced by Garfield to highlight journals that are hot. The index has been calculated annually ever since, for an ever-increasing set of journals. As this indicator is readily available (for a non-negligible fee!), people started to use it as a quality indicator. A journal with a higher IF was (and is) considered better. A paper in a journal with a higher IF was considered better. A researcher with papers in journals with higher IFs was considered better. A department with researchers with papers in journals with higher IFs was considered better.
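To make the formula concrete, here is a back-of-the-envelope calculation in Python; the journal and all the numbers are invented, purely for illustration.

    # Back-of-the-envelope IF calculation; the numbers are invented.
    def impact_factor(cites, papers):
        """IF for year y: citations received in year y by papers published
        in years y-1 and y-2, divided by the number of those papers."""
        return cites / papers

    # Say a journal published 160 papers in 2008-2009, and these papers
    # collected 400 citations in 2010:
    print(impact_factor(400, 160))  # 2.5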

In a sense everyone was right. Ceteris paribus, a journal with a higher IF has a greater impact (whatever that means) and is therefore better in some sense. Unfortunately the IF reflects not only quality but many other things as well.


There are several possible ways to rank academic journals. Some rankings are based on the evaluation of "experts". Perhaps because I am a mathematician, I believe that quality is somehow measurable. It should be visible... and I agree with Garfield (and many others) that the evaluation should be based on citations. A citation indicates that the author uses or acknowledges some existing result, admitting that the result has been developed elsewhere. When a journal is cited a lot, it contains many useful or interesting results. True, there are negative citations, but would anyone bother to cite a bad result in a bad or unimportant journal? So far, then, the IF is OK. The problems start here. I will not go on to present all the criticisms of the IF - I have seen literally hundreds of papers about them over the years. They come under two themes: IFs of different disciplines are not comparable due to different citation customs, and the two-year window ignores the different "temperature" of fields. For instance, the hottest economics journals are lukewarm at best, according to Garfield, due to the different pace of disciplines. A cardiologist friend was telling me with amazement about a paper that was published more than 15 years ago. In game theory such a paper is actually quite recent, but citations to it do not enter the IF formula.

In a recent working paper (Kóczy and Strobel, 2010) we discuss problems of a different nature. Given that journal rankings are used in hiring, tenure, and funding decisions, researchers want to publish in good journals, so journals want to be good. Let us hope that this is also their general wish, but why not give themselves a bit of an extra push in the rankings when this is possible? We find that existing ranking methods are manipulable. Manipulation can take various forms: adding self-cites, adding cites to other journals in a way that improves the evaluation of one's own journal, or publishing more or longer papers (a toy calculation follows below). Our criticism is constructive: we propose a new ranking method that is neutral in this sense. Our idea is new precisely in that we use something that has been around for decades. We use standard models from social choice to choose top journals.
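The promised toy calculation, again with invented numbers, shows how much leverage self-citation alone gives a journal over its own IF:

    # Invented numbers showing how self-citations move the IF.
    papers = 100           # papers published in the two preceding years
    external_cites = 150   # citations coming from other journals
    self_cites = 50        # citations the journal steers to its own recent papers

    print(external_cites / papers)                 # 1.5 without self-cites
    print((external_cites + self_cites) / papers)  # 2.0 with self-cites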

First we compare journals in pairs: when journal A cites B more often than B cites A, B publishes the more valuable results and is therefore better than A. The result is an (incomplete) tournament. A tournament is a mathematical object, but the name is rather descriptive: it is like the World Cup, where some teams (here: journals) play against each other. The tournament is incomplete as not every pair of journals is compared (the same is true for most sports tournaments), and there are many ways to solve tournaments (Laslier, 1997). For a number of (good) reasons we decided to go for the simplest, a scoring method where a win is worth 1 point, a draw 1/2, and a loss 0, and we look at relative performance: the score earned divided by the maximum possible score. In the paper we present a ranking of the academic journals in Economics based on data until 2005. (I am working on getting more recent data now.)
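A minimal sketch of the scoring step, with three imaginary journals and made-up citation counts (my own illustration, not the paper's actual data or code):

    # cites[a][b] = number of citations from journal a to journal b.
    cites = {
        "A": {"B": 30, "C": 10},
        "B": {"A": 20, "C": 25},
        "C": {"A": 10, "B": 25},
    }
    journals = list(cites)

    def score(j):
        """Relative performance: points earned over the maximum possible."""
        points, games = 0.0, 0
        for k in journals:
            if k == j:
                continue
            out, inc = cites[j].get(k, 0), cites[k].get(j, 0)
            if out == 0 and inc == 0:
                continue  # no citations either way: pair not compared
            games += 1
            if inc > out:       # k cites j more than j cites k: j wins
                points += 1.0
            elif inc == out:    # draw
                points += 0.5
        return points / games if games else 0.0

    for j in sorted(journals, key=score, reverse=True):
        print(j, score(j))  # B 0.75, C 0.5, A 0.25

Here B beats A (A cites B 30 times, B cites A only 20), while the other two pairs are draws, so the ranking comes out B, C, A.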

References

Kóczy L.Á., Strobel M., 2010. The World Cup of Economics Journals: A ranking by a tournament method. Working Paper 1011, Óbuda University, Budapest.
Laslier J.F., 1997. Tournament Solutions and Majority Voting. Springer, Berlin.
