
Re: citations as indicators of quality



I recommend to you all an article in the October issue of PS titled "Ranking Political Science Journals: Reputational and Citational Approaches" by Michael Giles and James Garand. It is very illuminating about both the advantages and disadvantages of relying on citation counts in evaluating journals.

It begins by noting one fundamental flaw in any citation analysis by quoting another author thus: "if [Journal X] published an execrable paper that attracted a million critical citations as an example of appalling practice, all other papers previously and later published in that journal would suddenly be much more highly ranked."
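To make the arithmetic of that flaw concrete, here is a minimal sketch of my own (not from the article), assuming a naive impact score computed as mean citations per article; the journal names and numbers are invented for illustration:

    import statistics

    # Hypothetical illustration: score a journal by the mean citation
    # count of its articles, as naive citation rankings effectively do.
    def impact_score(citation_counts):
        return statistics.mean(citation_counts)

    ordinary_papers = [12, 8, 15, 6, 20]        # a typical journal's articles
    journal_x = ordinary_papers + [1_000_000]   # plus one notorious paper

    print(impact_score(ordinary_papers))  # 12.2
    print(impact_score(journal_x))        # ~166677: one bad paper lifts the score

The million critical citations say nothing good about the offending paper, yet they raise the journal's score, and with it the apparent standing of every other article published there.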

Using ISI data, but relying on an earlier reputational survey conducted by the authors to identify journals, the article ranks 90 journals in the field of political science. Right off the bat, though, its analysis reveals some shortcomings. For example, the Journal of Policy History, which our Press publishes, was excluded along with some twenty others because ISI does not collect citation data for it--this despite the fact that an article in PS last year by a top expert in American political development called it the second-best journal in that subfield, right behind Studies in American Political Development. The ranking also omits Philosophy and Public Affairs, a journal I helped found at Princeton, now published by Blackwell, which has a sterling reputation among political philosophers working in the tradition of John Rawls and is currently edited by a political scientist at Princeton, Charles Beitz.

In this regard, the article does concede that the "demarcation or boundary problem" in defining what is or is not a political science journal can severely affect the validity of a citational analysis, more so than a reputational one. And in investigating why journals in international relations tended to be disproportionately highly ranked, the article finds that differences in citation practices among scholars in different subfields can also have a substantial effect on the results.

The article ends by cautioning against confusing impact with quality: "Citations are a direct measure of impact but measure the quality of an article only indirectly and imperfectly. To the extent that there is a relationship between journal citations and quality it is asymmetrical. High journal impact may provide a reasonable basis for inferring high average article quality for a journal, but low impact does not provide a basis for inferring low quality.... In the absence of reputational measures directly assessing perceived journal quality we are concerned that this important distinction between quality and impact might be elided in practices of professional assessment."

And, I would add, we should be concerned about its elision in making decisions about journal cancellations, too.

Sandy Thatcher

___

Sanford G. Thatcher
Director, Penn State Press
University Park, PA 16802-1003