
Re: citations as indicators of quality



I don't think the matter is as straightforward as you make it out to be, David. After all, this was an effort to rank journals in the field of political science, and there have been several such efforts in the past. Yes, I agree that it would make more sense to do rankings within subfields--at least the major subfields, which in political science are American politics, comparative politics, international relations, and political theory. But where does one stop? There are many different sub-subfields within, say, comparative politics. Should one do rankings only within sub-subfields? And what about a field like philosophy, where there has traditionally been a split between Anglo-American analytic and Continental philosophy, with journals reflecting one or the other orientation but rarely both? Those are not even subfields but methodological orientations, yet they structure that discipline in meaningful ways. Further, there are areas like political philosophy that cut across disciplines such as philosophy and political science. Should one attempt rankings in such an area separately from rankings in the respective disciplines? In short, there is no end to such ways of cutting the knowledge pie, and my own opinion is that no one method of ranking is going to provide an adequate assessment of the merits of any given journal.

As for "rejected work," where does one draw the line? I note that you don't mention "intelligent design," David. If there is a huge amount of writing about this subject, citation counts will soar, but one surely wouldn't base a decision about whether a journal is worth including in a science collection because it favors that approach and draws attacks from many quarters. Maybe include it in the sociology of science? One might also argue the point that disputes over the bell curve and cold fusion really "drive further inquiry." Perhaps they may better be viewed as distractions from real science, impeding its progress.

Sandy Thatcher
Penn State University


No librarian or publisher--nobody but an uninformed academic
bureaucrat--would ever attempt to compare the quality of
journals between different fields, or the work of faculty between
different fields, using publication counts or citation metrics,
regardless of attempts at normalization.

There may be rational, objective methods for the distribution of
resources within individual academic subjects, but the
distribution of library, research, or education resources among
the different subjects is a political question. It is, for
example, reasonable to attempt a rational discussion of which
developmental molecular biologists do the best research, or of
the relative importance of the different publication media in
developmental molecular biology; but to decide the relative
importance of researchers in that subject with respect to the
other fields of biology--let alone to mathematics or, even more
absurdly, comparative literature--is not a question for
calculation.

But Sandy falls into the fallacy of attributing unimportance to
rejected work. The disputes over the Bell Curve or cold fusion
are what drive further inquiry. We progress in all fields of
science by scientifically disproving error.

David Goodman, Ph.D., M.L.S.
previously:
Bibliographer and Research Librarian
Princeton University Library

dgoodman@princeton.edu