
Re: citations as indicators of quality



There are natural clusters.  It's always possible to find fringe 
cases where the rules don't really hold, or cases on the 
boundary. That does not affect the basic validity of citation 
analysis, any more than such problems affect the validity of 
other scientific approaches. There are always small 
differences, and I can discuss at some length whether, for 
example, Journal of Biological Chemistry and Biochemistry (ACS) 
are in separate microclusters. But the same basic citation 
patterns hold in both of them.
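
To make this concrete, here is a toy sketch (in Python) of the 
kind of calculation behind such clusters: journals whose outgoing 
citations look alike get grouped together. The journal names and 
counts below are invented for illustration; real data would come 
from a citation index.

from math import sqrt

# journal -> {cited journal: citation count} -- hypothetical numbers
citations = {
    "J Biol Chem":    {"J Biol Chem": 400, "Biochemistry": 300,
                       "Cell": 120, "PNAS": 90},
    "Biochemistry":   {"Biochemistry": 350, "J Biol Chem": 280,
                       "Cell": 100, "PNAS": 85},
    "Am Pol Sci Rev": {"Am Pol Sci Rev": 200, "World Politics": 150,
                       "J Politics": 130, "PNAS": 5},
    "World Politics": {"World Politics": 180, "Am Pol Sci Rev": 140,
                       "J Politics": 110},
}

def cosine(a, b):
    # Cosine similarity between two sparse citation vectors.
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Greedy single-link grouping: a journal joins a cluster if it is
# similar enough to any journal already in that cluster.
THRESHOLD = 0.3
clusters = []
for name, vec in citations.items():
    for cluster in clusters:
        if any(cosine(vec, citations[m]) >= THRESHOLD for m in cluster):
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)

With these made-up numbers the two biochemistry journals end up in 
one cluster and the two political-science journals in another; the 
boundary cases are exactly the ones that depend on the threshold.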

When I did collection development at Princeton, I purchased for 
the biology library everything about intelligent design having 
any reference to the ordinary scientific literature, on the 
grounds that the biologists need to know about it. There is 
actually not all that much cross-citation: the ID people cite a 
very small part of the biology literature, and only to attack 
it. (And the biologists in turn cite a very small part of the 
fundamentalist religious material.) That pattern pretty much 
holds across the other fringe and pseudo-sciences--they don't 
really talk to the regular sciences, and vice versa.

And there are good examples of work done on ostensibly the same 
subject where the literatures are isolated from each 
other--psychoanalysis vs. the rest of psychiatry & psychology is 
a good example, and one I used for teaching. Medline covers 
both, but there are very few cross-citations.
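
To put a rough number on that isolation, one could compute the 
fraction of references that cross between the two literatures. 
Again a toy sketch with invented figures--a real study would pull 
the reference lists from Medline or a citation index.

literature_a = {"Int J Psychoanal", "Psychoanal Q"}        # psychoanalysis
literature_b = {"Am J Psychiatry", "Arch Gen Psychiatry"}  # psychiatry

# (citing journal, cited journal, reference count) -- hypothetical totals
references = [
    ("Int J Psychoanal", "Psychoanal Q", 420),
    ("Int J Psychoanal", "Am J Psychiatry", 12),
    ("Am J Psychiatry", "Arch Gen Psychiatry", 510),
    ("Am J Psychiatry", "Int J Psychoanal", 8),
]

def cross_citation_rate(src, dst):
    # Fraction of references from journals in src that point into dst.
    out = sum(n for citing, cited, n in references if citing in src)
    cross = sum(n for citing, cited, n in references
                if citing in src and cited in dst)
    return cross / out if out else 0.0

print("A -> B:", cross_citation_rate(literature_a, literature_b))
print("B -> A:", cross_citation_rate(literature_b, literature_a))

Two genuinely isolated literatures show rates close to zero in both 
directions, which is what one sees with psychoanalysis and the rest 
of psychiatry.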

David Goodman, Ph.D., M.L.S.
dgoodman@princeton.edu

----- Original Message -----
From: Sandy Thatcher <sgt3@psu.edu>
Date: Tuesday, November 27, 2007 11:24 pm
Subject: Re: citations as indicators of quality
To: liblicense-l@lists.yale.edu

> I don't think the matter is so straightforward as you make it 
> out to be, David. After all, this was an effort to rank 
> journals in the field of political science, and there have been 
> several such efforts in the past. Yes, I agree that it would 
> make more sense to do rankings within subfields--at least the 
> major subfields, which in political science are American 
> politics, comparative politics, international relations, and 
> political theory. But where does one stop? There are many 
> different sub-subfields within, say, comparative politics. 
> Should one do rankings only within sub-subfields? And what about 
> a field like philosophy, where there has traditionally been a 
> split between Anglo-American analytic and Continental 
> philosophy, with journals reflecting one or the other 
> orientation but rarely both together? Those are not even 
> subfields but methodological orientations, yet they do 
> structure that discipline in meaningful ways. Further, there 
> are areas like political philosophy that cross disciplines like 
> philosophy and political science. Should one attempt rankings 
> in such an area separate from rankings in the respective 
> disciplines? In short, there is no end of such ways of cutting 
> the knowledge pie, and my own opinion is that no one method of 
> ranking is really going to provide an adequate assessment of 
> the merits of any given journal.
>
> As for "rejected work," where does one draw the line? I note 
> that you don't mention "intelligent design," David. If there is 
> a huge amount of writing about this subject, citation counts 
> will soar, but surely one wouldn't decide that a journal 
> belongs in a science collection simply because it favors that 
> approach and draws attacks from many quarters. Maybe include it 
> in the sociology of science? One might also dispute the claim 
> that the controversies over the bell curve and cold fusion 
> really "drive further inquiry." Perhaps they are better viewed 
> as distractions from real science, impeding its progress.
>
> Sandy Thatcher
> Penn State University
>
>
>>No librarian or publisher-- nobody but an uninformed academic
>>bureaucrat-- would ever attempt to compare the quality of
>>journals between different fields, or the work of faculty between
>>different fields, using publication counts or citation metrics,
>>regardless of attempts at normalization.
>>
>>There may be rational objective methods for the distribution of
>>resources within individual academic subjects, but the
>>distribution of library or research or education resources among
>>the different subjects is a political question. It is, for
>>example, reasonable to attempt a rational discussion of which
>>developmental molecular biologists do the best research, or the
>>relative importance of the different publication media in
>>developmental molecular biology, but to decide the relative
>>importance of researchers in that subject with respect to the
>>other fields of biology--let alone to mathematics--or even more
>>absurdly, comparative literature-- is not a question for
>>calculation.
>>
>>But Sandy falls into the fallacy of attributing unimportance to
>>rejected work. The disputes over the Bell Curve, or cold fusion,
>>are what drive further inquiry. We progress in all fields of
>>science by scientifically disproving error.
>>
>>David Goodman, Ph.D., M.L.S.