
Re: citations as indicators of quality



I recommend that everybody read Geoffrey Crossick's article "Journals in the arts and humanities: their role in evaluation" in Serials, November 2007, which shows that peer-reviewed journals are not the self-evident outlet of choice in the humanities that they are in the sciences.

The most prestigious forms of output in the humanities are still the monograph, thematic books, and scholarly editions of texts, and the judgment of peers remains the core way to establish the quality of research outputs.

What is the impact factor of Plato's Republic, written approximately 360 BC? Still popular, still cited, still an extremely important philosophical text.

Jan

Sandy Thatcher wrote:
But the authors of the article I cited raise a very crucial point in
demonstrating that citation practices differ across disciplines and
subfields within disciplines. It surely makes no sense to rank a
journal higher, or keep subscribing to it, because scholars in that
subfield, like international relations, simply cite more than their
colleagues in other subfields. (If this were the main criterion, I
suppose law journals would always rank highly because they contain
massive numbers of citations, with many pages having more footnotes
than text, though of course we all know that they are not really peer
reviewed, being edited by law students.) This is one among several
reasons these authors put forward to argue for using reputational
analysis, too, in order to make up in part for the shortcomings of
pure citational analysis.

The reductio ad absurdum of citational analysis would be works like
"The Bell Curve," which received a tremendous amount of attention,
most of it quite negative, or articles touting "cold fusion," an
equally controversial topic, or "intelligent design." One would surely
have to use scare quotes in describing any of these kinds of works as
having "value."

Sandy Thatcher
Penn State University Press