
Re: Impact Factor, Open Access & Other Statistics-Based Quality



There is of course a distortion. If one is looking to measure quality, an
impact factor is unlikely to be the right tool. Two equivalent papers, one
OA and the other in a subscription journal, should have the same or a very
similar IF. If not, they're not equivalent (or, more to the point in the
current situation, their impact isn't measured properly, e.g. by arbitrary
exclusion from the count by the 'impact factory').

But impact factors do not measure quality; they measure impact. That is not
at all the same thing. Of two equivalent papers, the OA one is likely to
have the greater impact (when it is measured, of course).
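For readers unfamiliar with how the metric is derived, the standard two-year
impact factor is simple citation arithmetic, which is exactly why it says
nothing directly about quality. A minimal sketch (the journal figures below
are invented for illustration, not real data):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor for year Y:
    citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2004 to its 2002-2003 papers,
# of which 200 were citable items.
print(impact_factor(600, 200))  # 3.0
```

Note that nothing in the calculation asks whether the citing papers agree
with, refute, or merely mention the cited work; the metric counts citations,
i.e. impact, and any systematic difference in availability (such as OA)
feeds straight into the numerator.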

Everybody is playing the impact factor game. Authors and publishers
(including BioMed Central with some pretty nice impact factors) do,
because most funders and tenure committees do (though often deny it), so
careers and business prospects depend on it. But it shouldn't be confused
with quality.

On quality flaws in high impact journals, this may be illustrative reading, too: http://www.biomedcentral.com/content/pdf/1471-2288-4-13.pdf

Jan Velterop


On 1 Jun 2004, at 06:34, Sally Morris (ALPSP) wrote:

I'm concerned that there's possibly a built-in distortion here. Impact
factors (or any other 'qualitative' measures) need to be equally
applicable across the entire literature, both open and closed-access.
However, both 'big deals' and OA may have an inbuilt distortion factor
which has everything to do with availability and nothing (necessarily) to do with quality.

Can anyone suggest how we can solve this dilemma? I'm assuming our aim is to 'measure' quality, not to skew perceptions in favour of any particular business model ;-)

Sally Morris, Chief Executive
E-mail: chief-exec@alpsp.org