Impact Factor, Open Access & Other Statistics-Based Quality Models
- To: liblicense-l@lists.yale.edu
- Subject: Impact Factor, Open Access & Other Statistics-Based Quality Models
- From: Michael Leach <leach@eps.harvard.edu>
- Date: Wed, 26 May 2004 19:44:28 EDT
- Reply-to: liblicense-l@lists.yale.edu
- Sender: owner-liblicense-l@lists.yale.edu
Dear Colleagues:
It was not surprising to read on the Nature "Web Focus" site ("access to
the literature") John Haynes' comment that "[o]ne of the obstacles for
publishing in NJP was removed in 2003 when the journal gained an impact
factor." ("Can Open Access be viable? The Institute of Physics'
experience") As other research has shown (and as many on this list have
commented), having an Impact Factor can and does influence the success (or
not) of a given open access title, although it is not the only factor for
success. The suitability and accuracy of the Impact Factor will certainly
continue to be debated, but its influence on readers and authors can not
be disputed.
As we build institutional repositories (IRs) and begin the process of
linking them, we could gain the ability to create our own impact factors,
linking articles and citations among repositories all over the world.
Similarly, as IR administrators work with publishers (open access as well
as more traditional ones) to deposit postprint copies of articles and
other digital objects directly in IRs, these new IR-Impact Factors could
acquire weight comparable to the Thomson/ISI Impact Factor. An IR-Impact
Factor would likely cover literature not currently indexed by Thomson/ISI,
so while the two Impact Factors would overlap, each would provide a partly
independent means of assessing a journal's or article's impact in a given
community.
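To make the idea concrete, here is a rough Python sketch of how such an
IR-Impact Factor might be computed from pooled repository metadata. The
record layout (ids, journal names, citation links) is purely an assumption
on my part; real repositories would have to expose this via OAI-PMH
harvesting or the like. The formula itself is just the classic two-year
Impact Factor window:

    def ir_impact_factor(articles, citations, journal, year):
        # Citations received in `year` to the journal's articles from
        # the two preceding years, divided by the number of such
        # "citable" articles (the standard two-year window).
        window = (year - 2, year - 1)
        citable = [a for a in articles
                   if a["journal"] == journal and a["year"] in window]
        if not citable:
            return 0.0
        citable_ids = {a["id"] for a in citable}
        hits = sum(1 for c in citations
                   if c["year"] == year and c["cited_id"] in citable_ids)
        return hits / len(citable)

The only difference from the Thomson/ISI calculation would be the source
of the citation links: metadata harvested from linked IRs worldwide rather
than from a proprietary journal index.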
However, there may be another way to create an "Impact Factor-like"
statistic for analyzing open access materials and other published works.
With the COUNTER standard and similar e-journal statistical tools, it is
possible for a group of libraries to merge their user access statistics
and produce lists of "most accessed papers" or "most accessed e-journals"
for given fields.
For instance, the NERL (NorthEast Research Libraries) consortium could
pool its statistics to produce such lists, or perhaps the top research
institutions in a given field (e.g. MIT, Harvard, Stanford, Caltech, etc.
in physics) could produce them. Granted, this "ranking" would be less
"scientific" than the current Thomson/ISI Impact Factor, but it may still
serve the purpose our users and readers want: a gauge of quality and
relevance.
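As a very rough sketch of the pooling step, assuming each member library
can already reduce its COUNTER reports to simple per-title access counts
(that reduced format is my assumption, not part of the standard):

    from collections import Counter

    def pooled_ranking(library_reports, top_n=10):
        # Merge per-library access counts and rank the most accessed
        # titles across the whole consortium.
        totals = Counter()
        for report in library_reports:
            totals.update(report)
        return totals.most_common(top_n)

    # Three hypothetical member libraries' counts for one field
    reports = [
        {"New J. Phys.": 420, "Phys. Rev. D": 310},
        {"New J. Phys.": 150, "J. Geophys. Res.": 90},
        {"Phys. Rev. D": 200, "New J. Phys.": 75},
    ]
    print(pooled_ranking(reports, top_n=3))

The hard part, of course, is not the arithmetic but obtaining comparable,
shareable counts from each institution, which is where licensing comes in.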
License agreements with publishers would have to be adjusted to include a
provision for publishing and pooling the statistical data. Open access
publishers would have to be willing and able to supply such data as well.
Part of the debate surrounding open access centers on questions of quality
and relevance. Waiting five years for an Impact Factor, as IOP's New
Journal of Physics did, could hinder the acceptance of open access.
Creating other measures of quality, such as the "pooled
statistics/ranking" or IR-Impact Factor models described above, could
provide another, and earlier, measure for many new publications. With many
such quality models available, individual readers and authors could pick
whichever works best for them in determining quality and relevance.
Michael R. Leach
Harvard University
Physics Research Library & Kummel Library of Geological Sciences
617-495-2878 or -2029 (voice)
leach@eps.harvard.edu or mrleach@fas.harvard.edu