
RE: Open access to research worth £1.5bn a year



I'd just like to add a footnote to Phil's characteristically thoughtful
comments.  Another complication in trying to arrive at a like-for-like
comparison is which of their articles authors choose to self-archive.  
Maybe I'm just a cynical old git, but it would surprise me if authors
weren't at least slightly more likely to self-archive their best work and
less likely to be bothered with their more humdrum output.  You might
expect the best work to be more heavily cited however it was made available.

It's a bit embarrassing to be contributing yet another anecdotal/unproven
hypothesis to the list, but in my own defence I can say that I'm not
saying (a) that this definitely happens, (b) that it's impossible to
quantify for the purposes of analysis, or (c) that if it does happen you
can't come up with a methodology to compensate.  I'm just saying it's a
factor that needs looking at; to me it doesn't look easy to fathom, and
the various conclusions we currently have to hand may be interesting but
are not even close to definitive.

Tony McSeán
Director of Library Relations
Elsevier
+44 7795 960516
+44 1865 843630

-----Original Message-----
[mailto:owner-liblicense-l@lists.yale.edu] On Behalf Of Phil Davis
Sent: 30 September 2005 02:18
To: liblicense-l@lists.yale.edu
Subject: Re: Open access to research worth £1.5bn a year

I just read the JEP article (referred to by Peter Banks) comparing
articles printed in Pediatrics with other articles appearing only in the
online edition.  The authors' main finding suggests that despite the wider
potential audience for articles published freely online, articles
appearing in print received more citations:

"The difference between the mean citation levels for print and online was
3.09 ±0.93 in favor of print (95% CI), meaning that an online article
could expect to receive 2.16 to 4.02 fewer citations in the literature
than if it had been printed."
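For what it's worth, the interval arithmetic in that quotation checks out: a mean difference of 3.09 with a 95% CI half-width of 0.93 gives exactly the stated bounds.  A minimal sketch (the variable names are mine, not the authors'):

```python
# Reconstructing the confidence interval quoted from the JEP article:
# mean difference of 3.09 citations in favor of print, 95% CI half-width 0.93.
mean_diff = 3.09   # mean citation difference (print minus online)
half_width = 0.93  # 95% confidence interval half-width

lower = round(mean_diff - half_width, 2)  # 3.09 - 0.93
upper = round(mean_diff + half_width, 2)  # 3.09 + 0.93

print(f"95% CI: {lower} to {upper} fewer citations for online-only articles")
# -> 95% CI: 2.16 to 4.02 fewer citations for online-only articles
```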

Or in other words, their data do not support the hypothesis that full OA
journals receive more citations than non-full OA journals.

Yet it is methodologically difficult to rigorously test this hypothesis,
and the use of inferential statistics in this study suggests that they are
trying to generalize beyond their own journal.  In this study, the authors
compared two different sets of articles: 1) those that were selected for
inclusion in the main journal, and 2) those that were not.  Selection bias
alone may explain the different results, or at least introduce a large
enough bias that the results may not accurately reflect their research
question.  In other words, it would be difficult to know whether their
results reflect accessibility or selection bias.

Still, this article fails to support the unstated hypothesis that full OA
journal articles receive more citations than non-full OA journal articles.  
For that conclusion alone, we would be wise to stay with the null
hypothesis (that is, no significant difference) unless we start seeing
compelling evidence the other way.

The other conclusion we may come to is that it may be impossible to make
universal statements about Open Access publishing (e.g. that it can
provide 50 - 25% more citations).  Methodology problems in designing
rigorous studies may only permit us to make anecdotal statements about
particular journals or publishing models that have very narrow parameters
for generalization.

--Phil Davis