
Re: [SIGMETRICS] Alma Swan: The OA citation advantage: Studies and results to date



Here, reposted, is some feedback on meta-analysis from one of its
leading exponents:

Gene V Glass Says:
Mar 12, 2010 at 12:47 pm

http://scholarlykitchen.sspnet.org/2010/03/11/rewriting-the-history-on-access/

Far too many issues about OA and meta-analysis have been raised in
this thread for me to [be able to] comment on them all. But having
dedicated 35 years of my efforts to meta-analysis and 20 to OA, I
can't resist a couple of quick observations.

Holding up one set of methods (be they RCTs or whatever) as the
gold standard is inconsistent with decades of empirical work in
meta-analysis that shows that 'perfect studies' and 'less than
perfect studies' seldom show important differences in results. If
the question at hand concerns an experimental intervention, then
random assignment to groups may well be inferior as a matching
technique to even an ex post facto matching of groups.
Randomization is not the royal road to equivalence of groups;
it's the road to probability statements about differences.

Claims about the superiority of certain methods are empirical
claims. They are not a priori dicta about what evidence can and
can not be looked at.
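[Glass's point that randomization licenses probability statements, rather than guaranteeing equivalent groups, can be illustrated with a small simulation. The numbers below are purely hypothetical and not drawn from any study in this thread:]

```python
# Illustrative simulation: randomize 20 subjects with varying baseline
# scores into two groups many times, and tally how often the group means
# differ by a given amount. No single randomization makes the two groups
# equivalent; what randomization buys is the ability to attach a
# probability to chance differences of any given size.
import random

random.seed(42)
baseline = [random.gauss(100, 15) for _ in range(20)]  # hypothetical scores

diffs = []
for _ in range(10_000):
    subjects = baseline[:]
    random.shuffle(subjects)              # random assignment to groups
    a, b = subjects[:10], subjects[10:]
    diffs.append(abs(sum(a) / 10 - sum(b) / 10))

# The probability statement: how often does chance alone produce a
# between-group gap of 5 points or more?
p_gap = sum(d >= 5 for d in diffs) / len(diffs)
print(f"P(|mean difference| >= 5 by chance alone) ~ {p_gap:.2f}")
```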

Glass, G.V.; McGaw, B.; & Smith, M.L. (1981). Meta-analysis in
Social Research. Beverly Hills, CA: SAGE.

Rudner, Lawrence, Gene V Glass, David L. Evartt, and Patrick J.
Emery (2000). A user's guide to the meta-analysis of research
studies. ERIC Clearinghouse on Assessment and Evaluation,
University of Maryland, College Park.

http://glass.ed.asu.edu/gene/resume2.html


On Thu, Mar 11, 2010 at 9:56 PM, Stevan Harnad <amsciforum@gmail.com> wrote:

> On Thu, Mar 11, 2010 at 12:17 PM, Philip Davis
> <pmd8@cornell.edu> wrote:
>
>> Stevan,
>> In my critique of this review today (see: http://j.mp/d91Jk2 ), I commented
>> on the inappropriate use of meta-analysis to the empirical OA citation
>> studies:
>>
>> "Meta-analysis is a set of powerful statistical techniques for analyzing the
>> literature. Its main function is to increase the statistical power of
>> observation by combining separate empirical studies into one über-analysis.
>> It's assumed, however, that the studies are comparable (for instance, the
>> same drug given to a random group of patients with multiple myeloma), but
>> conducted at different times in different locales.
>>
>> This is not the case with the empirical literature on open access and
>> citations. Most of the studies to date are observational (simply observing
>> the citation performance of two sets of articles), and most of these use no
>> statistical controls to adjust for confounding variables. Some of the
>> studies have focused on the effect of OA publishing, while others on OA
>> self-archiving. To date, there is still only one published randomized
>> controlled trial.
>>
>> Conducting a meta-analysis on this disparate collection of studies is like
>> taking a Veg-O-Matic to a seven-course dinner. Not only does it homogenize
>> the context (and limitations) of each study into a brown and unseemly mess,
>> but it assumes that homogenization of disparate studies somehow results in a
>> clearer picture of scientific truth."
>>
>> --Phil Davis
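
[For readers unfamiliar with the pooling step Davis describes, here is a minimal sketch of fixed-effect inverse-variance meta-analysis. The effect sizes and standard errors are hypothetical, invented purely to show the mechanics; Cochran's Q is included because it is the standard check on the very comparability assumption at issue in this thread:]

```python
# Minimal fixed-effect inverse-variance meta-analysis on made-up
# per-study effects (e.g. log citation ratios) and standard errors.
import math

effects = [0.40, 0.25, 0.60, 0.10]   # hypothetical per-study effect sizes
ses     = [0.10, 0.15, 0.20, 0.05]   # hypothetical standard errors

# Each study is weighted by the inverse of its sampling variance, so more
# precise studies dominate the pooled estimate.
weights   = [1 / se**2 for se in ses]
pooled    = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q: if the studies really estimate one common effect, Q should be
# close to its degrees of freedom (k - 1). A much larger Q signals that the
# studies disagree more than sampling error allows -- i.e. that the
# homogeneity assumption Davis questions does not hold.
q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))

print(f"pooled = {pooled:.3f} +/- {pooled_se:.3f}, "
      f"Q = {q:.1f} (df = {len(effects) - 1})")
```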