RE: Article in "Inside HigherEd"
- To: <liblicense-l@lists.yale.edu>
- Subject: RE: Article in "Inside HigherEd"
- From: "Sally Morris \(Morris Associates\)" <sally@morris-assocs.demon.co.uk>
- Date: Wed, 25 Mar 2009 18:40:19 EDT
- Reply-to: liblicense-l@lists.yale.edu
- Sender: owner-liblicense-l@lists.yale.edu
Computers can mine the literature, just as search engines can index it, without it necessarily having to be freely available to human users. It's not the 'free' that's the issue; it's the structure and adherence to standards.

Sally Morris
Email: sally@morris-assocs.demon.co.uk

-----Original Message-----
From: owner-liblicense-l@lists.yale.edu [mailto:owner-liblicense-l@lists.yale.edu] On Behalf Of David Prosser
Sent: 24 March 2009 22:34
To: liblicense-l@lists.yale.edu
Subject: RE: Article in "Inside HigherEd"

Surely, Joe, the answer is simple. Any smart tools that we build to help with the information overload are going to have to have access to the information. Of course you can start with what is licensed by your local library, or what's in the abstract, or what the keywords are. But the tools will work better and more efficiently if they have access to all the literature. (Just as data-mining tools work better with greater access.) And then, if the wondrous tools find something that you think is of interest to you, don't you want access?

David C Prosser
Director, SPARC Europe