
Re: Libraries and archiving (Re: RE: If electronic is to replace paper)



Ted Freeman wrote:

> I agree with Mr. Meyer about libraries acting as useful archives for
> thousands of online journals. The reality is that libraries can "archive"
> or store data in various formats currently being used by publishers to
> deliver journals, such as PDF, HTML, SGML/XML, Postscript, TeX, plain
> ASCII text, etc. (some of which, as Meyer points out, will cease to be
> usable over time).

Yes, it is possible for libraries to maintain large quantities of licensed
archival data, especially if it is in non-proprietary file formats like
SGML/XML or ASCII. Many libraries are beginning to digitize collections
and build infrastructure capable of supporting complex document types.
But this sort of infrastructure is not yet commonplace.

> But can they afford and do they have the expertise to
> build, maintain and refresh the systems to integrate and deliver all of
> this data effectively to their patrons, particularly given the variety of
> SGML/XML DTDs and searching and linking algorithms involved in the
> publishers' delivery systems?

There is a difference between archiving content and delivering it, and
there are vendors that provide software enabling the kind of integration
and delivery mentioned above. OhioLink and the CIC are doing this, and
there will be others as well; however, these will be the exception, not
the rule. Many publishers will license their data alone, as long as you
can guarantee that you are controlling access and use per the terms of
the license.

> "Getting the content out to market in a
> reasonably durable format," as an earlier arguer put it, is still what the
> publishers are doing when they build elaborate full-text journal web sites
> using an SGML database. As it happens, they're also building in some cases
> impressive archives and universal access points at the same time,
> something only libraries were able to do effectively in the world of
> print.

Again, there are libraries that are building impressive full-text
collections using local materials combined with licensed proprietary
information. I think that many publishers will continue to license just
the data, as long as they can be assured that their content is being
protected according to license terms.

> But publishers are not going to give libraries the proprietary
> source code driving these sites that has cost them in some cases hundreds
> of thousands if not millions of dollars to create, and which would be
> difficult to assimilate and to integrate by a third party in any case.

They will not give away their search engines and other tools, but some
already license their proprietary software. Libraries that want to
license the data alone would likely already have information systems
that support current file formats and document standards (as defined by
non-proprietary specifications such as SGML/XML), or would be willing
and able to build such systems.

I see nothing wrong with "perpetual access" clauses, or with asking
vendors to show due diligence with respect to refreshment and migration
of content as well as the supporting hardware/software infrastructure
(particularly storage media). But I fear that our digital content is
much more fragile than our printed information, as is evidenced, I
think, by such technological disasters as Y2K. Anyone who isn't
convinced that long-term retention issues are of vital concern has a
very, very short memory. Many of us around the country spent large
quantities of time managing Y2K compliance efforts, and that work
brought us face to face with some very troubling long-term retention
issues. So the archiving of digital content (and its supporting
software) is an issue for whoever has content they hope to make
available into the future.
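As one concrete illustration of the "refreshment" due diligence mentioned
above (this is my own sketch, not any particular vendor's or library's
system): an archive can store a manifest of checksums for its files and
periodically re-verify them, so that silent media degradation is caught
before content is migrated forward. The function names and manifest shape
here are assumptions for the example.

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Compute a SHA-256 digest of one archived file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(archive_dir: str, manifest: dict) -> list:
    """Compare current digests against a stored manifest of
    {filename: expected_digest}; return files that no longer match."""
    damaged = []
    for name, expected in manifest.items():
        if checksum(Path(archive_dir) / name) != expected:
            damaged.append(name)
    return damaged
```

A library running such an audit on a schedule would know which files need
to be restored from a second copy, rather than discovering the damage only
when a patron requests the document years later.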

Mark McFarland
Head, Digital Library Services Division
University of Texas at Austin
m.mcfarland@mail.utexas.edu