Below is a summary of the programs I attended at the Computers in Libraries 2009 conference. I attended all but one of the digital library track programs.
Digital Preservation, E-government, and ERM
This presentation had two parts. The first was given by two National Defense University librarians, Julie Arrighetti and Trisha Bachman, who demonstrated the MERLN (Military Education Research Library Network) system. NDU keeps archives of web sites and documents relevant to military policy, and the demo focused on the Iraq War Collection. MERLN includes a set of online research guides called MIPALs: subject guides that sift through hundreds of U.S. policy statements and commentaries from scholarly journals and think tanks, linking to the archived web pages and documents with the best and most relevant information. The web pages are saved as PDFs via Adobe Acrobat Professional and imported into CONTENTdm. NDU developed its web archiving process in response to presidential transitions: when a new President takes office, the old administration's policy statements come down from the White House website and may disappear entirely. The librarians have faced the added challenge of collocating Barack Obama's policy statements, since he has taken a more informal blogging approach to the White House web page, and formal policy statements usually have to be culled from elsewhere. They aim to be a niche version of Google, providing the most relevant military policy information, and they were looking to model their interface on the University of Wisconsin-Milwaukee digital collections page.
The second part of the presentation was given by Anna Creech and Cindi Trainor, from the University of Richmond and Eastern Kentucky University respectively. They discussed opening ERM statistics and data to end users, rather than just library staff, who currently use ERM for ILL statistics, tracking acquisitions, and similar tasks. In the case of electronic journals, once users have searched for an article, they are often confused about where to find it: they may get four or more entries in a list, and not all of them are full-text copies of the article. The presenters felt that ERM data could help re-sort the search output, showing perhaps the most frequently chosen of the various article options; as with Google, one may assume that the highest-use choice is the most relevant one. They also discussed pairing ERM data with free tools such as wikis, shared documents, note fields in A-Z lists, and LibGuides, which could be used to add management information to the databases and push it out to users.
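The re-sorting idea the presenters described can be sketched in a few lines. This is a hypothetical illustration, not code from any real ERM product: the function name, provider names, and click counts are all invented for the example.

```python
# Hypothetical sketch: re-rank a link resolver's list of full-text options
# for an article by how often users have historically chosen each one,
# the way ERM usage data might be applied. All names are illustrative.

def rerank_by_usage(options, usage_counts):
    """Sort link options so the most frequently chosen appears first.

    options      -- list of provider names returned by the link resolver
    usage_counts -- dict mapping provider name -> historical click count
    """
    # Providers with no recorded usage default to 0 and sort last.
    return sorted(options, key=lambda o: usage_counts.get(o, 0), reverse=True)

options = ["Aggregator A", "Publisher site", "Aggregator B"]
clicks = {"Publisher site": 412, "Aggregator A": 57}  # no data for Aggregator B

print(rerank_by_usage(options, clicks))
# ['Publisher site', 'Aggregator A', 'Aggregator B']
```

The Google analogy from the talk maps onto the sort key: the option users pick most often floats to the top of the list.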
Digital Rights Management
I had hoped this session would explain a bit more about the specific limitations of DRM; instead, it was more of a general overview. When copyright law was mostly about books, the existing laws worked very well. The two doctrines of importance are First Sale (once you buy a book, you have the right to lend it or resell it) and Fair Use (which allows a photocopy of the item to be made for research use, etc.). Photocopying a whole book does not retain the quality of the original, and a print original cannot be modified.
Digital media changed a lot of that: a digital work is not only easy to copy, it is easy to modify. Digital Rights Management is actually a cluster of laws, technologies, and licensing practices that extend to the publisher, the content provider, and the product itself. This ends up extending author and publisher rights far beyond the intent of copyright law. It increases the responsibility of librarians to enforce DRM policies, even though those policies do not have the force of law, and it decreases access to intellectual works. The conflict is difficult to resolve, but the main point to remember is that Digital Rights Management should not exceed the intent of copyright law. The presenters suggested four possible paths for the future: adopting new DRM policies, consistent across publishers, that can be enforced within the bounds of present copyright law; amending copyright law to include fair use in a DRM environment; encouraging new licensing arrangements such as Creative Commons; and educating the public about the social value of the free exchange of ideas.
Moving Libraries to the Cloud
This was an amazing presentation by Roy Tennant and Andrew Pace from OCLC. Roy explained that "cloud" is a metaphor for virtualized resources available on the Internet. These can include infrastructure as a service (hardware capacity), platform as a service, and applications. The benefits of cloud computing are low barriers to entry (you don't need to be a tech expert or have a dedicated tech staff), pay-as-you-go pricing for what you actually use, automatic software upgrades, and freeing whatever tech staff you do have to spend their time on things more useful than maintaining and upgrading machines. The drawbacks are that you give up some control and become reliant on network connectivity and speed. He gave examples of cloud computing in business, in libraries, and in "machine services". The latter consist of APIs that add an XML layer to the application, separating the application from a prefabricated presentation interface. A number of library-related APIs are available at http://worldcat.org/devnet. Some of them require an OCLC subscription, others are completely free, but some of them are very cool. I liked the Facebook WorldCat citations application (which lets you search WorldCat from Facebook or your mobile phone and get citations in any format you choose), and Compare Everywhere, a phone application that lets you make price comparisons wherever you are looking at products (in the store, in the library, etc.). Andrew Pace added to Roy's discussion by talking about webscale library management: getting your local information out into a "webscale" environment, giving your users many more choices, and providing many of the benefits of cloud computing.
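The "machine services" idea above, an API that returns raw XML and leaves presentation to the consuming application, can be sketched briefly. The XML below is invented for illustration; real WorldCat API responses have their own schemas and generally require an OCLC developer key, so nothing here reflects an actual OCLC endpoint.

```python
# A minimal sketch of consuming an XML API layer: the service hands back
# structured XML, and the application decides how to present it. The
# sample response below is made up for this example.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """<results>
  <record><title>Cataloging Basics</title><year>2008</year></record>
  <record><title>Cloud Computing</title><year>2009</year></record>
</results>"""

def extract_titles(xml_text):
    """Pull titles out of the XML layer; display is left to the caller."""
    root = ET.fromstring(xml_text)
    return [rec.findtext("title") for rec in root.findall("record")]

print(extract_titles(SAMPLE_RESPONSE))
# ['Cataloging Basics', 'Cloud Computing']
```

The separation Roy described is visible here: the same XML could feed a web page, a Facebook application, or a mobile phone client, each with its own interface.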
Developing a Sustainable Library IT Environment
The presenters, from the Metropolitan Museum of Art library, walked through cautionary examples of unsustainable strategies for sustaining library technology. The first was the "case of the vanished programmer": someone is hired to do custom programming in a language no one else on staff knows, and then leaves. It is better to use whatever is already available and known. Another was "web design by committee": everyone thinks they are "special" and needs a lot of interface customization, when the better course is to work with something proven and easy to use. For these presenters, open source software and social software applications were the way to go; the advantages are a community of support, designs that have already proven successful, and familiarity to patrons and staff. The problem of the "creep of the internal shared drive" was handled with an internal blog where all shared staff documents are posted; everyone on staff can post materials, and no one has to wait for a designated staff person to update or add information. The last topic was working with another IT department within your organization. The example given was an IT department that would not meet with library IT staff or coordinate efforts. Oleg Kreymer (the MMA systems librarian) suggested that this actually gives the library the freedom to do what it wants, and he also recommended informal networking with IT staff.