Here are highlights from the most important sessions I attended at Charleston:

Derrik has already covered the first session on discovery services. I won’t repeat what he said, except to link to the slides. I’ll also point out that we were one of the 149 libraries that gave approval to be studied (slide 10), but I don’t know if we were ultimately selected.

In a related presentation on Friday, Bruce Heterick from JSTOR discussed their efforts to get JSTOR content to display appropriately in discovery services. JSTOR found that usage plummeted after some schools implemented particular discovery layers. (My opinion: students will frequently use JSTOR on name recognition alone, even when it’s not the optimal source for their topic. If the discovery service delivers more appropriate, up-to-date content instead, so much the better.) Heterick said that many discovery services depend heavily on subject metadata for relevancy ranking; JSTOR does not include that metadata in its records, and producing it would be expensive. (Just a thought: many JSTOR articles are indexed with subject metadata in A&I databases like MLA, which are sometimes included in the same discovery service. How could that metadata be harvested appropriately?)

Librarians from Ferris State reported on how they processed titles they had committed to retain for their Michigan consortium. They used a locally defined 912 field in the MARC record to indicate the reason for retention. Missing books and those in poor condition took extra time to process, since staff needed to find another consortium member who would take responsibility for keeping the title.
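As a minimal sketch of what that kind of batch edit might look like, the snippet below appends a 912 retention note to each record in a file of MARC bibs using Python’s pymarc library (5.x). The subfield layout, wording, and file names are my own invention for illustration, not Ferris State’s actual practice.

    from pymarc import MARCReader, MARCWriter, Field, Subfield

    # Hypothetical layout for a locally defined 912 retention field:
    #   $a = commitment note, $c = reason, $d = retention period
    retention = Field(
        tag="912",
        indicators=[" ", " "],
        subfields=[
            Subfield(code="a", value="Committed to retain: Michigan shared print agreement"),
            Subfield(code="c", value="Last circulating copy in consortium"),
            Subfield(code="d", value="Retain through 2031-12-31"),
        ],
    )

    # Read each bib record, append the 912 field in tag order, and write it out.
    with open("bibs.mrc", "rb") as infile, open("bibs-with-912.mrc", "wb") as outfile:
        writer = MARCWriter(outfile)
        for record in MARCReader(infile):
            record.add_ordered_field(retention)
            writer.write(record)
        writer.close()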

Kristin Calvert from Western Carolina reported on a project to move all their usage stats into EBSCO Usage Consolidation (hereafter EUC). Before this project, it took them four full working days each year to collect e-journal stats. I know Derrik would identify with some of the frustrations that Calvert expressed. After the move to EUC, it took…

  • 2-3 weeks to set up (I’m not sure if non-stop work is implied here.)
  • 8 hours for initial cleanup
  • 4-6 hours for quarterly loads (could do this annually to save time)
  • <1 hour/month for cleanup

The product includes an “Exceptions” list of journals with some kind of mismatch in the system. WCU staff had to reconcile the exceptions, but once they did, EUC remembered each fix so the same exception wouldn’t pop up again. The screenshot that Calvert showed had zero exceptions. Calvert concluded that the project was worthwhile for the efficiencies ultimately gained.

On Saturday, two librarians from Bucknell discussed how they dropped their approval plan and went with print DDA (demand-driven acquisition) for everything. They use WorldCat/WorldShare for their catalog and discovery layer, so they could accomplish this without loading (or deactivating) any records in their system. Patrons click a ‘Get It’ button (powered by GIST), and a librarian decides whether to fulfill the request by purchase or by ILL. In the end, Bucknell ordered one-third fewer titles, spent 50% less, and saw ILL decrease. They took this path because their approval books circulate at a low rate. They also weed aggressively (12K new books and 6K deletions per year), so their collection is a revolving door. They pointed out that their library focuses on the undergraduate curriculum, not research, so WFU may not want to pursue this idea. One point resonates with me, though: they reminded us that ‘efficient’ does not necessarily mean ‘effective.’ Approval plan ordering is the most efficient way to get books, and e-book DDA is even more efficient at delivery. However, are they as effective at getting users to the content they need, in the format they want?