At this year’s NASIG Conference, there were plenty of sessions on practical things (which I’ll discuss in a bit), but there were also several apt phrases and interesting concepts that jumped out at me. The first phrase came from a session on patron-driven access, where the speakers quoted Peter McCracken of Serials Solutions, who said, “What is interlibrary loan, but patron-driven access?” I thought this was a nice way to show that patron-driven access isn’t so foreign or new to libraries; we’ve been doing it for a long time, just under a different name.

The second interesting concept came from one of our vision speakers, Paul Duguid, a professor at the University of California-Berkeley School of Information. He spoke about the importance of branding information in the information supply chain, as it supplies context and validation for information. When someone in the audience said that as librarians, we are experts in information (an old saw if ever there was one), Duguid responded that actually we’re experts in information structures. He went on to say that that’s one thing we have over Google, because an algorithm isn’t a structure. I found that very interesting.

The third thought-provoking concept came in a session on getting the most out of negotiation. The speakers discussed the Samurai idea of “ordered flexibility,” which is essentially the idea of studying and developing a plan, but being prepared to deviate from that plan as necessary to deal with changing conditions and opportunities. I really like this idea of “ordered flexibility,” as it fits with my approach to planning large-scale projects (if you develop a thorough plan, you have more room to adapt to changing conditions on the fly).
Now, as for the meat-and-potatoes of the sessions I attended, the most interesting one was called “Continuing Resources and the RDA Test,” where representatives from the three US national libraries (Library of Congress, National Agricultural Library, and National Library of Medicine) spoke about the RDA test that had been conducted over the previous year and a half or so. This session was on June 5, so it took place before the report came out this week, and the speakers were very good about not tipping their hand (the national libraries have decided to delay implementing RDA until January 2013 at the earliest, but still plan to adopt the code). The session covered the details of how the test was conducted and the data analyzed. The 26 test libraries were required to produce four different sets of records. The most useful set was based on a group of 25 titles (10 monographs, 5 AV items, 5 serials, 5 integrating resources) that every tester was required to catalog twice, once using AACR2 rules and once using RDA rules. The national libraries then created RDA records for each of the titles, and edited them until they were as close to “perfect” as possible. During the analysis phase, the test sets for each item were compared against the national libraries’ benchmark RDA records, and scored according to how closely the records matched. The data produced during the RDA test will eventually be made available for testing by interested researchers (maybe you, Dr. Mitchell?).
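To give a rough sense of what “scored according to how closely the records matched” could look like, here is a hypothetical sketch (not the test’s actual methodology, whose scoring rules weren’t spelled out in the session): a test record and a benchmark record are treated as dictionaries of MARC-style fields, and the score is the fraction of benchmark fields the test record reproduces exactly. The field tags and values are invented for illustration.

```python
def score_record(test_rec, benchmark_rec):
    """Return the fraction of benchmark fields the test record matches exactly."""
    if not benchmark_rec:
        return 0.0
    matched = sum(
        1 for field, value in benchmark_rec.items()
        if test_rec.get(field) == value
    )
    return matched / len(benchmark_rec)

# Invented example: the test record gets the title and content type right
# but records a different frequency, so it matches 2 of 3 benchmark fields.
benchmark = {"245": "Example serial title", "310": "Quarterly", "336": "text"}
test_rec = {"245": "Example serial title", "310": "Monthly", "336": "text"}
print(round(score_record(test_rec, benchmark), 2))  # → 0.67
```

A real comparison would of course need to handle subfields, repeatable fields, and judgment calls about “close enough” values, but the basic shape, field-by-field comparison against a vetted benchmark, is the same.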
Another interesting session was conducted by Carol McAdam of JSTOR and Kate Duff of the University of Chicago Press. JSTOR, of course, provides backfiles to numerous journals, but they have begun working with certain partners to publish brand new content on the JSTOR platform. They are still trying to iron out all the details in their pricing model, but this move makes a lot of sense, it seems to me, especially for university presses. If all their material is eventually going to wind up residing on the JSTOR platform anyway, why not just make the new issues available with the backfiles to subscribing institutions?
I also saw a presentation by Rafal Kasprowski of Rice University about IOTA, a NISO initiative designed to measure the quality of OpenURL links. Briefly, here’s how OpenURLs work: when a patron clicks on a citation, a Source OpenURL is generated, which, in theory, contains all of the information necessary to adequately describe the source. This Source OpenURL is sent to a Link Resolver, which consults a knowledge base to find the holdings for the library. If the library holds the item, the Link Resolver generates a Target OpenURL, which opens the full text. Prior to the development of IOTA, there was no way to test the reliability of this data, but IOTA tests the Source OpenURL and provides a standard for how much information it should contain in order to properly identify a resource.
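For the curious, a Source OpenURL is just a URL with key-value pairs describing the citation (the KEV format defined in ANSI/NISO Z39.88-2004). Here is an illustrative sketch of building one; the resolver base URL and the citation values are invented for the example, and a production link would carry more metadata than this.

```python
from urllib.parse import urlencode

def build_source_openurl(resolver_base, citation):
    """Build a KEV-format Source OpenURL (Z39.88-2004) for a journal article."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article metadata format
        "rft.jtitle": citation["journal"],
        "rft.atitle": citation["article"],
        "rft.volume": citation["volume"],
        "rft.spage": citation["start_page"],
        "rft.date": citation["year"],
        "rft.issn": citation["issn"],
    }
    return resolver_base + "?" + urlencode(params)

# Invented citation and resolver address, for illustration only.
citation = {
    "journal": "Serials Review",
    "article": "An Example Article",
    "volume": "37",
    "start_page": "123",
    "year": "2011",
    "issn": "0098-7913",
}
url = build_source_openurl("https://resolver.example.edu/openurl", citation)
```

The more of those `rft.*` elements a content provider actually supplies, the better the link resolver can identify the item, which is exactly the completeness that IOTA’s analysis measures.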
I also attended a session by Amanda Yesilbas of the Florida Center for Library Automation, who discussed how FCLA uses a Drupal-based website in place of an ERMS. I can’t say that I fully understood everything she said, but it might be an inexpensive, low-maintenance alternative to implementing a full-blown ERMS here at ZSR.
This was a busy conference for me. In addition to attending the last meeting of my two-year term as a member of the NASIG Executive Board, I started working on a new NASIG committee. And the conference was in St. Louis, my hometown, so I came in early with Shane so he could spend time with his grandparents, aunts, uncles, and cousins. I also took him to his first-ever Major League baseball game, and the Cardinals beat the Cubs handily.
1 Comment on ‘Steve at NASIG 2011’
Ordered flexibility: love it.