Last week I attended the 30th annual NASIG conference, presided over by our own Steve Kelley, who looked more and more carefree as the conference progressed and he got closer to handing over the presidential gavel. Well, at NASIG the outgoing president actually receives a gavel; the new one usually gets a hat. This was also the first conference under the newly official name "NASIG" (no longer the "North American Serials Interest Group").
The presentations at this year’s NASIG conference (or at least, the presentations I attended) seemed to steer away from “how we done it good” and focused instead on “here’s what we learned from looking at the data.” The following synopses are taken primarily from my notes, so I apologize for any misrepresentation.
- Marlene van Ballegooie, from the University of Toronto, spoke about the OCLC Knowledgebase (OCLC KB), which is designed to reduce the time librarians spend managing e-resource holdings. Rather than the library having to communicate to the KB provider which journals they subscribe to from which publishers, the publisher sends a holdings file directly to the knowledgebase. Van Ballegooie attempted to assess the effectiveness of the service by comparing each load against information she obtained directly from the publishers. Results of course varied by publisher, but common problems were irregular data loads, and a time lag between a title’s availability at the publisher site and its activation in the KB. Also, any local changes get overwritten in the next data load. But the presenter concluded that the method does have potential for saving time, especially with custom packages or aggregator platforms where manual selection is necessary (such as e-book providers).
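The kind of load-by-load assessment van Ballegooie described amounts to diffing two title lists. As a rough illustration (not her actual methodology, and with invented data), a comparison by ISSN might look like this:

```python
# Hypothetical sketch: compare a publisher-supplied holdings list against
# what actually appears in the knowledgebase, keyed by ISSN.

def compare_holdings(publisher_titles, kb_titles):
    """Return titles missing from the KB and titles the KB lists in error.

    Both arguments are dicts mapping ISSN -> title string.
    """
    missing_from_kb = {issn: t for issn, t in publisher_titles.items()
                       if issn not in kb_titles}
    extra_in_kb = {issn: t for issn, t in kb_titles.items()
                   if issn not in publisher_titles}
    return missing_from_kb, extra_in_kb

# Invented example data: one title has not yet been activated in the KB.
publisher = {"0000-0001": "Journal of Examples",
             "0000-0002": "Annals of Placeholders"}
kb = {"0000-0001": "Journal of Examples"}

missing, extra = compare_holdings(publisher, kb)
print(missing)  # titles awaiting activation in the KB
print(extra)    # titles the KB claims but the publisher does not
```

The time lag van Ballegooie observed would show up as titles sitting in `missing` until the next data load.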
- Gabrielle Wiersma and Esta Tovstiadi, from Univ. of Colorado at Boulder, presented an analysis of approximately 100 randomly-selected e-books published in 2014 across multiple platforms. Using a rubric based on a tool developed by the Center for Research Libraries, they assessed 16 aspects of the user experience, such as metadata, linking, pagination, etc. Some examples from their findings:
Metadata – some platforms include subtitles, others do not; “date” may refer to date published, copyright date, or date posted online; editors are sometimes named as authors; etc. In none of the cases they examined did the platform-generated “MLA citation” actually match MLA format.
Searching – different platforms may return search results at the word, page, or chapter level. Most (61%) were chapter-level, which is probably the least useful for searchers.
Pagination – system page numbers often don’t match the page number displayed on the PDF (probably due to how front matter is counted); in EPUB format, page numbers are often missing altogether.
The presenters showed examples of how search results may vary wildly from one platform to the next. This can be caused by search functionality, such as auto-stemming, or how the platform treats hyphenation, or whether it defaults to AND or OR searches. They also found problems caused by OCR spacing errors — e.g. “Japa nese” or “infl uential”, or words joinedtogether withouta space.
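OCR spacing artifacts like "Japa nese" or "infl uential" are detectable with a simple heuristic: if two adjacent tokens are not words on their own but their concatenation is, the pair is probably a split word. A toy sketch (the word list and function are mine, purely for illustration):

```python
# Hypothetical sketch of flagging OCR spacing artifacts such as
# "Japa nese" or "infl uential" in extracted e-book text.

VOCAB = {"japanese", "influential", "the", "was", "novel"}  # toy word list

def find_split_words(text):
    """Return rejoined words whose halves appear as separate tokens."""
    tokens = text.lower().split()
    hits = []
    for a, b in zip(tokens, tokens[1:]):
        # Neither half is a word, but the concatenation is: likely OCR split.
        if a not in VOCAB and b not in VOCAB and (a + b) in VOCAB:
            hits.append(a + b)
    return hits

print(find_split_words("the Japa nese novel was infl uential"))
# → ['japanese', 'influential']
```

A real platform would need a full dictionary and a complementary check for the opposite error (words joined together without a space), which is a harder segmentation problem.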
See their slides on SlideShare for side-by-side examples.
- Michael Matos of American University shared his analysis comparing library journal holdings to works referenced in faculty publications. The goal was to use the data to demonstrate the extent to which faculty rely on the library for their research. I confess that his complex methodology lost me. Next steps include looking more closely at the referenced materials not held by the library, then comparing those against ILL data (thus demonstrating that the researcher also used the library for those materials).
- In “Strategies for Expanding eJournal Preservation,” Shannon Regan, from Columbia University, described a Mellon Foundation grant-funded project to identify e-journals that are not currently being preserved by a trusted third-party repository, learn why they are not being preserved, and explore ways to get them preserved. I was kind of surprised, and kind of not-so-much, at the amount of content—even from major publishers—not being preserved. As for reasons why, I came away with the impression that the most prevalent one is a question of rights and permissions. In some cases, a publisher may not have secured rights from the authors; in other cases, publishers (typically smaller ones) have no understanding of the need or the process for preserving content, or may fear a loss of control over it (thinking, for example, that permitting an archiving agency to preserve the content would be equivalent to making the journal open access). Other times, the step of preservation may just slip through the cracks. Regan recommended that librarians make preservation a part of the conversation with publishers, vendors, consortia, faculty, and other stakeholders.
- In a fun presentation, Kristen Garlock of JSTOR and Eric Johnson of the Folger Shakespeare Library described some projects and products developed as an outgrowth of usage data. The first was JSTOR Classroom Readings (http://labs.jstor.org/readings), a free tool intended to give educators a list of articles for core courses. Developers had originally wanted to gather college syllabi and curate a list of articles from those, but there were too many obstacles. So instead they looked at usage data for signs of “teaching use” (short bursts of use at a single institution). Though the methodology is not perfect and the product not yet considered final, Garlock seemed pleased with both. Johnson talked about (among other projects) a JSTOR tool called Understanding Shakespeare (http://labs.jstor.org/shakespeare). The user can select a play, then choose a line in that play and get a list of articles in JSTOR that quote that line. Again, not complete (only 12 plays are included so far), but a pretty nifty tool.
In other sessions, I learned a few new Excel functions to try out, plus a couple of things to try with CORAL. I was also pleased to hear EBSCO Chief Strategist Oliver Pesch say very plainly and repeatedly that EBSCO supports customer choice and is actively seeking ways to optimize customer choice. I felt encouraged when he said that “no one vendor can offer libraries all the resources” they need, and that if you want to use an EBSCO product for one part of your workflow and a competitor’s product for another part, the workflow should not only work, it “should be optimized.”
Finally, my favorite quotes from the conference:
Scott Vieira, Rice Univ., referring to typical functionality in e-resource management systems: “Forcing the acquisition of e-resources into a linear workflow is like trying to train tortoises to walk in a straight line.”
Marcella Lesher, St. Mary’s Univ., about a journal weeding project (I’m probably paraphrasing): “We’re talking here about the care and feeding of print resources … although at this point we’re probably starving them to death.”