
I attended the 23rd annual North Carolina Serials Conference in Chapel Hill on March 14, presented by the NCCU School of Library and Information Sciences. The keynote address, “Altmetrics: Finding Meaningful Needles in the Data Haystack,” was a fast-paced and informative presentation by David Crotty, Senior Editor at Oxford University Press. Arguing that we do a poor job of measuring the impact of a scholar’s work, he proposed moving beyond the conventional measure of a publication’s influence, the Impact Factor, in favor of alternative metrics, i.e., altmetrics. Advanced technologies now permit the tracking of individual published papers in order to assess the impact of a scholar’s research. He described the Impact Factor as “one metric to rule them all,” and argued that it is slow, difficult to compare across disciplines, favors review articles over primary literature, creates a ranking system skewed by a small number of highly cited articles, and gives a false impression of accuracy. In short, it is an archaic practice.

Altmetrics.com, Plum Analytics, and ImpactStory are examples of altmetrics sites that track citations as well as social media captures and mentions. PLOS and Nature have incorporated this approach, making it possible to look at an individual paper rather than averaging it in with all its neighbors. However, he acknowledged the challenge of separating the signal from the noise in altmetrics, noting that popularity is not to be equated with quality, nor attention with impact. A grand finale of thought-provoking questions drew the session to a close. Do altmetrics favor researchers skilled at social networking (he noted that James Watson, of DNA fame, has no Twitter account), with the result that sensationalism (e.g., the sex habits of fruit bats) and navel-gazing are encouraged in the resulting echo chamber? When does a good-faith effort to legitimately disseminate information become gaming the system (for instance, marketing one’s publications all over blogs, Facebook, Twitter, and other social media)? Does social media count for impact when we spend an average of 15 seconds on a web page? Does marketing become a core activity, so that researchers must change their behaviors and self-marketing becomes unduly important? To what extent do altmetrics reward non-research efforts? Finally, he proposed that altmetrics measure attention more than quality; that can be useful, but it remains a challenge for human judgment to assess whether a paper lives up to the standing of the journal in which it appears.
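To make the contrast with journal-level averages concrete, here is a minimal sketch of how one might pull article-level attention data in Python. It assumes Altmetric's free public REST API, the DOI is a hypothetical placeholder, and the response fields are illustrative of the per-paper signals such services expose; treat it as a sketch rather than a definitive client.

```python
# A minimal sketch, assuming Altmetric's free public API
# (https://api.altmetric.com/v1/doi/<doi>); the response fields read
# below are illustrative of the per-paper signals such services expose.
import requests

def fetch_attention(doi: str) -> dict:
    """Fetch article-level attention data for a single DOI."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    resp.raise_for_status()  # a 404 simply means no attention recorded yet
    return resp.json()

if __name__ == "__main__":
    data = fetch_attention("10.1371/journal.pone.0000000")  # hypothetical DOI
    # Per-paper signals, rather than a journal-level average:
    print("Attention score:", data.get("score"))
    print("Tweets:", data.get("cited_by_tweeters_count"))
    print("News mentions:", data.get("cited_by_msm_count"))
```

The point of the exercise is the unit of analysis: each request returns attention data for one paper, not an average across everything a journal published.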

The break-out session I attended in the afternoon was an NC LIVE talk presented by Emily Guhde, Online Services Librarian, and Jill Morris, Assistant Director, entitled “Making Usage Data Meaningful: A Consortium’s Attempt to Better Understand eResource Usage.” They described a benchmarking project that NC LIVE began in April 2012 to study electronic resource use among member libraries within a variety of library peer groups. They noted two different perspectives: that of the consortium, which uses data to make decisions about NC LIVE services and to decrease cost per use, and that of the libraries, which want to know what kind of use they should expect to see locally and what they can do to improve use of resources. The study had four objectives: to identify peer groupings of North Carolina libraries; to identify data points for measuring the use of selected databases (Academic Search, MasterFILE and Wall Street Journal, Learning Express Library, and Simply Map); to develop a framework for creating usage benchmarks within each peer group; and to analyze and report the qualities of high-use libraries. They considered access and authentication, content and collections, awareness and outreach, community characteristics, and library characteristics. In their analyses they used cross-tabulations, difference-of-means tests, and multiple regression, as sketched below. They found, repeatedly, that no one library is at the top or the bottom for all resources, that database use varies widely even among peer institutions, and that flexible peer groups may be more useful than permanent ones. To single out four-year college and university libraries: in the top third of peer groups, 94% authenticate with a local proxy (rather than with passwords), 82% use direct links to NC LIVE resources (rather than linking to the NC LIVE website or displaying the NC LIVE search box), 53% have a high number of librarians per 1,000 FTE, and 41% have NC LIVE Committee representatives. All of this informs planning for future NC LIVE services related to usage data and for the resource selection that takes place every three years, which they are doing now for 2015-2017.
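For readers curious what the statistical side of such a benchmarking study might look like, here is a minimal sketch of a difference-of-means test and a multiple regression on per-library usage data. The file name and every column name are hypothetical stand-ins, not NC LIVE's actual variables.

```python
# A minimal sketch of a benchmarking analysis: a difference-of-means test
# and a multiple regression of usage on library characteristics.
# "library_usage.csv" and all column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("library_usage.csv")  # one row per member library

# Difference of means: do proxy-authenticating libraries see higher use?
proxy = df.loc[df["local_proxy"] == 1, "uses_per_fte"]
other = df.loc[df["local_proxy"] == 0, "uses_per_fte"]
t, p = stats.ttest_ind(proxy, other, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")

# Multiple regression: usage per FTE against several characteristics of
# the kind the presenters mentioned (authentication, linking, staffing,
# committee representation).
model = smf.ols(
    "uses_per_fte ~ local_proxy + direct_links"
    " + librarians_per_1000_fte + committee_rep",
    data=df,
).fit()
print(model.summary())
```

A regression of this shape is what lets an analyst say which library characteristics are associated with higher use once the others are held constant, rather than comparing raw totals across very different institutions.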

The closing session was presented by Donna Tolson, Library Strategist at the University of Virginia, and Peggy Myers, Director of Library Development at UNC-CH, and was entitled “Telling Your Story: Effective Packaging of Assessment Data.” Tolson directs assessment staff, focusing their work on the strategic priorities of the library and the university. She pointed out that assessment is not just measurement: one has to make something of it. She also quoted Lord Kelvin’s adage, “If you cannot measure it, you cannot improve it.” At Alderman Library and the University of Virginia library system, internal management and library staff are the greatest creators and consumers of data, and she re-packages that information depending on her audience. For instance, as she put it, managers “speak library”: they are interested in details and value the big picture, so assessment should stay in that language. The Dean translates library, is interested in implications, and values strategic data. The profession compares and contrasts: it speaks library, regards consistency as paramount, values new approaches to shared issues, and agrees on definitions so that everyone is talking about the same things. Vendors speak library, are interested in controlling data, and value our business. Thus the selection and packaging of data vary by audience and purpose, and they largely determine the impact and usefulness of the data.

The NC Serials Conference is outside of my usual peregrinations, but I found all the sessions interesting and thought-provoking, and the entire event enjoyable. ZSR Library was well represented, with Carol, Chris, Derrik, Jeff, and Steve all in attendance.