
At the beginning of June, I flew out to Palm Springs, California, for the 25th Annual North American Serials Interest Group (NASIG) Conference. As a member-at-large of the NASIG Executive Board, I had to head out on Tuesday, June 1, so I could attend an all-day strategic planning session the following day. Then on Thursday, I had a regular Board meeting in the morning. So I had already done quite a bit of work by the time the conference officially opened on Thursday evening.

The conference sessions began on Friday, June 4, with a vision session from Eric Miller of Zepheira, LLC. Miller is an information research scientist who formerly worked at OCLC and the W3C, and his presentation was called “Linked Data & Libraries.” Miller described the idea of linked data, which involves exposing raw data sets on the Web and making the data manipulable by users. Rather than heaping all data into one database, linked data lets data stay where it is while linking it to other data and exposing it. For example, a user could take data from a spreadsheet and layer it over a map to see whether geographic clusters emerge in the data. Linked data can give us applications that are not just on the web, but of the web. It would require the development, assignment, and use of web identifiers: URLs that identify not just documents but also data elements, abstract entities, ideas, people, and so on. Without web identifiers to serve as primary keys, pieces of data that should be linked may remain disconnected, so the use of web identifiers would require a degree of authority control. According to Miller, a linked data solution would enable human computation, empower users to create their own views of data, and build a community around data in which users create and curate it, but it would have to skip the supermodel idea of trying to build one enormous database with every piece of data in it.
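
To make the idea a bit more concrete, here is a minimal sketch of what publishing linked data might look like, using Python’s rdflib library. The identifiers and names are invented for illustration; none of this comes from Miller’s presentation.

```python
# A small sketch of the linked data idea: each thing gets a web identifier
# (a URI), and simple statements link those identifiers to one another.
# Uses the rdflib library; all identifiers and names below are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/id/")          # hypothetical identifier base

journal = EX["journal/example-serials"]           # web identifier for a journal
publisher = EX["org/example-press"]               # web identifier for its publisher

g = Graph()
g.add((journal, RDF.type, FOAF.Document))         # the journal is a document
g.add((journal, FOAF.name, Literal("Example Journal of Serials")))
g.add((publisher, FOAF.name, Literal("Example Press")))
g.add((journal, EX.publishedBy, publisher))       # the link between the two things

# Serialize as Turtle, one common format for exposing linked data on the web.
print(g.serialize(format="turtle"))
```

The point of the exercise is that nothing lives in a single monolithic database: anyone else who uses the same identifiers can link their own statements to these.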

I next attended a session by Colin Meddings of Oxford University Press called “Digital Preservation: The Library Perspective.” Meddings discussed the findings of a survey of OUP’s customers, as well as a survey of publishers, regarding digital preservation. The most surprising finding was that about a quarter of both the publishers and the libraries surveyed are doing nothing at all about the preservation of digital content. Luckily, the large majority of publishers and libraries are involved with digital preservation through initiatives like Portico, LOCKSS, CLOCKSS, and dark archives. OUP found that digital preservation is important to its customers, but that there is still confusion around the issues. It remains unclear who is ultimately responsible for digital preservation (publishers, libraries, national libraries?) and who should pay for it. The cost of preservation is actually more of a problem than the technical issues. OUP has found that further education and discussion on these issues are needed.

I then attended a session led by Steve Shadle on how catalogers at the University of Washington use their ERM (electronic resource management system). It appears that an ERM can be quite useful in managing the loading of bibliographic record sets. If we have an ERM in our future, that may be something to pursue. I also attended a session on serials industry standards initiatives involving the UKSG (NASIG’s sister organization in the United Kingdom). Ross MacIntyre discussed TRANSFER, a program for smoothing the transfer of journals from one publisher to another; KBART, a standard for knowledge base practices; COUNTER, an initiative for developing usage factors based on the use of journals; and PIRUS, a COUNTER-compliant initiative for developing usage factors for individual articles.
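
As a rough illustration of the journal-level versus article-level distinction behind those usage initiatives, here is a toy sketch with fabricated data; the actual COUNTER and PIRUS codes of practice define real reports in far more detail, and nothing below comes from MacIntyre’s talk.

```python
# Toy illustration of journal-level usage totals (the COUNTER model) versus
# article-level usage totals (what PIRUS adds). The download events are
# fabricated sample data, not real usage statistics.
from collections import Counter

# One (journal, article DOI) pair per download event -- invented examples.
downloads = [
    ("Journal A", "10.1000/a.1"),
    ("Journal A", "10.1000/a.1"),
    ("Journal A", "10.1000/a.2"),
    ("Journal B", "10.1000/b.1"),
]

journal_usage = Counter(journal for journal, _ in downloads)   # per-journal totals
article_usage = Counter(doi for _, doi in downloads)           # per-article totals

print(journal_usage)   # Counter({'Journal A': 3, 'Journal B': 1})
print(article_usage)   # Counter({'10.1000/a.1': 2, '10.1000/a.2': 1, '10.1000/b.1': 1})
```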

On Saturday, I attended the conference’s second vision session, a program by Kent Anderson, publisher of “The Journal of Bone and Joint Surgery” as well as the Scholarly Kitchen website, entitled “Publishing 2.0: How the Internet Changes Publications in Society.” Anderson argued that the development of Web 2.0 technology has changed the nature of publishing. Media on Web 1.0 was the digital version of broadcast: one-way and hierarchical. Web 2.0 technology has made media less a source of information and more a place for conversation, where users can comment on, add to, and enhance information. This leads to a form of organization called heterarchy, in which authority shifts, develops on the fly, is democratic, waxes and wanes, is situational, and fades as problems are resolved. Furthermore, in older publishing models information was scarce, and in a world of scarcity users need an intermediary; in newer publishing models information is abundant, and users need a guide. This process of guiding information users is called apomediation, as opposed to intermediation. Anderson also discussed the fact that consumers increasingly own the infrastructure on which publishers publish (smartphones, iPads, etc.) and can control how published material displays, which limits the control publishers have over the material they produce. Anderson closed with the simple but powerful idea that users are changing, expectations are changing, and publishers must change as well.

After Anderson’s program, I attended a session called “When Jobs Disappear,” led by Sally Glasser of Hofstra University. This session looked at the results of a survey of libraries asking about the effects that eliminating or significantly reducing print serials management tasks has had on positions and employees. It was no big surprise to find that the task most often reduced was the binding of print serials. The survey also found that most libraries assign new tasks to the staff who used to perform the tasks that have been eliminated. Most libraries have plenty of work that needs doing, and eliminating some tasks opens up the opportunity to take on new functions. So we are not alone in facing these sorts of issues.

I followed this session with an update on the activities of CONSER, the cooperative serials cataloging program. Not to bore you with the gory details, but CONSER has changed a few MARC coding practices and has been testing RDA.

On Sunday morning, I attended another Executive Board meeting and followed that up with one last session (Chris and I had to leave the conference a little before it closed in order to make our flight). The session was called “Making E-Serials Holdings Data Transferable: Applying the KBART Recommended Practice,” and was led by Jason Price of the Claremont Colleges Library and the SCELC Consortium. KBART stands for Knowledge Bases and Related Tools, and is a new recommended practice from NISO and the UKSG. It is a universally acceptable holdings data format created by publishers, aggregators, knowledge base vendors, and libraries, designed to allow the timely exchange of accurate holdings metadata between vendors and libraries. If the standard is accepted and applied, it should save libraries the trouble of badgering publishers for complete title access lists, spare them from having to untangle title changes and ISSN mismatches, and end the wait for knowledge base data teams to make updates. Price argued that librarians should learn about what KBART is and does, and should help lobby publishers to adopt KBART practices.
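
For a sense of what this looks like in practice, a KBART title list is essentially a tab-delimited text file with agreed-upon column headers (publication_title, print_identifier, online_identifier, coverage fields, title_url, and so on). Here is a rough sketch of loading such a file in Python; the file name and the handful of fields used are my own assumptions for illustration, not the full recommended practice.

```python
# Sketch: load a KBART-style tab-delimited title list and pull out a few
# fields. The file name and the field subset are assumptions for
# illustration; consult the KBART recommended practice for the real thing.
import csv

def load_kbart(path):
    """Read a tab-delimited KBART title list into a list of row dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f, delimiter="\t"))

def titles_with_online_identifiers(rows):
    """Keep rows that report an online identifier (e.g., an eISSN)."""
    return [r for r in rows if r.get("online_identifier")]

if __name__ == "__main__":
    rows = load_kbart("ExamplePress_AllTitles_2010-06-01.txt")   # hypothetical file
    for row in titles_with_online_identifiers(rows):
        print(row["publication_title"], row["online_identifier"], row.get("title_url", ""))
```

Because every provider would supply the same columns in the same format, a library or knowledge base vendor could load any publisher’s list with the same simple routine, which is exactly the time savings Price described.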

So, that’s about it. Oh, but I forgot to mention that on Saturday night there was a special dinner and reception in honor of NASIG’s 25th anniversary. The reception included dancing, which both Chris and I participated in. However, there’s no photographic evidence of it available (thank goodness), so you’ll just have to take my word on it.