Steve, Leslie, Monesha, and I attended NCLA RTSS’s Fall Workshop at North Carolina A&T State University in Greensboro on 10/7/16. The workshop was impressively well-attended by folks from throughout the state. By way of summing up, we wrote a paragraph or two each. Here they are, in alphabetical order by author (aka Jeff-first order):

“Watch This! Including Streaming Video in Our Collections” (Jeff Eller)

Christine Fischer from UNCG and Angela Dresselhaus from ECU talked through a number of issues and best practices involved in providing streaming video to campus communities. At this point the list of major streaming vendors in the academic market is fairly well established: Alexander Street (“Press” is no longer part of their name), Swank Digital Campus, Kanopy, Docuseek2, Films on Demand, etc. Just a couple of years ago, when I started at ZSR, that was much less true. Christine Fischer cited a 25% drop in circulation of DVDs from FY2015 to FY2016 at UNCG, a decrease likely attributable, at least in part, to streaming video offerings. (Incidentally, Carol Cramer did a smart cost-per-use analysis a while back here at ZSR that pretty strongly supported our continued collecting of DVDs.) When it comes to streaming video, many of the same problems involving licensing, availability, and patron demand seem to recur from institution to institution. I ended up contributing quite a bit to the conversation (it’s true; ask Monesha).

“BIBFRAME and Linked Data for Libraries: What is the Future for Bibliographic Exchange?” (Steve Kelley)

The day began with a presentation by Beth Cramer and Andie Leonard from Appalachian State, called “BIBFRAME and Linked Data for Libraries: What Is the Future for Bibliographic Exchange?” Cramer began by discussing how the MARC format, although it has done solid work for decades, is gradually losing utility. It severely limits how we can integrate our resources with the Web, and it makes it hard for our data to be discovered there. BIBFRAME (or BF) allows individual pieces of data (such as the name of an editor or illustrator) to be discovered on the Web, because the data isn’t trapped in a record in a siloed database, as it is currently with MARC records and separate catalog databases. So, how does that work with BF? Well, with personal names, BF records contain URIs (Uniform Resource Identifiers) that link directly to a definitive authority, like the Library of Congress Authority File. That means that when the authority record in the LC Authority File is changed, the headings are automatically up to date, because the name in the BF record is simply a URI that points to the LC Authority File, not a string of letters that requires updating. Cramer showed a brief example of how records are created using a BIBFRAME Editor program, which has drop-down menus that help the cataloger fill in information. The program also automatically looks up names in the LC Name Authority File to find the correct URI.
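To make that string-versus-URI distinction concrete, here’s a minimal Python sketch of the idea. This is my own illustration, not anything shown in the session; the URI, names, and records are all made up.

```python
# A minimal sketch of the string-vs-URI distinction described above.
# The URI, names, and records are hypothetical, not real LC data.

# The "authority file": one canonical label per URI, maintained centrally.
authority_file = {
    "http://id.loc.gov/authorities/names/n00000000": "Smith, Jane, 1950-",
}

# MARC-style records each carry their own copy of the heading string,
# so a change to the authorized form means editing every record.
marc_records = [
    {"title": "A First Book", "author": "Smith, Jane, 1950-"},
    {"title": "A Second Book", "author": "Smith, Jane, 1950-"},
]

# BF-style records carry only the URI; the label lives in one place.
bf_records = [
    {"title": "A First Book",
     "author": "http://id.loc.gov/authorities/names/n00000000"},
    {"title": "A Second Book",
     "author": "http://id.loc.gov/authorities/names/n00000000"},
]

def display_author(record):
    """Resolve the author URI to its current authorized label."""
    return authority_file[record["author"]]

# The authority record is revised once (say, a death date is added)...
authority_file["http://id.loc.gov/authorities/names/n00000000"] = \
    "Smith, Jane, 1950-2016"

# ...and every BF record reflects the change with no record edits at all.
for record in bf_records:
    print(record["title"], "/", display_author(record))
```

The point is that the correction happens once, in the authority file, and every linked record picks it up for free.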

Cramer and Leonard also discussed a number of challenges to moving to BIBFRAME. We have millions of legacy MARC records that will need to be transformed into BF, which promises to be an expensive and complicated process. We will also have to deal with a changing skill set for metadata librarians: we might not all need to become programmers, but we’ll have to know enough to join in the conversation. We will need to play by the rules of the Web, which means thinking in terms of shared, online data rather than our own individual databases. We will also need vendor cooperation, which currently lags far behind what we need to see before BF can become a viable option. Finally, Leonard emphasized that we will need to shift our focus and our thinking as BF is developed, and we will need a new business model to cope with the changes brought by BIBFRAME and linked data.
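To give a sense of what that legacy transformation involves, here’s a rough Python sketch of just one early step: matching name headings in MARC records to authority URIs. It uses the pymarc library; the file name and lookup table are placeholders, and real converters do far more than this.

```python
# A rough sketch of one step in a MARC-to-BF transformation: replacing
# name heading strings with authority URIs. The file name and lookup
# table are placeholders; production converters are far more involved.
from pymarc import MARCReader

# In practice this lookup would query the LC Name Authority File;
# a hypothetical in-memory table stands in for it here.
name_to_uri = {
    "Smith, Jane, 1950-": "http://id.loc.gov/authorities/names/n00000000",
}

with open("legacy_records.mrc", "rb") as f:
    for record in MARCReader(f):
        # 100 = main entry personal name; 700 = added entry personal name
        for field in record.get_fields("100", "700"):
            for name in field.get_subfields("a"):
                uri = name_to_uri.get(name)
                if uri is None:
                    # Unmatched headings need human review -- one reason
                    # converting millions of records is so expensive.
                    print("NO MATCH:", name)
                else:
                    print(name, "->", uri)
```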

“Supporting Research Data: Public and Technical Services Collaboration” (Leslie McCall)

Two colleagues from UNCG shared their workflow for managing research data (i.e., datasets). They’ve recently created a new position, Data Services and Government Information Librarian, to assist faculty with developing data management plans (they recommend the University of California’s DMPTool) and with the legal aspects of depositing datasets (copyright clearance, confidential data, etc.). Technical Services then does the upload work and creates the metadata. They partner with UNC-Chapel Hill’s Odum Institute and deposit into the multi-school repository NC DOCKS. Because NC DOCKS’ native metadata structure is pretty basic, they also make use of Dataverse, an open-source repository platform. They’ve found that Dryad, Figshare, and ICPSR work better for images and media.
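As a rough illustration of the kind of upload work Technical Services handles, here’s a Python sketch of creating a dataset through Dataverse’s native API. The server URL, collection alias, API token, and metadata file are placeholders, and UNCG’s actual workflow may well differ.

```python
# A rough sketch of depositing a dataset via Dataverse's native API.
# Server URL, collection alias, API token, and metadata file are
# placeholders; the actual workflow at UNCG may differ.
import json
import requests

SERVER = "https://dataverse.example.edu"   # hypothetical Dataverse host
COLLECTION = "example-collection"          # hypothetical collection alias
API_TOKEN = "xxxxxxxx-xxxx-xxxx"           # issued per user by Dataverse

# Dataset metadata in Dataverse's dataset JSON format (title, authors,
# description, etc.), prepared ahead of time by the metadata librarian.
with open("dataset.json") as f:
    metadata = json.load(f)

resp = requests.post(
    f"{SERVER}/api/dataverses/{COLLECTION}/datasets",
    headers={"X-Dataverse-key": API_TOKEN},
    json=metadata,
)
resp.raise_for_status()
print("Created dataset:", resp.json()["data"]["persistentId"])
```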

“OERs and Alt-Texts in Tech Services: How to Get the Job Done” (Monesha Staton)

I attended a breakout session on OERs (Open Educational Resources) and alt-texts in Technical Services. Since I had just cataloged quite a few very expensive textbooks, I was interested to see what alternatives were out there and what other institutions were doing. Lisa Barricella (ECU) and Beth Bernhardt (UNCG) shared some mind-boggling figures on the cost of textbooks and the effect it has on student loan debt. To help offset those rising costs, UNCG began providing open-source textbooks as an alternative, and they got buy-in from some professors by offering mini-grants that encourage the use of low-cost or free alternatives to expensive course materials. In the program’s first year, students saved over $214,470. I was very impressed with both ECU’s and UNCG’s success with their respective programs; their collaborative efforts were recognized with an LSTA grant to expand the work and share their experiences with others. I hadn’t realized that there were so many alternative options for our faculty and students. You can find more information about UNCG’s collections here.