The E-Resources and Libraries (ER&L) Conference did not disappoint! I’ve written up what I think is of broadest interest.
Robert McDonald’s keynote address, “Dawn of the Machine Age? Near Futures for Libraries and Higher Education,” was eye-opening for me. I was unaware that author Ray Kurzweil predicts a singularity in 2045, when “non-biological intelligence will vastly exceed the sum of all human intelligence, marking a profound shift in existence” (McDonald citing Ray Kurzweil, author of The Singularity Is Nearer: When We Merge with AI). McDonald contrasted several historical experiences of libraries with expectations for the future. For example, compare the earlier effort to achieve a single search box for finding sources with today’s push toward seamless discovery of answers across a vast quantity of content. He had a slide (#10) titled “Beyond the Collection: The Library’s New Mandate” with these points: algorithmic auditors (librarians teaching students to “ask critical questions about data and provenance”); provenance as a service; and democratized access. His slides and ancillary resources are openly available.
In Slack, esteemed colleague Miranda Bennett shared a helpful blog post related to this keynote: The Library’s New Entryway – An interface that combines the advantages of the traditional index with the power of LLMs is the path forward, by Dan Cohen. Worth a read!
Evaluating the AI in library tools and resources
Illinois State University (ISU) created an evaluation method for AI enhancements to tools and e-resources we use (e.g., Primo, ProQuest, Web of Science, JSTOR, Oxford, and multiple Elsevier products). They started from a yes/no decision tree from Oregon State and adapted it into a document template with questions to ask internally and questions to ask vendors. The last section of their template is a list of tests to perform for accuracy, hallucinations, etc. One item on that list compares results from the natural language search feature against the same search run without it. They prioritized evaluating tools that offer an opt-out, and for now they will not evaluate tools where the AI feature is active with no opt-out. See https://library.illinoisstate.edu/about/policies/ai/ for the library’s AI policy. This session was titled “Fest of Frights: Find Your Way through the AI Maze of Emerging Tools and Features in Electronic Resources.” Please let me know if you want more information.
Bowling Green State University librarians offered a session titled “Applying an Ethical Framework for Assessing the Proliferation of genAI tools in Electronic Resources” that explained a rigorous scoring rubric. One of “the best AI tools” they evaluated with it scored 23.5 out of 100; they consider scores around 75 desirable. The rubric covers many points, such as data center cooling type, electricity usage, data source (what LLM was used), and data privacy (i.e., whether the vendor collects data on users of the product), to name a few. If the answers were not readily available from the vendor, zero points were awarded for that criterion.
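The zero-for-undisclosed scoring mechanism the BGSU librarians described can be sketched in a few lines of Python. This is a hypothetical illustration only: the criterion names, point weights, and example marks below are all invented, not taken from their actual rubric.

```python
# Hypothetical rubric tally mirroring the approach as described:
# criteria carry maximum point values, and any criterion the vendor
# left unanswered scores zero. All names and weights are invented.

MAX_POINTS = {                      # maximum points per criterion
    "data_center_cooling": 20,
    "electricity_usage":   20,
    "data_source_llm":     30,
    "data_privacy":        30,
}

def score_tool(earned):
    """earned maps criterion -> points awarded, or None when the
    vendor did not disclose the information (which scores zero)."""
    total = 0
    for criterion, max_pts in MAX_POINTS.items():
        pts = earned.get(criterion)
        if pts is None:              # no answer from the vendor
            continue
        total += min(pts, max_pts)   # cap at the criterion's maximum
    return total

# A tool with one undisclosed criterion and partial marks elsewhere.
example = {"data_center_cooling": None, "electricity_usage": 10,
           "data_source_llm": 8, "data_privacy": 6}
print(score_tool(example))           # 24 of a possible 100
```

The key design point is that missing vendor transparency is penalized the same as a failing answer, which is what drives scores like 23.5/100 for otherwise capable tools.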
Handling e-resource access problems
I saw at least two sessions where presenters reported that it saved a lot of staff time to ask patrons reporting an e-resource access problem whether they wanted follow-up contact. (Some people just want to report the problem so that it can be fixed.)
One session underscored something we in Resource Services worry about: ZSR’s lack of capacity to address e-resource maintenance well. Nevada State carried out research showing that proactive maintenance is “meaningful,” in that it can prevent access problems for researchers. They found that about 1 in every 5 links fails, and that over 80% of the link problems in Primo were holdings-related. Student assistants carried out the project, noting in a spreadsheet which citations from a random test pool they could reach successfully and which they could not. Even with instruction, the students had a hard time correctly identifying the types of access problems. The librarians plan a follow-up project with more emphasis on that training and a slower pace, checking 25 links a week instead of 125. Full-time employees replicated the problems the students encountered in order to pursue resolutions; a small percentage of resolutions required reporting the problem to the vendor via a ticket.
Another session, by librarians at Texas Tech, reported on their analysis of “Ask-A-Librarian” chat transcripts related to troubleshooting e-resources. Maintenance-based problems (metadata, parser, coverage, improper activations, etc.) were the third-largest category of problem types in their analysis.
Carol Cramer and Kathy Martlock did an excellent job in their own presentation on e-resource maintenance in our WFU Alma. (They handle some batches for Law and Medical.) They explained how they streamlined keeping up with the Alma Community Zone Updates Task List (CZUTL), clearly dividing up the work even though that feature, unlike some other parts of the system, does not allow assigning items to individuals. They set up parameters by report type and vendor so that Kathy can dismiss or attend to many items and refer the hard-to-figure-out ones to Carol. Carol described the background and strategy; Kathy walked through the detailed how-to, pulling two spreadsheets together (one of parameters, the other the chosen report) and using VLOOKUP to flag the different actions, then carrying out those actions in the CZUTL. This was Kathy’s first presentation! Kudos to our colleagues, both on the ingenious streamlining and the smooth presentation!
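For readers who don’t live in Excel, the VLOOKUP step described above amounts to a keyed join between the parameters sheet and the CZUTL export. Here is a minimal Python sketch of that idea; the column names, report types, vendors, and action labels are all invented for illustration, not taken from their actual spreadsheets.

```python
# A VLOOKUP-style match: join a task-list export against a parameters
# sheet keyed on (report type, vendor), flagging each row with the
# action to take. All field names and values here are hypothetical.

# Parameters sheet: which (report type, vendor) pairs get which action.
parameters = [
    {"report_type": "Portfolio deleted", "vendor": "SpringerLink", "action": "dismiss"},
    {"report_type": "Coverage changed",  "vendor": "JSTOR",        "action": "review"},
]

# Task-list export: one row per reported Community Zone change.
task_list = [
    {"report_type": "Portfolio deleted", "vendor": "SpringerLink", "title": "Journal A"},
    {"report_type": "Coverage changed",  "vendor": "JSTOR",        "title": "Journal B"},
    {"report_type": "Link changed",      "vendor": "Oxford",       "title": "Journal C"},
]

# Build the lookup table -- the equivalent of VLOOKUP's search range.
lookup = {(p["report_type"], p["vendor"]): p["action"] for p in parameters}

# Flag each row; anything without a match gets referred for review.
for row in task_list:
    row["action"] = lookup.get((row["report_type"], row["vendor"]), "refer")
```

The unmatched-row default ("refer") plays the role of routing the hard-to-figure-out items to a second person, as in the workflow Carol and Kathy described.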
