Lauren and I had a really good dinner discussion with a VP of a database vendor, talking about what is and isn’t important for researchers and libraries. That VP and our regular sales rep have already scheduled a campus visit to continue the conversation.
I had a conversation with a publishing company’s VP of Sales regarding demand-driven acquisition (DDA). I described the DDA usage and spending patterns we have seen here, and we talked about the difficulties of finding a sustainable balance for publishers and libraries. We also talked about “evidence-based acquisition” (EBA), where the customer pays first for access, then at the end of the access period can select content for perpetual access, up to the amount paid. I told the VP that the entry cost for EBA is typically too high. He immediately understood—the up-front price that a large library could afford would be cost-prohibitive for smaller libraries. He seemed to like my suggestion that they base the entry cost on the customer’s historic spend.
I had a good meeting getting to know our e-book vendor’s new rep, and his supervisor sat in on part of our meeting so I was able to bend her ear too, mainly about DDA. I learned that there is talk of developing a variation on the short-term-loan DDA model, though nothing concrete yet as far as I know. I don’t want to divulge any secrets here, but I am cautiously optimistic about what they told me.
There were lots of other productive conversations; in all I spoke with at least 17 vendors, and those are just the ones I kept track of. It feels weird to keep this section of my report so brief, but I fear the rest of the vendor stories would get tedious.
A speaker from a large university library described how they collect and analyze data about e-resource outages. Staff enter and track e-resource problem reports in a commercial incident-tracking system, recording the cause (e.g. metadata error, simultaneous-user limit, user error), the time it took to resolve, and other data. Tracking outages allows them to become aware of trends. One benefit is that they can present a record of incidents to vendors, with actual numbers instead of “your site goes down a lot.” In the first year of collecting data, proxy problems accounted for only a small fraction of the total errors, while 25% of errors occurred because the target content was missing from the vendor’s site (i.e. an article or issue missing from a database).
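To make the idea concrete, here is a minimal sketch of the kind of tallying that incident tracking enables. The cause categories follow the talk, but the incident data, counts, and layout below are entirely invented for illustration; the library's actual system is a commercial product, not a script like this.

```python
from collections import Counter

# Hypothetical incident log: (cause, minutes_to_resolve) pairs.
# The category names come from the presentation; the data is made up.
incidents = [
    ("missing content", 90),
    ("proxy", 15),
    ("metadata error", 45),
    ("missing content", 120),
    ("simultaneous-user limit", 10),
    ("user error", 5),
    ("missing content", 60),
    ("metadata error", 30),
]

causes = Counter(cause for cause, _ in incidents)
total = len(incidents)

# Report each cause with its share of all incidents, most common first.
for cause, count in causes.most_common():
    print(f"{cause}: {count} ({100 * count / total:.0f}% of incidents)")
```

Even a tally this simple is what turns “your site goes down a lot” into a number a vendor can act on.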
In another session, a representative from a large university press spoke about how usage-based acquisition is affecting the Press. She acknowledged that DDA is scary because they know that not every book will get used, but the only way to know which books will get used is to publish them. She said it will take a while for them to evaluate DDA because they don’t know yet when the revenue for a book will come in and it is difficult to assess which marketing efforts are working. She also expressed a concern that was a new idea to me—she wondered whether access to a large pool of DDA titles might actually obscure the fact that libraries are underfunded.
I attended a presentation by Len Vlahos, Executive Director of the Book Industry Study Group (BISG). Vlahos said wholesale book revenue has remained fairly flat over the past five or six years. The rate of growth of e-book sales has slowed (i.e. still growing, but the curve has flattened); hardcover revenue dipped in 2010 but has since recovered. Sales of print textbooks are declining, but that trend is publisher-driven, unlike the consumer-driven trade market. Publishers are developing online interactive learning systems as a replacement for printed textbooks, since textbooks that are simply digitized versions of the print are not well received. Vlahos predicted that the next big disruption in the book industry will be a business-model innovation (like retail discounting in the 1970s or e-commerce in the 1990s) rather than a technological one (like the printing press or the Kindle). He noted the growth of a subscription economy, in which consumers are being trained that it’s ok not to own content (Netflix, Spotify, Pandora, etc.), and even beyond content (ZipCar, bikeshare), and suggested that publishers expect the subscription model to have a positive effect on revenue within the next 5 years.
The Continuing Resources Standards Forum included an overview of the NISO Recommended Practice for Demand-Driven Acquisition of Monographs. The standard was published last June, and to me it already felt a little out of date because it doesn’t address some of the more recent tensions in the DDA market. The Forum also included a review of the very new (published last month) NISO Recommended Practice on Access and License Indicators. This is a simple standard for encoding, at the article level, whether or not an article is “free to read,” plus a link to the article’s license information.
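As I understand the Recommended Practice, the encoding boils down to two small elements in the article metadata. Here is a sketch in Python of parsing such a fragment; the `ali:` element names and namespace follow my reading of the standard, but the surrounding `article-meta` wrapper, the `start_date` attribute value, and the license URL are my own illustrative assumptions, not an official sample.

```python
import xml.etree.ElementTree as ET

# Namespace for NISO Access and License Indicators (to my knowledge).
ALI_NS = "http://www.niso.org/schemas/ali/1.0/"

# A hypothetical article-metadata fragment: one flag saying the article
# is free to read, plus a link to its license. The wrapper element and
# values are invented for this sketch.
fragment = f"""
<article-meta xmlns:ali="{ALI_NS}">
  <ali:free_to_read start_date="2014-06-01"/>
  <ali:license_ref>https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
</article-meta>
"""

root = ET.fromstring(fragment)
free = root.find(f"{{{ALI_NS}}}free_to_read")
license_ref = root.findtext(f"{{{ALI_NS}}}license_ref")

print("free to read:", free is not None)
print("license:", license_ref)
```

The appeal of the standard is exactly this smallness: a crawler or link resolver only needs to look for two elements to answer “can my user read this, and under what terms?”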
In an excellent overview of linked data, the presenter described the evolution of the Web from a web of static documents to a web of data. In the web of data, instead of describing an entity with a record (i.e. a surrogate for the entity), an entity has its own unique identifier, and that identifier is where you go for information about the entity; BIBFRAME, for example, is about identifying bibliographic entities in this way. The presenter said that libraries have been very involved in the web of documents, but cautioned about the danger of a “library-shaped black hole” in the web of data. Library projects have tended to use library vocabulary instead of the vocabulary of the larger web, so it is difficult for web searches to find and link to them. The presenter said that the reason libraries should share linked data on the web is the same as the historical reason for cataloging – “So people can find our stuff.”
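The record-versus-identifier distinction can be sketched in a few lines. In the toy example below, statements are triples whose subject is the entity’s URI rather than fields inside a record; all the URIs are invented placeholders (and one predicate is made up), with only `schema.org/name` being a real web vocabulary term, which is the presenter’s point about preferring the larger web’s vocabulary.

```python
# Toy web-of-data sketch: each entity has a URI, and statements about it
# are (subject, predicate, object) triples pointing at that URI.
# All example.org URIs are invented placeholders, not real identifiers;
# "author-of" is an invented predicate for illustration.
triples = [
    ("http://example.org/person/melville",
     "http://schema.org/name",          # real schema.org property
     "Herman Melville"),
    ("http://example.org/person/melville",
     "http://example.org/vocab/author-of",
     "http://example.org/work/moby-dick"),
]

def describe(subject_uri):
    """Return every (predicate, object) asserted about one identifier."""
    return [(p, o) for s, p, o in triples if s == subject_uri]

for predicate, obj in describe("http://example.org/person/melville"):
    print(predicate, "->", obj)
```

Because anything on the web can add triples about that same URI, the entity’s description can grow beyond any one institution’s record, which is what makes vocabulary choice matter for discoverability.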