It probably only seemed like everyone was talking about linked data, because that was the focus of most of the sessions I attended.

One of the more interesting ones was the Library of Congress BIBFRAME Update Forum, because in addition to Sally McCallum and Beacher Wiggins of LC, it had speakers from Ex Libris, Innovative Interfaces, SirsiDynix, Atlas (think ILLiad and Ares), OCLC, and Zepheira. At this stage, I think they were all trying to reassure clients that they would keep up with change. I took more notes on Ex Libris than on the others since we’re a current customer. After some prologue on revolution vs. evolution, Ido Peled, VP, Solutions and Marketing, said that moving to a native linked data catalog is more revolutionary, and that Ex Libris is more comfortable with evolution. But I thought he gave more concrete evidence of readiness for linked data than the others, because he said ALMA was built to support MARC and Dublin Core already and that Primo Central is already in RDF format, using JSON-LD. He also emphasized the multi-tenant environment and said, “Technology isn’t the focus. The focus is outcomes.” Because linked data involves relying on others’ data and interlinking it with your own, the “multi-tenant” concept suddenly made sense and helped me understand why I keep hearing about groups moving to ALMA, like Orbis Cascade. I’ve also heard from individuals that it hasn’t been easy, but when is a system migration ever easy?
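I don’t know what Primo Central’s records actually look like under the hood, but just to make the JSON-LD point concrete, here is a tiny made-up example (the title, identifiers, and choice of Dublin Core terms are all mine) of a bibliographic description serialized as JSON-LD from Python:

```python
import json

# A made-up bibliographic description expressed as JSON-LD; the title,
# identifiers, and vocabulary choices are illustrative, not real Primo
# Central data.
record = {
    "@context": {
        "dct": "http://purl.org/dc/terms/",
        "dct:creator": {"@type": "@id"},
    },
    "@id": "http://example.org/record/12345",
    "@type": "dct:BibliographicResource",
    "dct:title": "An Invented Title, for Illustration Only",
    "dct:creator": "http://viaf.org/viaf/00000000",
}

# Serializing it is ordinary JSON; the @context is what maps the keys
# to RDF vocabularies and makes the result linked data.
print(json.dumps(record, indent=2))
```

The @context is the part that ties plain JSON keys to RDF terms, which is what makes a record like this linked data rather than just another JSON payload.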

I also attended “Getting Started with Linked Open Data: Lessons from UNLV and NCSU.” They each worked on their own linked data projects, figuring out tools to use (like OpenRefine) and workflows. Then they tested on each other’s data to help them refine the tools for use with different future projects and for sharing them broadly with the library community. They both said they learned a lot and made adjustments to the tools they used. I got a much better sense of what might be involved in taking on a linked data project. The successes and issues they covered reminded me of our work on authority control and RDA enhancement: matches and near matches through an automated process, hits and non-hits against VIAF, and cleaning up and normalizing data for extra spaces, punctuation, etc.

In fact, this session built well on “Data Clean-Up: Let’s Not Sweep It Under the Rug,” which was sponsored by the committee I’m on with Erik Mitchell, the ALCTS/LITA Metadata Standards Committee. There I got a good foundation in using MarcEdit and OpenRefine to normalize data by eliminating stray spaces and punctuation. While I knew regular expressions were powerful, I finally learned what they can do. In one example, punctuation stemming from an ampersand in an organization name caused the data to be parsed incorrectly, breaking the name apart each of the thousands of times it appeared. A regular expression can fix this in an automated way, with no need to correct each instance one by one. (Think in terms of how macros save work.)
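The session didn’t walk through the exact expression, but here is a minimal sketch in Python of the kind of one-pass fix described; the sample records, the field layout, and the assumption that the ampersand had been escaped as “&amp;” are my own inventions for illustration:

```python
import re

# Invented sample: pretend an export escaped "&" as "&amp;", and downstream
# processing mangled the organization names around it.
records = [
    "Baldwin, Wallace &amp;   Co.|New York",
    "Society of Arts &amp;Sciences|London",
]

def clean(record: str) -> str:
    # Turn the escaped entity back into a literal ampersand, absorbing any
    # stray spaces around it, and put a single space on each side.
    record = re.sub(r"\s*&amp;\s*", " & ", record)
    # Collapse any leftover runs of whitespace.
    return re.sub(r"\s{2,}", " ", record)

cleaned = [clean(r) for r in records]
print(cleaned)
# ['Baldwin, Wallace & Co.|New York', 'Society of Arts & Sciences|London']
```

In MarcEdit or OpenRefine you would apply the same sort of pattern in a regex find-and-replace across the whole file rather than writing a script, which is where the “macros save work” comparison comes in.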

The ALCTS President’s Program, “Three Short Stories about Deep Reading in the Digital Age,” featured Maryanne Wolf, Director of the Center for Reading and Language Research and John DiBiaggio Professor of Citizenship and Public Service at Tufts University. It was interesting to learn from her that our brains weren’t designed for reading; think about cavemen and their primary goals, which didn’t include reading. She gave a great overview of the development of language and reading and, incidentally, showed that different parts of the brain light up for readers of CJK languages than for those of us who read other languages. This was all foundation leading up to how the brain operates when we read on a screen. The way we read on a screen results in the loss of certain abilities, like reflection and making connections, and she has measured that it takes time to regain those abilities. She isn’t by any means anti-electronic, though; she’s doing interesting work in Ethiopia with kids learning by using tablets. We’ll have to get her forthcoming book when it is finished!

I also attended committee meetings, met with vendors, networked, and got to catch up with former colleagues Erik Mitchell and Lauren Pressley over a dinner that Susan organized. (Thanks, Susan!) I especially enjoyed catching up with former colleagues Charles Hillen and Ed Summers, both dating back to my days at ODU in Norfolk, Virginia. Charles now works for YBP as Director of Library Technical Services, and Ed just received the Kilgour Award from LITA/OCLC. Thanks to Ed, I got to meet Eric Hellman, president of the company that runs Unglue.it. And thanks to WFU Romance Languages faculty member Alan Jose, who mentioned the idea, I went Monday afternoon with Derrik and Carolyn to visit the Internet Archive offices, where we met Brewster Kahle. The volume the organization handles is mind-blowing! Kahle says they collect only about 40 TV channels right now, and that it is not enough. They designed the book digitization equipment they use (and are selling it at a reasonable price, too). They have people digitizing reels of film, VHS tapes, and audio, but Kahle says they’ve got to come up with a better method than equipment that relies on magnetic heads, which are hard to find. Someone is working on improving search right now, too. Some major advice offered was to learn Python!