I have attended the DLF Forum every year since I began library school, but this year was the first year that I attended as a full-fledged librarian. It was a very different experience to attend the Forum while constantly asking myself “What will I bring back to ZSR?” Below are three of my major takeaways, culled both from formal conference sessions and from informal conversations with other attendees.
Investigate moving towards large-scale digitization of archival materials.
The digitization of rare and unique materials broadens access to those materials beyond the reading room to any screen that can access the Web. Early digitization projects often cherry-picked specific items to digitize and created rich descriptions of those items, similar to how items might be selected for a physical exhibition. Increasingly, however, digital collection managers recognize that completely digitized collections support scholarly inquiry better than boutique digitization efforts. Both an access model and a content strategy, large-scale digitization¹ selects entire collections (or entire series within collections) for digitization, and online access replicates the reading room experience by contextualizing individual items within the archival arrangement of a processed collection. Rather than painstakingly creating metadata at the item level, large-scale digitization makes use of existing metadata from the finding aid at the container, series, and collection level. This approach can both streamline production workflows and better meet the needs of researchers.
At the DLF Forum, a panel presentation titled Big Archival Data: Designing Workflows and Access to Large-Scale Digitized Collections focused on how the principles of large-scale digitization have been put into practice in different institutional contexts. Michael Doylen and Ann Hanlon of the University of Wisconsin-Milwaukee discussed the digitization of the Kwasniewski photographs, the collection of a Polish-American photographer who captured images of the Polish community in Milwaukee. For 80% of the digitized photographs, existing item-level metadata transcribed from negative sleeves during processing of the collection was reused; the remaining 20% were designated for further image processing and metadata enhancement – e.g. unique and more specific titles, description, and additional subject headings. By taking a comprehensive approach, this digital collection makes available “the rare, the lesser-known, the overlooked, the neglected, and the downright excluded.”² Following Michael and Ann, Karen Weiss of the Smithsonian Archives of American Art discussed the workflows that her institution has developed to link container lists in its finding aids to digital materials in its digital asset management system. Starting from a collection summary page, a researcher can browse to a particular series and then view all of the items contained within a particular folder. In this way, the digital collections experience better approximates the in-person reading room experience.
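To make the metadata-reuse pattern concrete, here is a minimal sketch of how item records for digitized files might inherit description from a finding aid’s collection, series, and container levels rather than receiving bespoke item-level metadata. All collection names, container labels, and filenames below are hypothetical; this is an illustration of the general approach, not either institution’s actual system.

```python
# Hypothetical finding-aid data: description exists only at the
# collection, series, and container (box/folder) levels.
finding_aid = {
    "collection": "Jane Doe Papers",
    "series": {
        "Series 1: Correspondence": {
            "Box 1, Folder 2": ["scan_0001.tif", "scan_0002.tif"],
        },
        "Series 2: Photographs": {
            "Box 3, Folder 1": ["scan_0003.tif"],
        },
    },
}

def item_records(finding_aid):
    """Yield one record per digitized file, with each record
    inheriting its archival context from the finding aid instead
    of requiring painstaking item-level description."""
    for series_title, containers in finding_aid["series"].items():
        for container, scans in containers.items():
            for scan in scans:
                yield {
                    "collection": finding_aid["collection"],
                    "series": series_title,
                    "container": container,
                    "file": scan,
                }

records = list(item_records(finding_aid))
```

Each digitized file ends up browsable in context – collection, then series, then folder – which is how a system linking finding aids to a digital asset management system can approximate the reading room experience online.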
When performing digital humanities outreach to faculty and students, lead with content.
Another advantage of a large-scale digitization approach is that it enables the library to market its digital collections as corpora for digital humanities research. During THATCamp Digital Humanities & Libraries, which followed the DLF Forum, I had the opportunity to chat with Zoe Borovsky, Librarian for Digital Research and Scholarship at the UCLA Library. Zoe shared that one tack she is taking more and more frequently is to demonstrate that UCLA’s digitized special collections support digital humanities modes of inquiry, because the more faculty who build digital projects on top of existing digital collections, the more digital projects the library can support. Thus far, I’ve reached out to a few faculty members I’ve met at social events to learn more about their digital scholarship and pedagogy and how the library might support those aspects of their work. But in the emerging area of digital humanities, it’s not always the case that there’s an existing library solution to a faculty problem. At this stage, my goal is to build relationships and gather requirements. Do some faculty want to create crowd-sourced collections, which they could eventually contribute to WakeSpace? Do other faculty want to text mine newspapers? Do still others want to use Omeka to incorporate building digital collections into course projects? These needs are quite heterogeneous! In the presentation Testing Omeka for Core Digital Library Services, Jenn Riley (formerly of the University of North Carolina at Chapel Hill, now of McGill University) said that she is planning for a future in which every humanities faculty member at her university is interested in creating a digital project. With that kind of scalability in mind, when I meet with faculty, I will gather requirements and also market ZSR’s existing digital collections as potential corpora for digital humanities research.
Investigate adopting the DMPTool to support data management planning for faculty.
The DMPTool enables universities to provide investigators who are writing data management plans with custom guidance. The DMPTool has been available for some time now, but a new version was recently released, and the development team presented at the session DMPTool2: Improvements and Outreach at the DLF Forum. At our last Digital Scholarship team meeting, we discussed investigating the DMPTool as a goal for next year. When an institution adopts the DMPTool, administrators can provide suggested answers for each question on a particular funder’s data management plan form. After modifying the suggested answers supplied in the DMPTool, the investigator can generate a PDF of their data management plan and append it to their grant application. Customization of the DMPTool now includes the option to provide Shibboleth authentication. DMPTool2 improvements for plan creators include the ability to:
- copy existing plans into new plans
- work collaboratively with colleagues – e.g. add co-owners of plan
- request review of plans
- share plans within institutions
- provide public access to plans
DMPTool2 improvements for administrators include:
- a module that enables direct editing of customized responses to different funder templates, as well as the ability to create your own templates (previously, administrators had to email the DMPTool development team to enact this sort of customization)
- several new administrator roles – e.g. institutional reviewer and institutional administrator
- enhanced search and browse of plans
- mandatory or optional review of plans
Outside of conference hours, I enjoyed exploring Austin. Highlights included visiting the flagship Whole Foods store, watching the bat colony emerge from under the First Street bridge at dusk, and eating fabulous mole at El Naranjo (an authentic Mexican restaurant recommended by the Texas Monthly). Work hard, play hard!
(1) For a formal definition of large-scale digitization, see page 55 of the 2010 OCLC Research report Taking Our Pulse: The OCLC Research Survey of Special Collections and Archives.
(2) Flanders, Julia. (2009). The productive unease of 21st-century digital scholarship. Digital Humanities Quarterly, 3(3). http://www.digitalhumanities.org/dhq/vol/3/3/000055/000055.html