ZSR AI Engagement Framework
Why This Framework Exists
AI tools are increasingly embedded in the platforms we use, the databases we license, and the workflows we're asked to support. While we don't always have control over when or how AI enters our environment, we do have agency in how we respond, what we endorse, how we implement it, and which values guide our decisions.
At ZSR Library, we affirm our commitment to our Core AI Values: transparency, critical thinking, curiosity, equity, human agency, academic integrity, privacy, and collaborative learning. This Engagement Framework establishes a shared set of principles to guide how we engage with AI technologies in ways that align with those values.
This framework is not intended to tell you what to do in every situation. Instead, it provides shared principles to support thoughtful decision-making and offers a common language for conversations with colleagues, vendors, and the broader campus community.
Our Starting Point: Honest Acknowledgment
Our staff hold a genuine range of views on AI: from enthusiastic adoption to cautious curiosity to principled concern about labor practices, environmental impact, and potential educational harms. This framework doesn't paper over that range. Instead, it treats disagreement as a resource: a prompt to move thoughtfully and to keep asking whether our choices align with our values.
We also acknowledge that AI has been embedded in library systems for years (e.g., recommendation algorithms, cataloging tools, accessibility features). What's changed is the technology's visibility, the hype surrounding it, and the intensity of the profession's conversations about it. Generative AI, in particular, raises new questions about intellectual honesty, creative labor, and the integrity of the information ecosystem we're charged with stewarding.
Engagement Principles
1. We are transparent with ourselves and others (Transparency)
When we use AI tools, we're honest about it: with each other and, where relevant, with the people we serve. We don't pretend AI-assisted work is something it isn't, nor do we hide our processes from colleagues who may learn from or build on them.
In practice: We're developing shared norms for acknowledging AI assistance in our workflows. We communicate clearly about AI features in the databases and tools we offer, even when we had no say in their inclusion.
2. We remain the human in the loop (Critical Thinking)
As library workers, we apply the same critical lens to AI tools and their outputs that we apply to all other information sources. We're cautious about AI uses that might shortcut intellectual struggle, undermine creative development, or make it harder for people to build genuine competence.
In practice: We always review AI-generated outputs before acting on them. We teach our users to evaluate these outputs for accuracy and bias, and we help them discern how to use AI responsibly in ways that support their learning and critical thinking, appropriate to their needs and level of expertise.
3. We stay learners ourselves (Curiosity)
The landscape is changing fast. We commit to ongoing learning: not to chase every new tool, but to understand enough to make informed decisions and to help our community do the same.
In practice: We learn about AI developments relevant to our work and share what we're learning with each other. We approach this not as boosters or resisters, but as curious professionals trying to figure out what serves our mission.
4. We care about equity and access (Equity)
The library remains committed to our values of equity and accessibility. We will remain open to AI’s potential to increase accessibility and discoverability for our users while remaining attentive to the ways AI can harm progress towards equity and access, including through training data biases, algorithmic biases, paywalls, and misinformation.
In practice: We will continue to explore ways in which AI can improve the accessibility and discoverability of the library’s resources. We will continue to monitor and evaluate equity and access issues as they relate to AI, and we will educate ourselves and our users about these issues as an ongoing component of our professional development and instructional practice.
5. We affirm that human agency and judgment remain essential (Human Agency)
By providing access to diverse sources of information and teaching critical evaluation skills, libraries have long supported the conditions that allow people to exercise human agency and make independent, informed decisions. While AI can assist with tasks, it cannot replace the professional judgment, ethical reasoning, and relational care that define library work.
In practice: To the fullest extent possible, we remain committed to helping our stakeholders retain the ability to make informed, independent choices about when and how they engage with AI. We use AI with care and professional judgment and do not rely on it for decisions requiring contextual understanding, the balancing of competing values, or responsiveness to individual needs. We do not use AI for communications that convey institutional values, evaluate employee performance, or express positions on sensitive matters.
6. We uphold academic integrity (Academic Integrity)
As AI reshapes how knowledge is created, shared, and accessed, the library continues to support a culture of academic integrity in scholarly work. We remain committed to informed and principled use of all technologies that are part of the scholarship life cycle.
In practice: We encourage thoughtful engagement with how AI use may affect academic integrity in the creation of scholarly work. This includes consideration of the responsible use of sources and data, the accuracy and reliability of AI-summarized content, and the disclosure of AI use in the research process. We help our users navigate the implications of using copyrighted content in university-licensed and non-university-licensed AI platforms.
7. We firmly believe that privacy and data protection are paramount (Privacy)
We do not input personal information, such as data protected under HIPAA or FERPA or other personally identifiable information (PII), or confidential institutional data into AI tools that are not approved for use by Wake Forest University. We take seriously our obligations to protect patron privacy, respect copyright, and honor the terms of our license agreements.
In practice: Before using an AI tool for any work task, we ask: What data am I putting into this system? Who else might access it? What are the terms of service? We keep current on license clauses related to AI and factor them into vendor negotiations. We are also committed to helping our patrons protect their privacy, to the extent possible.
8. We foster collaborative learning about AI (Collaborative Learning)
The library has a longstanding commitment to advancing research, scholarship, and information literacy. As AI becomes increasingly integrated into research and learning, the library is committed to helping our community thoughtfully evaluate, contextualize, and engage critically with information produced with AI assistance.
In practice: The library creates opportunities to partner with faculty, staff, and students in exploring AI’s role in scholarship and learning. The library supports the development of information literacy skills that help students thoughtfully consider when and how AI can assist in the research process, when it may not be appropriate to use it, and how to critically evaluate AI-generated information and research.
Living Document
This framework will evolve as we learn more, as the technology changes, and as we have more conversations. We'll revisit it regularly and update it based on what we're learning.
Framework developed by the ZSR AI Task Force, March 17, 2026. Initial concepts were informed by a staff survey; Claude was then used to produce a baseline draft. The Task Force substantially revised this draft, incorporating iterative feedback from library staff across departments into the final version.
Appendix: Summary of Staff Perspectives
In developing this framework, we gathered input from across ZSR. Key themes included:
On AI generally: Perspectives ranged from cautious curiosity to principled concern about environmental impact, labor exploitation, lack of regulation, and educational harms. Many expressed a "both/and" view, recognizing potential benefits while remaining wary.
On current uses: Staff reported using AI for transcription, drafting, idea generation, grammar checking, and teaching. Some avoid AI entirely; others use it selectively with human review.
On concerns: Privacy and data protection; equity between paid and free tiers; the pace of change outrunning guardrails; difficulty distinguishing thoughtful AI-assisted work from "slop"; the risk of normalizing tools built on contested practices.
On our role: Staff want ZSR to have a clear stance, to educate the community, to maintain our identity as a reliable information source, and to approach AI "like we approached the Internet—cautiously and with intent to educate."