ZSR Guide to Using AI
Appendices: Decision-Making Questions
When you're uncertain whether or how to use AI for a task, consider:
- Purpose: What am I trying to accomplish, and is AI the right tool for this? Could I accomplish it without AI? What would I lose or gain?
- Data: What information would I need to put into this tool? Does that raise privacy, confidentiality, or licensing concerns?
- Quality: Can I verify the output? Do I have the expertise and time to catch errors? What are the consequences if the output is wrong?
- Values: Does this use align with our principles? Would I be comfortable explaining this choice to a colleague, a patron, or the public?
- Equity: Who benefits from this use? Does it create or reinforce disparities?
- Learning: If this involves learning, skill-building, or professional development, does the AI use support genuine learning, or does it risk short-circuiting the intellectual work?
- Agency: To the extent that I am able, can I provide my students, users, and/or colleagues the opportunity to choose their level of engagement with AI?
Where We're Using AI in Spring 2026
These are areas where we've found AI tools may be useful and consistent with our values:
- Transcription for oral histories and recordings
- Drafting assistance for routine communications (with human review and revision)
- Idea generation in early planning stages
- Syntax and grammar checking for polished written work
- Teaching about AI in information literacy contexts: helping students evaluate AI-generated content, understand limitations, and make informed choices
- AI-driven search assistants with human review and revision
- Lesson planning, including scaffolding concepts and generating assessments and rubrics
Where We Proceed with Care in Our Use of AI in Spring 2026
These are areas where AI may play a supporting role but where we proceed with extra caution and human oversight:
- Communications conveying leadership, values, or institutional stance. The initial drafts of these require a human voice and accountability, though AI may be useful during review.
- Performance evaluation. Evaluative assessment of colleagues and students requires human judgment, context, and care.
- Creative and curatorial work that's core to our professional identity (course content, book displays, programming), though AI may support the design of this work.
- Opting out: if you do not need an AI tool to do your work, you can choose not to use one.
Where We Do Not Use AI in Spring 2026
These are areas where we've determined AI use would conflict with our values or professional responsibilities:
- Any use involving protected personal information (HIPAA, FERPA, PII).
- Loading proprietary, library-licensed content into AI tools where our licenses prohibit or restrict it.
- AI-generated content presented as authoritative library guidance.
Talking with Vendors
As AI features appear in databases and platforms, we're developing approaches for vendor conversations:
- We ask about data practices: What user data does the AI feature collect? How is it used? Can it be turned off?
- We negotiate for transparency: Can patrons tell when they're interacting with AI? Can we communicate this clearly?
- We review license terms for AI-related clauses and track these centrally.
- We push back on features that compromise privacy or create patron-unfriendly defaults.
Communicating Our Role to Campus
We position ZSR as a thoughtful guide in a confusing landscape. As an organization, we are critically informed partners who can help people think through hard questions. We emphasize:
- The library remains a source of reliable, human-curated information
- We can help people evaluate AI-generated content and understand its limitations
- We teach information literacy skills that matter more, not less, in an AI-saturated environment
- We're here to help people think, not to replace their thinking
What This Framework Doesn't Do
This framework doesn't resolve every question. It doesn't tell you whether a specific tool is acceptable or how to handle every edge case. It's meant to be a starting point for thinking, not a substitute for judgment.
When you encounter situations this framework doesn't address, bring them to your colleagues and supervisors. Those conversations will help us learn and will inform future iterations of this guidance.