Yesterday a group of us (Lauren C., Lauren S., Thomas, Roz, Mary Beth, and Susan) participated in the Surveys in Libraries webinar presented by the ACRL-ULS Evidence Based Practices Discussion Group. One of this year’s Assessment Committee goals is to take advantage of educational opportunities that can make our assessment efforts more effective.
This webinar focused on using surveys to learn how patrons perceive whether our services are meeting their needs. Well-designed surveys are useful for gathering this kind of information; poorly designed surveys waste everyone’s time.
Here are a few helpful insights I gained from the session:
- Actionable surveys ask the right questions, stay focused, and are designed to gather data that can lead to action to improve processes.
- An Action Gap Survey might be a useful tool for us. In this type of survey, you select, say, 10 services you offer, then ask participants to choose the 3 services they think we do well, the 3 they think need to be improved, and finally the 3 that are most important to them. This reveals whether the services we do well are the ones patrons actually value, and whether a service flagged for improvement is important enough to warrant directing our efforts there.
- Surveys should be simple and focused. There was no *real* ideal number of questions, but the speakers agreed that fewer is better.
- Longer surveys tend to have a higher drop-off rate (think of the long version of LibQUAL+). People get frustrated and/or bored when there are too many questions.
- There was agreement that when using a Likert scale (is that pronounced Like-ert or Lik-ert? Look it up in the OED), the ideal number of response options is five (strongly agree, agree, neutral, disagree, strongly disagree).
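To make the Action Gap idea concrete, here is a minimal sketch of how the responses might be tallied. The service names and response data are entirely hypothetical, and the prioritization rule (important services that many patrons also flagged for improvement rise to the top) is just one plausible way to read the gap:

```python
# Hypothetical tally of Action Gap Survey responses.
# Each response records the 3 services a participant picked per category.
from collections import Counter

responses = [
    {"well": ["ILL", "Chat", "Instruction"],
     "improve": ["Website", "Hours", "Printing"],
     "important": ["Website", "ILL", "Hours"]},
    {"well": ["ILL", "Instruction", "Study Rooms"],
     "improve": ["Website", "Printing", "Chat"],
     "important": ["Website", "Study Rooms", "ILL"]},
    {"well": ["Chat", "ILL", "Hours"],
     "improve": ["Website", "Study Rooms", "Printing"],
     "important": ["ILL", "Website", "Printing"]},
]

well, improve, important = Counter(), Counter(), Counter()
for r in responses:
    well.update(r["well"])
    improve.update(r["improve"])
    important.update(r["important"])

# The "action gap": services patrons rate as important AND needing
# improvement are the highest-priority targets; important services we
# already do well are strengths to maintain.
priorities = sorted(important,
                    key=lambda s: (improve[s], important[s]),
                    reverse=True)
for service in priorities:
    print(f"{service}: important={important[service]}, "
          f"improve={improve[service]}, well={well[service]}")
```

With this toy data, "Website" tops the list because everyone called it both important and in need of improvement, while "ILL" is a strength: important, done well, and flagged by no one for improvement.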
One speaker addressed the use of commercial survey products (Counting Opinions and LibQUAL+); another talked about adding library questions to campus-wide surveys (something we have had a little success with to date). My takeaway on commercial versus homegrown surveys is that both have a place in our assessment efforts: the commercial ones let us compare our services against academic library peers/aspirationals, while locally developed surveys help us dig down to the actionable level.
If you are interested in viewing the webinar, it is available here.