Yesterday a group of us (Lauren C., Lauren S., Thomas, Roz, Mary Beth and Susan) participated in the Surveys in Libraries webinar presented by the ACRL-ULS Evidence Based Practices Discussion Group. One of the goals for this year’s Assessment Committee is to take advantage of any educational opportunities that might help guide our assessment efforts to be more effective.
This webinar focused on using surveys to learn patrons' perceptions of whether our services are meeting their needs. Well-designed surveys can be useful for gathering this type of information; poorly designed surveys are a waste of everyone's time.
Here are a few helpful insights I gained from the session:
- Actionable surveys ask the right questions, stay focused, and are designed to gather data that can lead to action to improve processes.
- An Action Gap Survey might be a useful tool for us. In this type of survey you might select 10 services that we offer, then ask participants to choose the 3 services they think we do well, the 3 they think need improvement, and finally the 3 that matter most to them. This shows whether the things we do well are the things patrons actually value, and whether improvement efforts should be redirected when a weak service turns out not to be important to them.
- Surveys should be simple and focused. There was no *real* ideal number of questions, but the speakers agreed that less is better.
- Longer surveys tend to have a higher drop rate (think the long version of LibQual+). People get frustrated and/or bored when there are too many questions.
- There was agreement that when using the Likert scale (is that pronounced Like-ert or Lik-ert? Look it up in the OED), the ideal number of values is 5 (strongly agree, agree, neutral, disagree, strongly disagree).
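The Action Gap tally described above is easy to sketch in code. The sketch below is purely illustrative, with made-up service names and responses (the webinar did not prescribe any implementation); the useful signal is services that score high on both "needs improvement" and "important."

```python
from collections import Counter

# Hypothetical responses: each respondent picks 3 services per category.
responses = [
    {"well": ["ILL", "Chat", "Guides"],
     "improve": ["Printing", "Hours", "Website"],
     "important": ["Hours", "ILL", "Website"]},
    {"well": ["Chat", "ILL", "Hours"],
     "improve": ["Website", "Guides", "Printing"],
     "important": ["Website", "Chat", "Printing"]},
]

well, improve, important = Counter(), Counter(), Counter()
for r in responses:
    well.update(r["well"])
    improve.update(r["improve"])
    important.update(r["important"])

# The "action gap": services rated important that also need improvement
# are where effort should go first.
gaps = sorted(important, key=lambda s: (improve[s], important[s]), reverse=True)
for s in gaps:
    print(f"{s}: important={important[s]}, improve={improve[s]}, well={well[s]}")
```

With the sample data, "Website" surfaces first: two respondents called it important and two said it needs improvement, so it would be the clearest candidate for attention.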
One speaker addressed the use of commercial survey products (Counting Opinions and LibQual), another talked about adding library questions into campus-wide surveys (which we have had a little success with to date). My takeaway on commercial versus home-grown surveys is that both have a place in our assessment efforts. The commercial ones allow us to compare our services against other academic library peers/aspirationals, while locally developed surveys can help us dig down to the actionable level.
If you are interested in viewing the webinar, it is available here.
4 Comments on ‘Surveys in Libraries: ACRL-ULS Webinar’
I really like getting a point of view different from LibQual (is that LIBE-Qual or LibbQual?)
I never miss an excuse to look up something in the OED. They say /?la?k?t/ which is LIKE-et. The absence of the R probably reflects British norms for r-less-ness. I would deduce that LIKE-ert would be the prestigious American pronunciation. Thanks for an informative post!
Thanks for the write up, Susan. You’re such a tease asking people to go look it up in the OED. That’s like candy for us librarian types.
I really liked the 3-3-3 services survey. That seems like something we could do rather easily to good effect.
And since everyone else chimed in…the Likert scale is named after Rensis Likert (http://en.wikipedia.org/wiki/Rensis_Likert) and it is pronounced ‘lick-urt’ (short i)