What We Learned from the 2022 Library Search Benchmarking Survey

Introduction

In autumn 2022, U-M Library’s UX + Design Team performed a benchmarking study to measure user experience with our Library Search discovery layer. Our study was modeled on a survey designed by Harvard Library’s User Research Center, using a standardized questionnaire and several open-response questions to measure user experience. This survey design allows us to repeat the study over time and compare the results, or even compare results with other discovery systems that have used the same questionnaire.

NOTE: If you are affiliated with another library and would like to run a version of this survey for your campus, please reach out. We look forward to growing our opportunities for data comparison and to learning from each other.

Study Objectives 

The objectives of our study were to:

  1. Measure a baseline of user satisfaction and other key behavioral and attitudinal metrics from Library Search users that we could compare over time,
  2. Learn which audience segments (role, discipline, etc.) are better-served or underserved in our Library Search application, and
  3. Compare results with the similar study by Harvard Library completed in fall 2021, which serves a similar audience but uses a different library resource discovery platform.

Implied in our first objective is the intention of repeating the survey over time, which we hope to do next in autumn 2024. Although we have performed many user studies for Library Search in the past, those studies were designed around very specific research questions or targeted features under development rather than taking a broader view to systematically understand Library Search experience as a whole.

In addition, we were intrigued by the opportunity to compare with results from other institutions’ search applications. Our colleagues at Harvard who devised this survey method were also curious about our findings, as many larger academic libraries continue to explore the trade-offs between vendor-provided search tools, such as ExLibris’ Primo, and locally designed systems, such as U-M’s Library Search.

Survey Questions and Responses 

Demographics and Reported Library Search Usage

We distributed the survey through our network of subject librarians as well as directly to those campus members who had previously signed up to participate in library studies.

We had 435 total survey participants. 89% of all participants reported that they had used Library Search and 11% reported that they had not. The segments with the highest shares of participants who said they had not used Library Search were undergraduates, University staff, and master's students (each accounting for under 3% of all participants).

The largest participant segments by role were:

  • (20%) Employed by a Library, Museum, or Archive 
  • (19%) PhD, Post Doc, or Fellow
  • (16%) Master's students
  • (16%) Undergraduate students
  • (14%) Faculty member or instructor
  • (6%) University staff

90% of survey participants indicated they were affiliated with the Ann Arbor campus, 9% with the Flint campus, and less than 1% with the Dearborn campus. Of the participants affiliated with Ann Arbor, about one third reported affiliation with LSA, just over 20% with a library, archive, or museum, and approximately 18% with the School of Information.

We asked participants about their frequency of use of Library Search, and 64% of respondents said they use Library Search once a week or more. Respondents who said they use Library Search daily were evenly split between those employed by a library, museum, or archive and those who were students, faculty, or University staff.

SUPR-Q Questions

The SUPR-Q (short for Standardized User Experience Percentile Rank Questionnaire) is a standardized set of questions to assess usability, trust, appearance, and loyalty for digital platforms and services. By employing a percentile ranking system, the SUPR-Q provides a clear benchmark, allowing comparison to other websites and applications, or comparison of the same site over time.

The questions and average ratings from our survey are listed below. 

  • Library Search is easy to use. (Average: 3.87)
  • It is easy to navigate within Library Search. (Average: 3.75)
  • I feel comfortable finding library materials on Library Search. (Average: 3.97)
  • I feel confident finding library materials on Library Search. (Average: 3.81)
  • I will likely visit Library Search in the future. (Average: 4.54)
  • I find Library Search to be attractive. (Average: 3.43)
  • Library Search has a clean and simple presentation. (Average: 3.69)
  • I am satisfied using Library Search to find library resources. (Average: 3.68)

We also calculated the average ratings by type of user (undergraduate, graduate student, faculty/researcher, and library employee), which you can view in the full report linked at the end of this blog post.
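For readers running their own version of this benchmark, the per-prompt averages and the percent-agreement figures used in the comparison below can be computed from raw 5-point Likert responses along these lines. This is a minimal sketch: the response values are illustrative, not our actual survey data.

```python
from statistics import mean

# Illustrative 5-point Likert responses for one SUPR-Q prompt
# (1 = strongly disagree, 5 = strongly agree); made-up values, not survey data.
responses = [5, 4, 4, 3, 5, 2, 4, 4, 3, 5]

# Average rating, as reported per prompt (e.g. "Average: 3.87").
avg = round(mean(responses), 2)

# Percent agreement: share of respondents choosing 4 or 5, the
# top-two-box convention behind figures like "over 70% agreed".
pct_agree = round(100 * sum(1 for r in responses if r >= 4) / len(responses))

print(avg, pct_agree)  # → 3.9 70
```

Averages summarize the full rating scale, while percent agreement makes it easy to compare headline figures across institutions, which is how the Harvard comparison below is framed.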

Comparison with Harvard’s Results

We compared our survey results with Harvard's and found a striking degree of similarity in people's responses. Survey participants in both studies showed nearly identical agreement (within 2 percentage points) in the following SUPR-Q areas.

  • Over 70% of respondents at both libraries agreed that the respective Search interfaces were easy to use, that they were comfortable finding materials, that they were confident finding materials, and that they were likely to visit the Search interface in the future.
  • 64% of participants from both libraries agreed that the Search interfaces had a clean and simple presentation.  

The two SUPR-Q areas where Harvard and the University of Michigan’s survey responses diverged the most were the following prompts: 

  • I find it (Search interface) to be attractive - 8 percentage points more respondents at Harvard agreed that they found the Search interface attractive (54%) than at U-M (46%).
  • I’m satisfied using it (Search interface) to find library materials - 10 percentage points more respondents at Harvard agreed that they were satisfied (77%) than at U-M (67%).

With such high levels of agreement in so many categories, one of our questions coming out of the survey was: why are reported levels of satisfaction and attractiveness so different between Harvard’s Search interface and ours? Our working assumptions are that our interface is nearly 6 years old and due for a design uplift, and that Harvard, which uses the Primo interface, has had more time to adjust and mature its cataloging and vendor data with Alma (the underlying resource description system). We’re excited to see how these scores change in the future.

Open-Response Questions

Our benchmark survey had seven open-ended questions or prompts that allowed participants to share how they felt about different aspects of their experience with Library Search. For example, one prompt asked survey participants to tell us what they wished worked differently about Library Search, and another asked them to share what they thought worked well. We received over 1,000 total responses to our seven questions/prompts. Some of the most interesting themes concerned the parts of Library Search people liked and the aspects where they wanted to see improvements.

Themes participants liked about Library Search included:

  • Clean and organized design and results display,
  • Intuitive, easy-to-use interface, and
  • A good starting point, with a large number of results from a variety of sources that can be filtered down to the most specific and promising results.

Aspects of Library Search people wished worked differently included:

  • Fewer “duplicate” search results, which encompasses multiple bibliographic records for the same item or multiple access links for the same item. 
  • Easier access to electronic resources in the catalog and in articles results, including a clearer path to electronic full text regardless of different vendor interfaces, databases, etc. 
  • Removal of links that didn’t lead to full-text materials, especially HathiTrust search-only records.
  • A way to filter to only physical materials in the Library Search catalog, which would complement the existing filter for materials available online.
  • Improved search indexing and relevance for topical or known item searches. 
  • More tolerance for spelling errors, typos, etc. in searching. 

Outcomes and Follow-up Projects 

Although we positioned this research as a benchmarking study, we found that the responses were a goldmine for understanding pain points for Library Search users. As we performed analysis, clear themes emerged to help us set priorities for changes within Library Search. Some of the top changes we felt confident moving forward with after the survey were:

  • Implementation of direct-to-article linking via the LibKey Direct-to-PDF API in Articles Search
  • Investigating and refining the kinds of results we show in Library catalog with a focus on physical materials we own and electronic materials we own or license
  • Additional research and design to improve people’s experience distinguishing between, and filtering to, physical or electronic materials in the catalog
  • Improved “Get This” functionality for more consistent and reliable experience
  • Awareness of the heavy use of Everything search results and planning for a redesign


Additional details can be found in the full report: Findings: 2022 U-M Library Search Benchmarking Survey