U-M Library’s Library Search launched in late summer 2018 as a unified search application containing five previously distinct interfaces: Catalog, Articles, Databases, Online Journals, and Library Websites. This system marked a significant change for users compared to what had come before, especially with regard to the Catalog and the presentation of relevance-ranked results. An increase in user support requests after launch suggested that further exploration was needed to pinpoint user pain points.
In order to better understand how users were interacting with Library Search, we began an exploratory study involving user interviews of U-M librarians and library staff; staff at the Bentley Historical Library, Clements Library, and U-M Flint Thompson Library; and faculty and graduate students. (Note: we focused on advanced users, rather than undergraduates, because user support channels suggested advanced users had a higher rate of engagement and more challenges with the new tool.) In total, we interviewed over 50 people in winter/spring 2019 and, through that process, identified areas for continued work in Library Search.
Our Research Process
Based on the range of needs served by the system and the variation in those needs across the academic disciplines our users practice, we chose to conduct one-on-one interviews to learn about their individual experiences with Library Search. Although exploratory user research, such as interviews, is often performed at the start of a project, these techniques can also be used at the end of a project to uncover remaining issues that aren’t well defined, or to help identify unaddressed pain points, which is especially helpful in projects that “iterate” more than “end” at a specific moment. Exploratory, or generative, research is open-ended and qualitative, and is useful for helping us deeply understand user needs without the assumptions or preconceptions that can get locked in when using more structured questions or user testing tasks.
Let’s delve a bit more into user interviews as a technique. Before conducting the interviews, we developed a set of questions general enough to allow the participant to drive the conversation, but with enough direction to focus attention and keep the conversation moving. Our questions were:
- Typically what is your goal when searching in Library Search?
- Tell me about your experience using Library Search since its launch. What have you liked or not liked about using the tool?
- With changes to Library Search since launch, are you noticing any improvements in the tool? Are you noticing any worsening of the tool?
- Do you use workarounds in the current Library Search, or go outside it to other systems, to perform functions of your job? If so, what are they?
We were also prepared with follow-up questions, such as “Can you tell me more about that?” or “When you say ____, what do you mean?” These prompts were used to draw out more information when a participant touched on something only briefly.
To conduct the interviews, we met with participants in their usual workspaces and encouraged them to show us examples in Library Search as they answered each question. This approach helped us collect concrete examples and clarify what participants meant when describing specific screens or interactions. For example, when discussing catalog item records and where items were available, it was useful to bring up the site and talk through which information was helpful or distracting.
After the interviews, we collected all the observations in a Google Sheet with columns for participant (an anonymized ID), category, and comment. We quickly realized that working wholly in Google Sheets to compare and cluster comments was more mentally taxing than sorting physical cards into physical piles, so we used Mail Merge to create “cards” for each row of the Sheet, including a unique card ID number and participant ID to ensure we could match the sorted cards back to their rows in the Sheet. We then grouped similar cards together and made labels for each category. While categorizing observations, we adjusted groupings as we became more familiar with the data, finding that some categories split into narrower clusters, while others were combined when they related to a shared underlying theme.
With the cards sorted and categorized, we added these categories back to the Google Sheet where we could sort and text-search while writing up our findings. We also used a notes column to include reminders to ourselves to verify an example or to email the participant if greater detail on a specific example was needed. By using this hybrid digital/physical approach to affinity diagramming, we were able to gain a big picture view of user perspectives and keep the individual user comments at hand for review.
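If you prefer scripting to Mail Merge, the card-generation step can be sketched in a few lines of code. The snippet below is a hypothetical illustration only, not what we actually used: it assumes the Sheet has been exported as a CSV with participant_id, category, and comment columns (those names are assumptions), and writes one plain-text card per row, stamped with a card ID and participant ID so sorted cards can be traced back to their rows.

```python
# Hypothetical sketch: generate printable affinity "cards" from a CSV export
# of the observations sheet. Column names are assumed, not taken from our Sheet.
import csv

def make_cards(csv_path: str, out_path: str) -> None:
    """Write one plain-text card per observation row, stamped with a card ID
    and the anonymized participant ID so sorted cards map back to the sheet."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))  # expects participant_id, category, comment columns

    with open(out_path, "w", encoding="utf-8") as out:
        for card_id, row in enumerate(rows, start=1):
            out.write(f"Card {card_id:03d}  |  Participant {row['participant_id']}\n")
            out.write(f"Category: {row.get('category', '')}\n")
            out.write(f"{row['comment']}\n")
            out.write("-" * 40 + "\n")  # cut line between cards

make_cards("observations.csv", "cards.txt")
```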
Findings
When we reviewed our categorized observations and user comments, many themes emerged, as shown in the tag cloud above. (You can read more details in our research report.) To communicate these themes effectively, we had to find the right level at which to synthesize our findings. We grouped together problems that seemed to be driven by the same underlying issues, even if individuals had described them differently. For example, our finding about the holdings section of a catalog record contained comments about the type, number, order, and presentation of the holdings, but centered on the common problem that when users viewed a record, they were confused about which holdings we had. After identifying areas for further work, we prioritized them by how critical they were to user success.
We also began to notice patterns in how users were using Library Search as a tool—that is, what they were trying to accomplish when they conducted a search. We found that our users conducted three main types of searches:
- Searches for a specific known item with whatever details are known about it, such as words in the title or the author name
- Searches for a known set of items using a list of keywords, such as sheet music for violin concertos by Mozart
- Searches of an exploratory nature around a subject area
These different search types influenced users’ expectations about how the tool should behave. For example, when reviewing the results for a query, someone conducting a known item search may be frustrated when the item they had in mind isn’t returned at the top of the results. Those searching for a known set may be frustrated if the results are overly broad or don’t seem to reflect all the constraints they tried to set. Exploratory searchers, by contrast, are more accepting of a broad results list, likely because they already intended to scan many results and choose promising options. Different search types are best supported by different system behavior, but it’s challenging to identify which type of search a user intends from the query alone. An understanding of search types is important for understanding the other findings, and for translating them into improvements.
Outcomes
Some of our findings noted straightforward problems with clear solutions or relatively easy fixes, but others only highlighted areas of concern requiring more research; sometimes user research doesn't tell you what to do next so much as help define the problem. A few visible improvements to Library Search that were made directly in response to our user interviews include:
- Redesign of Holdings Display - We learned during interviews that users had a difficult time taking in all the information returned on a results page. To correct this, we reworked the results display to collapse the listings of locations that hold each item. Although this change added an extra click to reveal location information, this use of progressive disclosure makes results more scannable: the onscreen text is less overwhelming, and the length of records in their collapsed state is more uniform than in the original display.
- Choose Campus Affiliation - Our U-M Flint campus users reported challenges with being correctly matched to the proxy links for eresources, which in some cases differ from the resources available on other U-M campuses. Exacerbating this issue, many U-M Flint students are distance learners, for whom our initial method of relying on detection of campus IP addresses did not correctly assign a location. We added the ability for users to intentionally set their campus affiliation, which not only gave transparency to this previously hidden setting but also made it easier for staff to walk users through changing their affiliation if a problem arises (see the sketch of this fallback logic after this list).
- Reordered Filter Sets - Many people talked to us about how result filters worked, with questions ranging from how filters are named to the order in which they are presented. While the data used to create filters is drawn from the item records, we were able to reorder the filter sets to prioritize those used most often. We also opened a previously collapsed filter set called “Availability,” which, despite its vague label, provides a useful way to partition out circulating items.
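To make the campus affiliation change above concrete, here is a minimal sketch of the fallback logic it implies: an explicit, user-chosen campus setting takes priority over IP-based detection, so distance learners off the campus network still get the proxy links for their campus. The campus names, IP ranges, and proxy hosts below are placeholders for illustration, not the actual Library Search implementation.

```python
# Hypothetical sketch of affiliation resolution: prefer an explicit user setting,
# fall back to IP-based detection, then to a default campus. All values are placeholders.
from ipaddress import ip_address, ip_network
from typing import Optional

CAMPUS_NETWORKS = {
    "flint": [ip_network("192.0.2.0/24")],        # placeholder ranges, not real U-M networks
    "ann_arbor": [ip_network("198.51.100.0/24")],
}

PROXY_PREFIX = {
    "flint": "https://proxy.flint.example.edu/login?url=",
    "ann_arbor": "https://proxy.aa.example.edu/login?url=",
}

def detect_campus(client_ip: str) -> Optional[str]:
    """Best-effort guess from the request IP; fails for off-campus users."""
    addr = ip_address(client_ip)
    for campus, networks in CAMPUS_NETWORKS.items():
        if any(addr in net for net in networks):
            return campus
    return None

def resolve_campus(user_setting: Optional[str], client_ip: str, default: str = "ann_arbor") -> str:
    """Prefer the user's saved affiliation; fall back to IP detection, then a default."""
    return user_setting or detect_campus(client_ip) or default

def proxied_url(resource_url: str, campus: str) -> str:
    """Prefix an e-resource link with the campus-appropriate proxy."""
    return PROXY_PREFIX[campus] + resource_url

# Example: a Flint distance learner on a home connection, with affiliation set explicitly.
campus = resolve_campus(user_setting="flint", client_ip="203.0.113.7")
print(proxied_url("https://example.com/article", campus))
```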
In addition to these more visible changes to the user interface, many other changes were made to the relevance ranking of results, the information displayed, and the general performance of the system. Work on Library Search is continuing, as is our user research, and we appreciate all those who have contributed to deepening our understanding of user needs and pain points.
(Submitted by Robyn Ness and Katherine King.)