Library Gateway Usability Testing

The Usability Group & its Usability Task Force conducted a series of evaluations of the Library Gateway during the Fall 2009 and Winter 2010 semesters, using a number of different methods, some of them new to us.

Participatory Design

This method was designed to give us a better understanding of which parts of the Gateway users find most and least useful, and to help inform our follow-up evaluations. (Discussed more fully in a later post.)

Card Sorting

This method was designed to help us re-categorize content currently grouped under Services, Departments and Libraries. For the card sorting, we purchased a license for OptimalSort, which allowed us to put the card sorting exercise in front of many individual users. We sent the exercise to all of our Library staff and received 104 responses, an excellent rate of return.

We also ran group card sorting sessions, a new method for us, with undergraduates and graduate students. Groups of up to 5 people sorted paper cards into categories by consensus. Several similarities between categories surfaced across the various user groups, whether they sorted paper cards or used the online tool:

  • Physical Locations: libraries and/or services with a physical location and hours of operation.
  • Publishing: MPublishing, SPO and University of Michigan Press.
  • Services: a broad category used by all groups, ranging from getting help with library resources to internal services for library staff.
  • Administration: background support for library staff or, as one student said, "Stuff that students wouldn’t necessarily need."

As a group, the Task Force then came up with "unified" categories that carried the general scope of the categories our participants suggested, based both on the categories the participants created and on the comments they made during the card sort. Both the similar groupings and the "unified" categories were suggested as bases for further tests.
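To make the aggregation step concrete, here is a minimal sketch of one way to surface agreement across individual sorts: count how often each pair of cards lands in the same pile. The card names and sort data below are hypothetical, and this is an illustration only, not the exact procedure we followed or the format OptimalSort produces.

    from itertools import combinations
    from collections import Counter

    # Hypothetical results: each participant's sort maps a category label
    # (in the participant's own wording) to the cards placed in it.
    sorts = [
        {"Places": ["Hatcher Library", "Askwith Media Library"],
         "Help": ["Ask a Librarian", "Interlibrary Loan"]},
        {"Buildings": ["Hatcher Library", "Askwith Media Library"],
         "Services": ["Ask a Librarian", "Interlibrary Loan", "Course Reserves"]},
        {"Getting help": ["Ask a Librarian", "Course Reserves"],
         "Locations": ["Hatcher Library"]},
    ]

    # Count how often each pair of cards was grouped together.
    co_occurrence = Counter()
    for sort in sorts:
        for cards in sort.values():
            for pair in combinations(sorted(cards), 2):
                co_occurrence[pair] += 1

    # Pairs grouped together by most sorters point toward a shared category.
    for pair, count in co_occurrence.most_common(5):
        print(f"{pair[0]} + {pair[1]}: grouped together in {count} of {len(sorts)} sorts")

Pairs with high counts (here, the two library buildings, or the two help services) are the kind of signal we looked for when drafting the "unified" categories.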

Guerrilla Tests

This method was designed a) to help determine the order of the headings on our search results and browse results pages, and b) to fine-tune the contents & labels for our Quick Links section. We have used this method for many years. We call this "guerrilla testing" because we hope to get quick and short answers to quick and short questions. Five minutes is our goal!

For the search and browse results pages, we found that the section labels were confusing and inconsistent across the results templates, and that there was not enough metadata available for users to make informed choices. Participants in our guerrilla tests also wanted to see sections in a different order (e.g., Databases before Catalog). Our recommendations were to add more metadata to the catalog results (e.g., author, publication information, format) and to change the order on the results pages according to participant consensus.

For the Quick Links section, we found that our Library Outages link (which reports when databases are down or not working correctly) was neither understood nor considered useful within this section. More than half of the participants also requested the addition of a University-wide Webmail link. The Quick Links section was modified to take into account what we heard from participants. Full reports of the evaluations are available.

We were also fortunate enough to have a poster accepted at ALA Annual 2010 detailing our year's work: "Budget Usability without a Usability Budget". Many thanks to the Task Force project managers, Kat Hagedorn & Ken Varnum, and the group members, Gillian Mayman, Devon Persing, Val Waldron, Sue Wortman, and Karen Reiman-Sendi, for all their hard work!