How Undergraduates Evaluate Sources

During the 2019-2020 academic year, the IMLS-funded Library Assessment in Student Learning Lab designed and conducted a survey and a set of role-playing interviews to better understand undergraduate information literacy strategies and skill sets before and after librarian-led instruction sessions held during first-year writing courses.

The survey asked students to rate their confidence in their ability to determine source credibility, to decide whether one of three randomly selected sources was appropriate for use in a research paper, and to justify that decision in an open-ended response. After collecting data from over 300 unique respondents, we followed the survey with in-person interviews with 22 students, each built around a role-playing scenario, to gain further insight into students' thought processes when analyzing and assessing online information.

As a student member of the Library Assessment in Student Learning Lab, I spent the 2019-2020 school year collecting data for this project. When the Winter semester ended, I continued working with the Lab in a data analysis capacity, assessing the survey and interview data with the specific aim of understanding how students think and learn about information in relation to their demographic background, grade received in the course, confidence, and source evaluation strategy. As I learned new assessment skills during this analysis stage, I identified several themes. The Lab Team hopes these themes can help us develop evidence-based, student-responsive teaching practices for the information literacy instruction sessions librarians conduct as part of various university courses; personally, I hope these assessment skills will help me become an effective early-career academic librarian.

In an effort to explore potential influences on the information literacy practices of surveyed undergraduates, we cross-analyzed our data with information from both the Learning Analytics Architecture (LARC) and the Michigan Data Warehouse, with the help of Assessment Specialist Craig Smith. This demographic data became increasingly relevant as I sought to characterize the population that took our survey (and the students we subsequently interviewed), and to understand whether and how any patterns emerged in conjunction with the race, sex, course grade, program, or year of the surveyed students.
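To give a rough sense of what this kind of cross-analysis looks like in practice, here is a minimal sketch in Python of joining survey responses to demographic records on a shared student identifier. The file-free toy data, column names (`student_id`, `phase`, `course_grade`, and so on), and use of pandas are my own assumptions for illustration; the Lab's actual LARC and Data Warehouse extracts are not public.

```python
# Minimal sketch: attaching demographic records to survey responses.
# All data and column names are hypothetical stand-ins.
import pandas as pd

# Hypothetical survey export: one row per response.
survey = pd.DataFrame({
    "student_id": [101, 102, 103],
    "phase": ["pre", "pre", "post"],
    "confidence_credible": [3, 4, 5],  # 1-5 Likert rating
})

# Hypothetical demographic extract keyed on the same identifier.
demographics = pd.DataFrame({
    "student_id": [101, 102, 103],
    "year": ["first-year", "first-year", "sophomore"],
    "course_grade": ["B+", "A-", "A"],
})

# A left join keeps every survey response and attaches
# demographics wherever a matching student record exists.
merged = survey.merge(demographics, on="student_id", how="left")
print(merged)
```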

Given our sample size, we wanted to be cautious not to draw firm conclusions about any patterns or themes we saw, but rather to understand the potential implications for teaching practice that we could glean from the analysis. We also kept at the forefront of our minds that some demographic groups were underrepresented in our data, and that firm conclusions about these groups should likewise be avoided until we can, hopefully, gather larger samples in future research. As a novice professional researcher, I have developed a cautious mindset around assessment and instruction that I will carry into future projects.

After establishing demographic details for each survey participant, we began using cross-tabs to look at the ways in which themes, patterns, and data stories emerged. This meant examining the data from many different angles, comparing various combinations of survey data, interview data, and demographic data to get a clearer picture of how students explore, assess, and feel about information. While we cross-analyzed many data points that together informed a full picture of how students understand source evaluation, I've chosen to highlight a few of my favorite analytical surprises, frustrations, and implications.
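For readers curious what a cross-tab looks like concretely, here is a minimal sketch using pandas. The two variables shown (instruction phase versus a student's judgment of a source) are hypothetical stand-ins for the many combinations we actually examined.

```python
# Minimal sketch: cross-tabulating two categorical variables.
# The responses below are invented for illustration only.
import pandas as pd

responses = pd.DataFrame({
    "phase": ["pre", "pre", "post", "post", "post", "pre"],
    "source_judgment": ["appropriate", "not appropriate", "appropriate",
                        "appropriate", "not appropriate", "appropriate"],
})

# Counts of each judgment within each phase, plus row/column totals.
table = pd.crosstab(responses["phase"], responses["source_judgment"],
                    margins=True)
print(table)
```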

One of the first places we started was exploring possible connections between students' confidence in their ability to assess a source and the other data points we had for each student. The survey asked students to rate their confidence in their own ability to determine both the credibility of a source and the scholarliness of a source. The questions read:

“I can tell if a source I find is credible” 

and 

“I can tell the difference between scholarly and non-scholarly articles”

Students then rated their agreement with each statement, from "strongly disagree" to "strongly agree." We found that the increase in confidence from pre- to post-instruction surveys was statistically significant for both questions, pointing to the potential impact instruction has on bolstering students' faith in their own source evaluation abilities.
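This post doesn't specify which test the Lab used, but for paired pre/post Likert-scale responses a Wilcoxon signed-rank test is one common choice. The sketch below, with invented ratings, shows what such a check might look like; it is not a reproduction of our actual analysis.

```python
# Minimal sketch: testing whether post-instruction confidence ratings
# exceed pre-instruction ratings for the same students.
# The 1-5 Likert scores below are invented; the Lab's actual test
# and data are not specified in this post.
from scipy.stats import wilcoxon

pre = [3, 2, 4, 3, 3, 2, 4, 3]
post = [4, 3, 4, 4, 5, 3, 5, 4]

# One-sided test: are the post - pre differences shifted above zero?
stat, p_value = wilcoxon(post, pre, alternative="greater")
print(f"statistic={stat:.1f}, p={p_value:.4f}")
```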

This increase in confidence exemplified a surprising thread running through our data: how students feel about their source evaluation skills. Across a range of activities, including participating in librarian-led instruction, role-playing as a librarian, and articulating their evaluation process in different ways, students' perceptions of their own skills changed. Although this is a very general observation, the implication for practice is clear: intervening activities that focus on skill-building, articulation of the evaluation process, and information literacies do indeed influence students' feelings about their own evaluation skills.

Another of our survey questions asked respondents to review a source and then respond to this:

“How do you feel about using this source in a research paper?”

Students could answer positively or negatively, but either way they had to justify their reasoning in an open-ended response, which our Lab Team then coded.

I found that the ways in which students described their evaluation process were fairly uniform, with several key criteria rising to the top regardless of the source evaluated, the determination of scholarly appropriateness, pre- or post-instruction timing, or demographic factors. This is particularly interesting given the significant confidence change from pre- to post-instruction surveys: although confidence in evaluation ability increased, the ways students actually described and evaluated source material stayed essentially the same. For instance, students relied heavily on preconceived notions of the credibility of publishing organizations when evaluating a source, regardless of whether they took the survey before or after instruction.

As this pattern of changing feelings but unchanging actions emerged, I kept returning to one question: how do we bolster students' confidence in their source evaluation abilities while also helping shift those actual abilities toward a more critical approach? This deserves deeper exploration, but our research and my subsequent analysis have suggested a few ways that instructional librarians can enhance the efficacy of their teaching:

  1. When appropriate, emphasize the evaluation and possible use of credible non-scholarly sources in research papers 

When deeming a source inappropriate, students often seemed to prioritize the non-scholarliness of certain source types (like news articles or blogs) over actually evaluating the evidentiary content of the source.

  2. Indicate the necessity of analyzing content along with container

Students were also quick to emphasize the container (e.g., information format or publishing organization) rather than the content itself in their evaluative process, especially in pre-instruction data. Librarians should continue to emphasize analyzing source content and context (looking for bias, adequate evidence, etc.).

  3. Student confidence shifts more quickly than actual skills

Keeping this in mind can help librarians keep perspective on the instructional "big picture": a single session may move confidence right away, while evaluation skills develop more gradually.

In such a tumultuous environment, amidst a pandemic-induced digital semester, a presidential election season, and monumental civil action against injustice, it has been very rewarding to learn more about how librarians can facilitate critical evaluative mindsets in undergraduates. As we continue to analyze our data, I hope we can draw an even fuller picture of student perceptions, feelings, and actions surrounding source evaluation, and that this picture can inform future library instruction practices.

Submitted by Julia Maxwell (juliamax@umich.edu).