
Using Course Evaluations as Early Indicators of Student Success


End-of-term course evaluations are a nearly universal tool for gathering student feedback about courses and instructors in higher education. Their results inform curriculum development, teaching improvement, and promotion and tenure decisions, and the research literature on evaluations tends to focus on those outcomes as a result. While much has been studied about the fairness of evaluation ratings in tenure processes and the interaction between course difficulty and evaluation responses, there is less research on how evaluations reflect student behavior. With newly available data from Watermark’s Course Evaluations & Surveys and Student Success & Engagement solutions, evaluation data may help inform institutional outreach strategies, enabling success coaches and advisors to engage specific populations before they stop attending. This approach uses the student voice directly to understand student outcomes, combining the strategies and frameworks of institutional effectiveness and student success departments.

Course evaluation responses as a proxy for student engagement

Since response rates for these evaluations are very rarely 100%, the data is missing the perspectives of students who do not respond to course evaluations. These “missing” data points were treated as a proxy for a lack of engagement with the institution. The hypothesis was that students who are more engaged with the culture and faculty of the institution are more likely to give feedback and thus, regardless of the scores in their evaluation responses, more likely to have successful outcomes. From this, it was decided to review whether future outcomes differed between students who historically had not responded to evaluations and those who had.

Data from Watermark’s Course Evaluations & Surveys and Student Success & Engagement solutions were joined to acquire the information needed.
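As a minimal sketch of that join, assuming hypothetical exports and column names from each system (the actual schemas are not specified in this report), the two datasets could be tied together on the institution-provided email address and academic term:

```python
import pandas as pd

# Hypothetical export from Course Evaluations & Surveys:
# one row per student-course evaluation invitation.
ces = pd.DataFrame({
    "email": ["a@school.edu", "b@school.edu", "a@school.edu"],
    "term": ["2021FA", "2021FA", "2022SP"],
    "responded": [True, False, False],
})

# Hypothetical export from Student Success & Engagement:
# one row per student-term enrollment with outcome fields.
sse = pd.DataFrame({
    "email": ["a@school.edu", "b@school.edu", "a@school.edu"],
    "term": ["2021FA", "2021FA", "2022SP"],
    "persisted": [True, False, True],
})

# Join the two systems on the unique, institution-provided email
# address plus the academic term.
joined = ces.merge(sse, on=["email", "term"], how="inner")
print(joined)
```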

Defining success

For the purposes of this research, student persistence and course completion rates were used as the two success outcomes, defined below (a short code sketch follows the list).

  • Persistence – Often called term-to-term retention; the percentage of students who graduate, earn a credential, or attempt credit in at least one academic term within six months of the term being evaluated.
  • Course completion – The percentage of students who earn an A, B, C, P, or S in the enrolled course. D grades are excluded because they do not represent adequate academic progress toward a degree, even if they are considered “passing.”
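As an illustration of the course completion definition above, the completion flag could be derived from grade records as follows (a sketch using hypothetical column names):

```python
import pandas as pd

# Hypothetical grade records: one row per student-course enrollment.
grades = pd.DataFrame({
    "email": ["a@school.edu", "b@school.edu", "c@school.edu"],
    "grade": ["A", "D", "P"],
})

# A, B, C, P, and S count as completion; D is excluded even though
# it may be considered "passing," per the definition above.
COMPLETING_GRADES = {"A", "B", "C", "P", "S"}
grades["completed"] = grades["grade"].isin(COMPLETING_GRADES)

completion_rate = grades["completed"].mean()  # share of enrollments completed
print(f"Course completion rate: {completion_rate:.1%}")
```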

Defining the data

The dataset consisted of partner institutions that agreed to allow data from both Course Evaluations & Surveys and Student Success & Engagement to be tied together for research purposes, and that had more than 3 years of historical data in each system. Only data from after 2020 that could be tied together from both systems was included, limited to students whose records could be linked through their unique, institution-provided email addresses.

This provided a cohort of 9 institutions (two 4-year and seven 2-year), each with 3 years of data in both systems.

Within this cohort, three sets of data were used for each term and course a student was enrolled in (for persistence and course completion tracking, respectively). The three datasets tracked were:

  • The response rates for the first term a student was enrolled in at the institution
  • The response rates for the term immediately preceding the currently enrolled term
  • The cumulative response rates for all the previous terms the student was enrolled in

From there, the dataset was simplified by placing each student into one of two cohorts (a code sketch follows the list):

  • Those who responded to none of the evaluations in the given time period
  • Those who responded to at least one of the evaluations in the given time period
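A simplified sketch of the cohort assignment for the previous-term window follows, using hypothetical column names; the first-term and cumulative variants differ only in which rows are aggregated:

```python
import pandas as pd

# Hypothetical evaluation invitations: one row per student-course evaluation.
evals = pd.DataFrame({
    "email": ["a@x.edu", "a@x.edu", "b@x.edu", "b@x.edu"],
    "term_index": [1, 1, 1, 1],  # ordinal position of the term for the student
    "responded": [True, False, False, False],
})

# Previous-term window: response rate in the term before the current one.
current_term = 2
prev = evals[evals["term_index"] == current_term - 1]
rates = prev.groupby("email")["responded"].mean()

# Simplify into the two cohorts described above:
# zero responses vs. at least one response in the window.
cohort = rates.gt(0).map({True: "responded_at_least_once", False: "responded_none"})
print(cohort)
```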

After students were grouped into these cohorts, the following trends in the overall success rates of each cohort were observed.

The above table shows a significant gap in the success rates of students under this proxy for engagement, which supports the initial hypothesis: engaged students who are passing classes and planning to continue their education are also those more likely to spend the time to respond to evaluations.

Because this data uses only inputs available before the start of a term, it can serve as a leading indicator of risk to course completion and persistence in any following term.
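Because the cohort label depends only on inputs available before the term starts, it can be translated directly into a pre-term outreach list. A minimal sketch, using hypothetical response rates in the shape produced by the cohort example above:

```python
import pandas as pd

# Hypothetical previous-term response rates per student.
rates = pd.Series({"a@x.edu": 0.5, "b@x.edu": 0.0})

# Students with zero responses become the pre-term outreach list for
# success coaches and advisors, before any grades or attendance exist.
outreach_list = rates[rates == 0].index.tolist()
print(outreach_list)  # ["b@x.edu"]
```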

The first-year engagement gap

Next, previous-term response rates were selected for further analysis, and interaction effects with other data on the SS&E platform were explored. Selecting only the previous-term data made it possible to evaluate new student behavior, analyze relatively recent student data, and work with a larger percentage of non-responding students than the cumulative response rate data provides.

The following populations were then selected and analyzed independently: enrollment type (dual enrollment, graduate, and undergraduate), race, ethnicity, gender, gender and minority pairing, enrollment status (part-time vs. full-time), terms completed, institution type, currently enrolled course type, FAFSA demographics, and age. Of these, the largest interactions were found with student population, enrollment status, terms completed, and institution type.
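One way such interaction effects could be examined, assuming hypothetical demographic fields joined onto the cohort labels, is to compute the persistence rate by cohort within each population segment; the spread between the two cohort columns is the engagement gap for that segment:

```python
import pandas as pd

# Hypothetical student-term records with cohort labels and attributes.
df = pd.DataFrame({
    "cohort": ["responded_none", "responded_at_least_once"] * 4,
    "enrollment_status": ["Part-Time", "Part-Time", "Full-Time", "Full-Time"] * 2,
    "persisted": [False, True, False, True, True, True, False, True],
})

# Persistence rate by cohort within each segment.
gap_table = df.pivot_table(
    index="enrollment_status", columns="cohort",
    values="persisted", aggfunc="mean",
)
print(gap_table)
```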

Impact of course evaluation responses on term persistence for identified student populations

Impact of course evaluation responses on course completion for identified student populations

As shown above, the success gap is very large for new students. One potential explanation is that less engaged students are less likely to complete course evaluations in their first enrolled term, typically a fall term. The following spring, due to a lack of engagement at the institution, their plans to continue and finish their education are less concrete, so they complete courses at a lower rate and are less likely to enroll in the next fall term’s classes.

This story is only one potential scenario to explain the gap in first-year attainment between these two groups. More qualitative analysis would have to be done to confirm it; however, the gap remains large regardless. Whatever the scenario, an intervention is needed to close that gap.

Potential responses as a leading indicator

The above indicators are only useful if they can lead to action. In the current research, data is unavailable on which types of interventions are most effective, and future study into the effectiveness of different types of interventions is necessary. Example interventions that can be explored with this type of data include, but are not limited to:

  • Interviews with non-responding students
  • Alerts sent out to instructors
  • Proactive engagement from an institution’s Tutoring services department
  • Course/career counseling reaching out to identify the classes needed for graduation after the first year
  • First-year resources prioritizing students who show signs of disengagement

Future areas of study

Using course evaluations to study gaps in student success is not as deeply explored as the effects of course evaluations on faculty behavior and curriculum building. The research described here opens a new way of using a combined dataset to further these efforts. Furthermore, using the quantitative values within the evaluations could allow finer grouping of students, perhaps showing an even larger gap between students who do not respond to evaluations and those giving different types of responses. AI and sentiment analysis could also be used to determine whether the qualitative feedback within course evaluations contains indicators of risk to student outcomes. The research areas outlined in this paper are just the beginning. Watermark looks forward to working with our clients to learn more about how course evaluation insights can be applied to student success outreach efforts, and to making that work available to all of higher education to improve overall student outcomes.

Sources:
  • Stroebe, W. (2020). Student evaluations of teaching encourages poor teaching and contributes to grade inflation: A theoretical and empirical analysis. Basic and Applied Social Psychology, 42(4), 276–294. https://doi.org/10.1080/01973533.2020.1756817
  • Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22–42.
  • Chiu, Y.-L., Chen, K.-H., Hsu, Y.-T., & Wang, J.-N. (2019). Understanding the perceived quality of professors’ teaching effectiveness in various disciplines: The moderating effects of teaching at top colleges. Assessment & Evaluation in Higher Education, 44(3), 449–462. https://doi.org/10.1080/02602938.2018.1520193
  • Simonson, S. R., Earl, B., & Frary, M. (2022). Establishing a framework for assessing teaching effectiveness. College Teaching, 70(2), 164–180. https://doi.org/10.1080/87567555.2021.1909528