Assessment in higher education seeks to understand what students learn and what they get from the larger experience that wraps around that learning: life in residence halls, study-abroad programs, internships, and student clubs. But for many faculty members, assessment is a word with drab connotations, associated with spreadsheets, long forms to fill out, and rigid standards not under their control.
Assessment proponents say the process doesn’t have to be painful. They say assessment can be a chance for reflection led by faculty members on what they are trying to teach, what students are actually learning, and how to map a road through the curriculum so students’ educational journeys leave them with the knowledge and skills needed for employment and life writ large.
As higher education comes under increasingly fierce attack from parents and policymakers about its value, assessment professionals say they have the key tools to prove the worth of college degrees.
Given the central role of assessment, The Chronicle of Higher Education, with support from Watermark, conducted a national survey about the topic with administrators and faculty members at two- and four-year institutions. The survey was conducted from March 25 to April 8, 2024, and 871 people responded, with an almost-even split between administrators and faculty members. Additional research and more than a dozen follow-up interviews helped provide fresh insights on the role of assessment on college campuses today.
The Chronicle survey found generally positive views of assessment at colleges but also revealed a sharp divide between faculty members’ and administrators’ views of the process.
Seventy-three percent of administrators agreed or strongly agreed with the statement that assessment develops “data that is insightful and helpful to the institution, including individual departments or academic units.” But only 52 percent of faculty members agreed or strongly agreed with that statement.
Sixty-two percent of administrators believed assessment had improved at their institution over the last five years, while only 48 percent of faculty members felt that way.
In follow-up interviews, academics had either a “glass-half-full” or “glass-half-empty” view of the survey’s results. Some said that assessment had come a long way in winning over faculty members in the last five or 10 years and lauded the largely positive views of it in the survey. Other interviewees, particularly administrators, were disappointed that more faculty members are not seeing assessment in a positive light.
“Faculty tend to see assessment as a bureaucratic nightmare,” says Laura Palucki Blake, assistant vice president for institutional research and effectiveness at Harvey Mudd College, in Claremont, Calif., “and it just doesn’t have to be that way.”
“In an ideal world,” she adds, “assessment is a collaborative inquiry into student learning — and I think both administrators and faculty members could get behind that.”
Part of the faculty-administrator rift appears to be caused by poor communication and a lack of knowledge among academics about the state of assessment at their institutions. Many survey questions had a high proportion of administrators or faculty members who said they were “unsure” about their response. Thirty-one percent of faculty members said they were “unsure,” for instance, if administrators at their institution found assessment outcomes to be insightful and helpful.
Likewise, 25 percent of administrators were unsure if faculty members found assessment outcomes to be insightful and helpful. Twenty-two percent of administrators and 17 percent of faculty members weren’t sure if assessment had improved at their institution over the last five years.
Commenting on the results, Kate Drezek McConnell, vice president for curricular and pedagogical innovation at the American Association of Colleges and Universities, says: “It is striking the number of faculty respondents who honestly didn’t know or didn’t have an opinion on some of it, which I think speaks to how assessment hasn’t necessarily permeated institutions.”
Those who track assessment across multiple institutions say it’s easy to become obsessed with data collection and analysis, but harder to follow up with communication and reflection. “Often, we think that collecting data to satisfy an accreditation requirement or compliance report is sufficient,” says Darlena Jones, senior director of analytics, research, and education for the Association for Institutional Research, an organization that supports the use of data and analytics in higher education. “But we’ve lost an amazing opportunity to be better stewards of our institution’s resources and to better serve our students.”
Institutions, she says, need “to shine a light on everything we do, to not have ‘sacred cow’ programs, to be willing to examine and ask questions of everything. When we have accomplished that, then we can safely say we have achieved a culture of assessment.”
Catherine Wehlburg, president of Athens State University, in Alabama, calls herself “an assessment geek” who thoroughly enjoys the process and thinks it can do “fabulous, wonderful things.” But she believes many institutions become obsessed with the process and don’t shape it so that it can give colleges priorities to move forward with. “Assessment,” she says, “has become more important than learning from assessment.”
Some experts trace the birth of assessment back to the First National Conference on Assessment in Higher Education, held in 1985. Later, accreditors began requiring colleges to measure student learning, a move that had both positive and negative consequences. Assessment soon became something all colleges had to do, but faculty members resented the extra work it required when they saw no goal other than satisfying accreditors.
In the Chronicle’s survey, those original attitudes seem to still pervade some institutions. When asked whether assessment at their institutions was “weighted more toward compliance with accreditors or toward institutional improvement,” 31 percent of administrators and 41 percent of faculty members said “compliance.” “I find that disappointing,” Wehlburg says.
Taken as a whole, the survey results and subsequent interviews pointed to three major themes: the search for ways to make faculty members enthusiastic assessment participants, the efforts to streamline and improve assessment so it is less burdensome and more powerful, and the attempts to use assessment to actually improve learning.
Jeremy P. Reich, assistant director for assessment and accreditation at the New Jersey Institute of Technology, said that when he looked at the results of the Chronicle survey, he felt that “both sides want the same thing, but we’re not quite achieving it.”
What assessment administrators most want to measure is learning. The key to improving assessment, then, is the willingness of faculty members, who are closest to students, to participate.
Assessment’s roots in accreditation mean many faculty members view assessment as an effort to comply with external standards they had no part in shaping. One respondent to the Chronicle survey spoke for many others when he wrote in a survey comment box that assessment “should be eliminated. … It is a big waste of time, instituted only to placate accrediting organizations.”
“Assessment data has often been used as a stick against people,” says Jones, of the Association for Institutional Research. “If administrators create a culture where assessment is being used for improvement — as a way to lift up faculty members, to give them autonomy and ways to improve what they’re teaching, then I think you would see a much more positive outlook on assessment.”
Some administrators try to compel faculty participation by requiring assessment work in employment contracts and including it in performance evaluations. Some administrators use the “compliance hammer” by saying data is needed by a certain deadline or the institution risks losing accreditation. Other administrators encourage assessment by recognizing and rewarding it, including it as part of faculty members’ service obligations, and discussing innovation or improvements that faculty members have made campuswide.
Better communication is an often-recommended solution to the faculty-administrator divide. “Communication sounds so easy, but it is so challenging,” says Terry Barmann, executive director of institutional effectiveness at Arapahoe Community College, near Denver. “The mode of communication, the content of communication: Should this be a meeting, or should this be an email? I think that’s a challenge in higher ed across a whole lot of the work that we do.”
In the Chronicle survey, 55 percent of faculty members disagreed or strongly disagreed with the idea that “administrators do a good job communicating why assessment is important to the institution.” The midlevel academic managers laboring on assessment seek campuswide messaging from those higher up the management chain that their work is worthwhile. “It’s incumbent on the leadership,” says McConnell, of the American Association of Colleges and Universities, “to help explain the value of the work and the value of the information that comes out of assessment to multiple campus constituencies.” That includes explaining that assessment should be a key part of strategic planning, she says.
One of the key messages academic leaders need to communicate, administrators say, is that learning, not the professors themselves, is what’s under the microscope. “At the beginning, faculty are very nervous that they’re being evaluated,” says Kathleen Gorski, dean of learning outcomes, curriculum, and program development at Waubonsee Community College, which has three campuses in Illinois. “We’ve really had to work on emphasizing that this has nothing to do with faculty’s teaching evaluations. It’s evaluating the learning.”
Barmann says when he began working to use assessment to seek areas for improvement at Arapahoe Community College, he got quiet pushback. Faculty members, he says, seemed to think, “Wait a minute, if I’m identifying areas for improvement, I’m shooting myself in the foot, because I’m admitting a weakness.”
Starting assessment with what matters most to faculty members in their departments, majors, courses, or programs helps bridge the divide, says Palucki Blake at Harvey Mudd College. Administrators also need to give faculty members time. “We all know in our own disciplines that research takes time,” she says. Faculty members can’t come up with a new agenda for studying learning outcomes every year; fewer but more meaningful research projects would be better, she adds.
Along with more time, faculty members also want more help. Forty-five percent of faculty members responding to the Chronicle survey said they felt their institution didn’t help them enough with assessment.
Assessment directors working in institutions where the process is regarded positively make complying with accreditation standards their own responsibility. They seek to make it invisible to all but the handful of faculty members who need to play an active role. “I’ll worry about the accreditors,” says Colleen Karnas-Haines, director of assessment, planning, and accreditation in the College of Computing and Informatics at the University of North Carolina at Charlotte. Instead, she tries to stimulate debate about faculty work with students. “If we have all those important discussions,” she says, “we will meet all of our accreditors’ requirements.”
Integrating faculty into assessment from the beginning is crucial, she and other assessment directors say, because when suggestions for adjusting learning outcomes or changing curriculum arise later on, faculty members will be leading the charge.
At Washington State University, William B. Davis has been on both sides of assessment. He is a biochemistry professor and has worked in administrative posts; he is currently the interim vice provost for academic engagement and student achievement. Assessment is a difficult task at Washington State, with its large array of colleges, campuses, and academic departments, he says. As at other large research institutions, some departments — such as engineering and nursing — have outside accreditors. ABET, the organization that accredits engineering programs worldwide, for example, provides its own set of desired student-learning outcomes and has strong ideas about how assessment should work. Humanities departments, however, are freer to draft their own desired outcomes.
Davis sees both positive and negative aspects of disciplines having department-specific accreditors. It can make the assessment more informed and structured, he says, but the risk is that it becomes a “box-checking exercise” without discussion of what additional bespoke assessment might be valuable, such as examining how students feel about their experience at a particular institution.
Davis and other administrators say that once faculty members fully get behind assessment — a process that can be as slow as turning the proverbial aircraft carrier — assessment can steer into calm and even pleasant waters.
Some of the “most productive and joyful moments in assessment” that Palucki Blake has experienced at Harvey Mudd, she says, “are when I’ve worked closely with faculty on a project designed to understand something that matters to them.”
Having helped develop a culture of trust and innovation, she says, she is now reaping the rewards. “Nothing makes me feel better than when a faculty member comes into my office, sits down, and says, ‘I’ve got an assessment question for you.’”
Five years ago, she says, “People weren’t stopping by my office saying, ‘Hey, do we have any data on that?’”
At many colleges, assessment directors are refining their data collection to ease the burden on those who have to gather information and make sure the institution only collects data it can use productively. It’s easy to get into a rut, they say, with the same data being collected year after year just because the task is on someone’s calendar.
Faculty members and administrators agree that assessment could use some streamlining. In the Chronicle survey, only 46 percent of administrators and 41 percent of faculty members agreed or strongly agreed that “the assessment process at my institution is efficient and not terribly time consuming for faculty members.” On the positive side, solid majorities of both groups believed their institution, department, or academic unit had developed new tools to evaluate student learning.
Assessment directors are now more often trying to obtain a broad portrait of student life. Student wellness, both physical and mental, is front and center in the post-Covid era. Assessment officials are trying to discover when students become deeply engaged, where they find joy, and how they balance life and work.
One of the ways administrators are trying to ease the data-collection burden while still getting a wide lens on students’ lives is to set up assessment in three- or four-year cycles, so faculty members don’t repeat the same tasks year after year. One year might be dedicated to collecting data, the next year to designing an intervention to remedy a poor learning outcome, and a third year to analyzing and reflecting on new data.
At the College of Computing and Informatics at the University of North Carolina at Charlotte, administrators say moving to a three-year assessment cycle revolutionized their process. “It’s seen more as an opportunity, rather than an obligation,” says Karnas-Haines. “That has been a big change in our college, and assessment has gotten a lot more participation.” In study years, faculty members decide if they are satisfied with the learning outcomes, note where the learning is taking place in the curriculum, and decide if they are satisfied with how the outcome is being assessed.
The longer cycle of data collection also reassures faculty that the assessment is accurate and a new pattern in the data is not just a fluke. “We know after two years, it’s not an aberration,” says Karnas-Haines. “It’s not a blip.”
Technology can ease the burden of assessment or introduce obfuscation and frustration. “There’s a lot of really great software out there to help in the recording and the warehousing of information,” says McConnell at the American Association of Colleges and Universities. “But what you never want is the tail wagging the dog.” She says institutions risk using default software settings, which in turn become the institutions’ default approach to assessment. That is not the outcome that the software designers intend, she says.
In the Chronicle survey, support for technology was underwhelming. Only 56 percent of administrators and 37 percent of faculty members agreed or strongly agreed that “new technology tools are making the assessment process more efficient.”
Assessment professionals say they hear complaints at conferences that technology vendors overpromise but underdeliver, particularly in their products’ ability to connect to other university data systems and disaggregate data so that administrators can discover gaps in student performance.
At Arapahoe Community College, Barmann decided to ditch complex assessment-related software because faculty members only dealt with it once a year and had to relearn how to use it each time. He now has faculty members work with routine word-processing and spreadsheet software that they are more comfortable using.
Administrators in the University of North Carolina at Charlotte’s College of Computing and Informatics say they have automated 80 to 90 percent of their assessment data collection. Administrators planted “probes” in the learning-management system that collect and output data to the assessment database. The administrators say they set the system up so the data they collect are direct measurements of individual learning outcomes and not vague measures like grades on midterm exams. “What we actually found is that grades are highly subjective,” says Karnas-Haines. “If you dig below the surface of the grades, you get more meaningful information.”
Students might be learning the right skills and knowledge but get poor grades because their class attendance was poor, they didn’t participate in discussions, or they didn’t turn in their work on time. By avoiding relying on grades and focusing more on direct, detailed assessment of learning outcomes, she says, “We can actually see how our students are performing with skill acquisition and understanding of concepts.”
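For readers who want a concrete picture of what such outcome-level analysis can look like, here is a minimal sketch, in Python, of aggregating rubric scores that have been exported from a learning-management system and tagged to individual learning outcomes. The file and column names are hypothetical assumptions, not a description of UNC Charlotte’s actual pipeline.

```python
# Hypothetical sketch: aggregating rubric scores tied to individual learning
# outcomes from an LMS export, rather than relying on course grades.
# Column names ("student_id", "outcome", "rubric_score", "max_score") are
# assumptions, not any institution's actual schema.
import pandas as pd

# Each row: one student's rubric score on one assignment mapped to one outcome.
scores = pd.read_csv("lms_outcome_export.csv")

# Express each score as a fraction of the points possible, then average
# by outcome to see how the cohort performs on each skill or concept.
scores["pct"] = scores["rubric_score"] / scores["max_score"]
by_outcome = (
    scores.groupby("outcome")["pct"]
    .agg(mean_score="mean", n_students="count")
    .sort_values("mean_score")
)

# Outcomes near the bottom of this table are candidates for curricular attention.
print(by_outcome)
```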
The hottest topic in technology is generative artificial intelligence. Discussion about its possible uses and risks pervades the assessment world as it does the rest of higher education. But the biggest signal sent out about AI in the Chronicle survey was one of uncertainty: Fifty-one percent of administrators and 47 percent of faculty members said they didn’t know if generative AI tools could improve assessment.
Wehlburg at Athens State says AI might be useful in analyzing large amounts of text, such as student comments on course evaluations. Academics could ask a generative-AI tool for highlights, such as, “What are three things we need to do more of or less of?” While such results would need to be checked by humans, she and others say, they could provide a useful starting point for discussion.
Analyzing qualitative data might be AI’s first, best use, experts say, since strong statistical methods for analyzing quantitative data already exist. “If I give my students an open-ended question, and I get 100 responses, how do I start to make sense of that?” says Davis at Washington State. “AI can help me start to synthesize.”
“If AI can help us lower some of those barriers for people to start to engage with data at a deeper level,” he adds, “then that’s a real value added.”
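To make the idea concrete, the sketch below shows one possible way to hand a batch of open-ended responses to a generative-AI model and ask for the kind of “more of or less of” highlights Wehlburg describes. It assumes the OpenAI Python client; the model name, file name, and prompt are placeholders, and any output would still need human review before it informs decisions.

```python
# Illustrative sketch only: asking a generative-AI model to surface themes in
# open-ended survey responses. The model name, file name, and prompt are
# placeholders; the output is a starting point for discussion, not an analysis.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# e.g., 100 open-ended comments from a course survey, one per line
responses = open("open_ended_responses.txt").read()

prompt = (
    "Here are open-ended student responses from a course survey.\n\n"
    f"{responses}\n\n"
    "Summarize the three things we should do more of and the three things "
    "we should do less of, with a representative quote for each."
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Human reviewers should check the summary against the raw responses.
print(completion.choices[0].message.content)
```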
Responses to the Chronicle survey also showed a contrast between faculty members and administrators over whether assessment was helping to develop ideas to close equity gaps at colleges. Fifty-nine percent of administrators, but only 37 percent of faculty members, felt that such development was taking place.
“I wouldn’t say assessment is the way to close equity gaps,” says Constance Tucker, president-elect of the Association for the Assessment of Learning in Higher Education and vice provost of educational improvement and innovation at Oregon Health and Science University. “But it is an essential part of identifying if one even exists.” Closing gaps, she says, takes multiple cycles of trying interventions, measuring them, working with academic partners, and celebrating positive change when it happens.
But even identifying gaps can be challenging. Many campuses struggle to effectively connect demographic data with assessment results, says McConnell. “They have the data,” she says. “It is just in different buckets on campus.”
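In practice, connecting those buckets is often a straightforward join once the data can be brought together. The following is a hedged sketch of merging outcome-level assessment results with a registrar extract so results can be disaggregated by group; the file names, column names, and the first-generation flag are illustrative assumptions, not any campus’s real schema.

```python
# A minimal sketch of the "different buckets" problem: joining outcome-level
# assessment results with demographic records so results can be disaggregated.
# File names, column names, and the demographic grouping are hypothetical.
import pandas as pd

outcomes = pd.read_csv("assessment_results.csv")      # student_id, outcome, pct
demographics = pd.read_csv("registrar_extract.csv")   # student_id, first_gen

merged = outcomes.merge(demographics, on="student_id", how="inner")

# Compare average performance on each outcome across groups; large differences
# flag a potential equity gap worth investigating, not a conclusion in itself.
gaps = (
    merged.groupby(["outcome", "first_gen"])["pct"]
    .mean()
    .unstack("first_gen")
)
print(gaps)
```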
At Washington State, Davis says it can be difficult to close gaps once they are discovered. Enough data may exist to know there is a problem, he says, but there is not always enough data to get insights into causes and contributing factors.
He experienced this himself when he was teaching an introductory biology class that routinely had about 500 students from as many as 50 majors. He could find achievement gaps between groups of students, such as first-generation students and students with parents who had attended college. But fixing the gaps was another matter.
His only home run, he says, was getting rid of formulaic “cookbook labs,” which, while they might teach the necessary skills, were not that exciting for many students. He switched to project-based labs and had his students participate in a national undergraduate research program known as SEA-PHAGES, in which students do environmental sampling — digging in soil to find new viruses, including the bacteriophages that infect bacteria — and then identify and name those viruses. Reflecting on why that approach made such a difference in closing equity gaps, he says, “it was the real-world application. It was making it real. This is what it’s like to be a scientist.”
Other successful innovations in introductory science classes have included “argumentation sections,” in which students debate data, and reshaping the messages professors send to students. He says that when professors effectively communicate, “I believe every one of you can succeed, and I’m here to support you,” it encourages disadvantaged students to persist.
When he talks to faculty members about improvements that can be made through assessment, he emphasizes that their nature is “small, incremental changes. … It takes endurance, persistence, and patience.”
At Harvey Mudd College, faculty members noticed a gender gap in achievement in a mandatory introductory engineering course. The faculty redesigned the course with a “flipped classroom” approach, in which material that once was covered in lectures was now learned by students outside of the classroom. Time in class was spent more on experiential learning. In addition, professors sought out a practical engineering experience that would have broad appeal. Building a rocket, a previous project, was tossed out in favor of working with an underwater robot. As a result of those changes, faculty found that students learned more of what they needed to know, and the gender gap was closed.
Increasingly, assessment directors are trying to find a way to measure the learning that happens outside of formal courses, or “co-curricular learning.” This includes student time spent at conferences, in career centers, in student clubs, on “alternative spring breaks,” or in undergraduate research. Those activities often don’t have traditional assignments attached to them, such as papers or tests, so it can be trickier to measure results.
The New Jersey Institute of Technology’s strategic plan seeks to make sure all students will have at least one pass through experiential learning. But Reich, the assistant assessment director there, is not yet sure how he will measure success. “It’s an open question,” he says.
“Do you have some sort of common outcomes for experiential learning that you can assess them all on? Honestly, no,” he says.
The Council for the Advancement of Standards in Higher Education has developed standards for university offices and programs, along with self-assessment guides that can serve as a starting point for institutions.
Some activities can be measured more easily than others: Attendance at debt-management counseling, for example, can be linked to later data on the proportion of students who default on their loans.
John C. Crepeau, a professor of mechanical engineering at the University of Idaho, says that institution has good relations with industrial partners, alumni, and an advisory board. Those partners are not shy about telling faculty members or deans when student performance in internships or jobs comes up short, he says, and he believes such feedback from stakeholders is a valuable kind of assessment.
At Washington State, many activities that were once “co-curricular” are becoming part of the formal curriculum. Introductory courses are building partnerships with the university’s Center for Civic Engagement to embed what might once have been internships, volunteer opportunities, or service learning. Not only does that widen access to experiential learning, Davis says, but it also places formerly co-curricular activities into a context where their contribution can be measured more easily.
Assessment, when coupled with reflection, can result in new courses, the reshuffling of curriculum, and new teaching and learning content that can be introduced across a wide number of courses.
Engineering students at the University of Idaho responding to a graduating senior survey said they wished they had more training in responding to ethical issues they might encounter in their profession. Professors, considering such topics as the speed-over-quality debates at the Boeing Company and the crashes of self-driving cars that have resulted in fatalities, readily agreed to introduce more ethics discussions into engineering classrooms.
At Arapahoe Community College, an open-access institution, assessment revealed that many students struggled with comprehending graphs and understanding how they showed relationships among variables. The math department produced some explanatory content to help with that, says Barmann, which was then distributed widely.
Wehlburg at Athens State says she thinks many substantial curricular improvements don’t come through formal assessment — but that such informal means should still be regarded as part of assessment. At an institution where she worked previously, she recalls the economics faculty noticing students’ methodology in their capstone projects was weak. The faculty created a new methodology course and made sure that the topic of methodology was scaffolded throughout the economics curriculum. She says the faculty told her they didn’t have time to do assessment because they had been too busy reworking the curricular foundations for the capstone project. To which her response was — “That is assessment.”
When it comes to writing up assessment data, Wehlburg likes to see a vivid portrait that can encourage change. When she became assessment director at Athens State, before she was president, she read past assessment reports. “I knew amazing things were happening across campus,” she says, “and I read the assessment reports, and they were boring.”
To get assessment reports that capture that excitement, she and others say, faculty members need the freedom to consider what they want students to be able to do, instead of just looking at what can be easily measured.
Bojan Cukic, dean of the College of Computing and Informatics at the University of North Carolina at Charlotte, said that when he was a department chair, he worked with his colleagues on assessment of an introductory computer-science class, which was then taught in many sections. Scrutiny of the course found there was no reasonable pattern of student grades across the sections. He and others realized students were being treated differently in different sections of the same course. The goal of giving faculty members independence seemed to result in too much variety in how students were taught — and even, to some extent, what they were taught.
The solution was to create a class of about 1,000 students who met just once a week but broke into labs of roughly 50 students. Lots of graduate teaching assistants, prepared and supervised closely, taught those labs, and other Ph.D. students supported the undergraduates. “Now all of a sudden, the grades make sense,” says Cukic. “The material is highly synchronized; the experience and teamwork is easier.”
If assessment at its best is informed self-reflection, then when, if ever, does assessment reflect on itself? In the Chronicle survey, 49 percent of administrators said their institutions evaluated assessment, while 22 percent said they didn’t. Thirty percent, surprisingly, were unsure.
“What we have not done well in higher ed is to evaluate the process itself,” says Jones, of the Association for Institutional Research.
At the American Association of Colleges and Universities, McConnell says institutions haven’t always “built in reflective space about what to do with the evidence that we have,” versus just wanting to collect more data. And in analysis, she believes, administrators can lean too much toward looking at the average student, and miss analysis of outliers: Who is “knocking it out of the park”? Who is struggling at the bottom?
There is also the risk that institutions are being too easy on themselves. “I’ve talked to fellow presidents and provosts,” Wehlburg says, “and they’ll say, yeah, all of our learning outcomes are met and that’s a good thing.” But she thinks to herself, “That’s a terrible thing,” because those leaders can’t set a direction for improvement.
“One of the biggest problems that I see in assessment,” she says, “is that we’re not asking the difficult and hard questions.”
At the end of March and the beginning of April 2024, the Chronicle emailed surveys to administrators and faculty members working at two- and four-year colleges in the United States. Eight hundred and seventy-one people responded. Of those, 455 were faculty members and 416 were administrators.