If you want to see some innovative assessment work, check in with community colleges. Because of their student population, program structure, and relationship with stakeholders, they have a fresh perspective on assessment, and have developed exciting practices related to pathways to completion and competency-based education programs. Recently, Watermark and NILOA held a panel discussion with community college assessment leaders to discuss the unique challenges and opportunities for assessment at community colleges. Here’s an excerpt of that conversation with Kathy Adair, Director of Development & Assessment and Social Science Department Chair, Bay Mills Community College; Jacob Ashby, Assistant Dean for Assessment & Articulation, Frederick Community College; and Jill Millard, Associate Vice President of Planning & Institutional Effectiveness, South Piedmont Community College. The conversation was moderated by Natasha Jankowski, Director of the National Institute for Learning Outcomes Assessment (NILOA) and Research Associate Professor with the Department of Education Policy, Organization and Leadership at the University of Illinois Urbana-Champaign.
In the first part of the conversation, the panel discussed what’s unique about assessment of student learning at community colleges, and the innovations those differences drive. Here, the panel digs in on quality assurance, self-study, and assessment technology at community colleges.
Community colleges exist in this web of a quality assurance mandate, different licensure requirements, professional certifications, as well as accreditation. How do you manage those various points of connection while ensuring that student learning is relevant, timely, and portable?
Jacob Ashby: When we have any discussions around assessment, I make sure that the focus is on student learning. What we do is not scientific research, and I know that’s not always popular to say. We’re collecting data to see where we are. It’s important to keep the focus on continuous improvement and not get caught up in the data. I come from a business background, and continuous improvement is part of business. Assessment is really about making sure you’re focusing on what you’re going to do with the data you capture and how you’re going to improve. So I think if we focus on that, then we allow ourselves to use assessment data effectively. The accountability frameworks from accreditation bodies focus on improvement, so that’s what I focus on when I have discussions with faculty, and it keeps student learning at the forefront.
Kathy Adair: We focus a great deal on student learning and continuous improvement. One of the benefits of coming from a small institution is that there aren’t a lot of hoops to jump through to get things accomplished. So if you’ve got a good idea, you can go up to the president’s office and knock on his door and say, hey, you know, I’d really like to see this happen. And if it’s a good idea and we have enough support for it, then it’s going to happen. It really makes a big difference when everybody’s on the same page.
Jill Millard: We have a few programs, not a high number, but we have a few programs that are accredited by program-specific accreditors. Thinking about quality assurance and alignment, we look at program accreditation requirements: what types of learning outcomes those program accreditors look for, completion requirements, and so on. We also work with our businesses and the community to look at what their needs are. We have annual advisory committee meetings for each of our programs and pull from all of those sources to make sure that the learning outcomes we’re measuring for our programs are relevant. Those groups are going to have the most current learning needs at the forefront. So I think working closely with them and trying to measure learning outcomes in that way has really been beneficial for us.
What are your thoughts about assessment within student affairs?
Jacob Ashby: Our academic program review is like a mini self-study or mini accreditation—it’s a self-reflection document. We revised that to fit our nonacademic areas. They do a little self-reflection, and as part of that, we identify measures to determine their effectiveness, and then they develop an action plan. We track that action plan throughout the five-year cycle.
If you’re interested in anything for support units, the Council for the Advancement of Standards (CAS) has really nice standards and learning outcomes.
Kathy Adair: We also do co-curricular assessment. For example, our student support services assess quantitatively by measuring the number of students who come in for things like tutoring, and we do surveys to capture student feedback. We also do some measurement on our student clubs, including participation and what students feel about the clubs. Being a commuter campus, we don’t have a lot going on, but what we do have going on, we try to measure.
Jill Millard: We have that same model as Kathy where we’re supporting unit outcome assessment rather than student learning outcome assessment for service areas.
What is the role of technology in supporting meaningful assessment of student learning at your institution? Do you use technology to provide feedback to people on assessment planning about the types of data that they’re collecting?
Jacob Ashby: We use TK20 by Watermark, which has a feedback function in it, so it allows us to collect feedback from the department. They take their data, work through it, and consider what it means to them. They enter feedback to us, and then the deans and I can provide feedback back to them. It’s a form that we personalize within the system, and we’re able to report out on that.
Jill Millard: We use a couple of Watermark solutions. One thing I really like is that we can determine in real time whether or not assessment has been conducted, or if it’s at least scheduled. I can immediately see if a student has submitted their artifacts, and if the scoring has been done. That’s really helpful for keeping my audit reports up to date. We can run reports for analysis, and during annual meetings with the faculty, we share the data with them and allow them to reflect on that data. It helps to have verbal discussions about what they’re learning based on their data. We capture those reflections right in our Watermark solution, so it’s there and we can generate reports that we send to the faculty. We can also send reports to accreditors. It’s been a big help for us in generating our body of evidence for our SACSCOC reporting. To be honest with you, I don’t really know how I would do it without the technology.
Kathy Adair: In the past we’ve had binders and binders and binders of assessment data. We use two Watermark products: Taskstream AMS and Course Evaluations & Surveys (formerly EvaluationKIT). Within the AMS system we have our course assessment, our program assessment, strategic planning, co-curricular assessment, and faculty and staff qualifications, and we’re developing a platform for program review. It’s been very, very beneficial for us. For example, I can send out a report to the department chairs letting them know who has and hasn’t submitted assessment data at the course level. I can send our vice president of academics a report on who has or has not submitted assessment data at the program level. As a Higher Learning Commission (HLC) reviewer, I know when we had our review a couple of years ago, they were pretty impressed with the role of technology and the reports that we were able to present to them for continuous improvement within our courses and our programs. It’s just been great, and I don’t want to go back to binders.
Could you share a bit about how you chose your technology solution and what you needed and wanted it to do to really be of value to you?
Jill Millard: We had looked at an in-house solution using our SharePoint site and other technologies, as well as another company. Then we looked at Taskstream, knowing that we needed a place that could house the student work, collect all the information, score the work, roll up all the summary information, and generate reports. The other thing we really liked about Taskstream is that we have a number of programs that use external evaluators for clinical sites. Our medical stenography program really liked the option of providing access to offsite evaluators so that they could assess students directly online. We had been using a paper-based process and getting numerous paper-based assessments of 20 to 30 students per semester. Then we put it into Taskstream and it was really a godsend. We weren’t losing any data by hand-transferring paper-based artifacts. It was an easy decision for us, and we’ve really been pleased ever since.
Kathy Adair: We discovered Taskstream at the HLC conference, and I was impressed with it right off the bat. Other solutions we looked at were very, very expensive. And for a small school with limited needs and limited budgets, the price really got us interested. Once we got into the system and realized what it could do, it took the place of so many other systems that we had that didn’t talk to each other. So now we have everything in one place, everybody’s on the same page, and it’s just working out great for us.
Please share any additional thoughts or some words of wisdom and encouragement for others.
Jill Millard: We have a lot of high-contact-hour courses in our AST and health programs; faculty are stretched teaching those, and they’re out in the communities promoting their programs, so their time really is limited. We also have dual credit programs, and about 45 CTE pathways, which I think are unique to community colleges. We integrate that assessment throughout our routine assessment. I would like to eventually make assessment seamless, rather than something that’s seen as being done on top of, or in addition to, everything else.
Jacob Ashby: My words of wisdom would just be to enjoy the relationships. Everything I do at my institution is because of relationships. We all spend our days plugged into a computer, but realistically it’s those interactions with faculty—whether they be good, bad, or indifferent—about assessment that really help you get the work done. If it weren’t for the relationships that I’ve built, I wouldn’t get any data and I wouldn’t get any work done. So I think that’s the most important part of what we do. We focus a lot on the numbers and the data and the technical stuff, but I think that the relationships are really important.
Kathy Adair: I would have to agree that relationships are important. I would add that the simpler we can make it for faculty, the less time it takes, the more buy-in we’re going to get.
To learn more, please see the first post in this series, or listen to the full conversation.