Two Lessons from the Assessment Movement
Are students learning what we think we’re teaching? Are they learning the concepts and abilities that matter most? These are the questions that assessment seeks to answer, and they are central to the BVA’s emphasis on advancing the use of “evidence-based” practices in the classroom.
As even a quick tour through the literature on assessment makes clear, using evidence about student learning to make (and then re-assess) improvements isn’t easy. There are methodological challenges: designing appropriate instruments and assignments, for instance. Logistical challenges arise as well: finding time and space to administer assessments in already jam-packed courses and programs. And most of all there are cultural challenges–because although academics routinely seek out and value evidence in their research, systematically using evidence to improve teaching and curriculum has not traditionally been something faculty or departments have been prepared to do or rewarded for.
That said, assessment is increasingly an expectation for institutions in both the U.S. and Canada, driven in large part by accreditation and quality assurance schemes, but also by a sense that changes in the context of higher education–particularly in the composition of the student body–call on educators to do better. Two lessons stand out from a look at current and emerging assessment practice.
First, it’s hard to improve unless you have a goal, so an initial step is having clear, shared expectations (“outcomes,” in the language of assessment) for student learning. Getting agreement about those expectations can be a heavy lift: judgments about what knowledge and skills matter most depend on who is making the judgment and for what purpose (future employment? deep understanding? success in graduate school?). But there is now notable progress on this front. According to a recent survey conducted by the National Institute for Learning Outcomes Assessment (NILOA), 82% of U.S. colleges and universities have adopted an explicit set of student learning outcomes common to all undergraduates, across all majors. And as Harvey Weingarten, president of the Higher Education Quality Council of Ontario, argues, having those expectations clearly in view is the first step in improvement-driven assessment. (The full report from the NILOA survey of campus assessment practices is available on the NILOA website.)
The next, much harder step is to drive those outcomes down into individual courses and programs–and to make sure that students understand what they are expected to know and be able to do. In his new book, Carl Wieman has forceful things to say about how hard faculty find it, at least initially, to think in terms of learning goals or outcomes. That process becomes even more challenging when it comes to articulating outcomes in ways that align with the goals colleagues have for other courses in a sequence or program, or with the cross-institutional outcomes documented by NILOA. Again, however, there is progress worth noting. NILOA’s report points to the value of evidence from the classroom (rather than from external instruments) for making improvements that matter for students. Accordingly, a growing number of campuses are creating opportunities for faculty to work together on the design of assignments. Doing so raises all the right questions about intended outcomes and goals, and about the relationship between “my goals” and “your goals” for students moving through the program.
A second lesson follows: improvement is more likely when assessment begins with goals and questions that faculty care about. That is, just having evidence–even good evidence and a lot of it–does not ensure that improvements will happen. The people who actually work with students–faculty members most notably, but also advisors, lab assistants, and student affairs professionals–have to care about the evidence, see it as relevant to questions and challenges that matter to them, and have opportunities to talk about it together and to think about what it means and what actions it implies.
In this sense, being “evidence-based” is not simply a matter of adopting what educational research has shown to be effective (though there is much to be learned there); it’s also a matter of asking and exploring locally consequential questions that educators care about. That’s assessment at its best, an ongoing process of gathering and using evidence for continuous improvement.
That process is also at the heart of much of the work of the BVA: working to build community and leadership around processes like course design; supporting faculty in the use of new kinds of evidence available, for instance, through learning analytics; exploring a range of ways to document teaching improvement; and, yes, putting in place learning outcomes and assessment that lead to real improvements.
For more information:
Hutchings, P., Jankowski, N. A., & Ewell, P. T. (2014). Catalyzing assignment design activity on your campus: Lessons from NILOA’s assignment library initiative. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Wieman, C. (2017). Improving how universities teach science: Lessons from the Science Education Initiative. Cambridge, MA: Harvard University Press.
Weingarten, H. P. (2017, July 6). The evolution of learning outcomes: Now comes the exciting part [Blog post]. Toronto, ON: Higher Education Quality Council of Ontario.