I am happy to report that the “alignment” experiment mentioned in my previous blog post was a qualified success, on two fronts. First, test scores were notably higher (about a letter grade in aggregate) than in the past. Second, students liked the class better and found it less daunting. The feedback from the first round was so positive that I extended the experiment through the entire semester, and the positive results continued fairly consistently.
A quick recap, in the event you’ve slept since reading my earlier post. For four years I have been tasked with teaching and improving “Dynamics,” a core applied-physics course in the Aerospace Engineering curriculum. The course is historically saddled with a higher-than-we’d-like drop/withdraw/fail rate, and the student experience (my late-nineties self included) is one of nearly uniform disorientation: a sense of being overwhelmed by the breadth of material and by the challenge of applying it to solve problems (and pass tests). I have taught this course using traditional, online, and interactive-engagement techniques, and had yet to see the kind of improvement in outcomes that I wished to see. However, going through the “Quality Matters” (QM) training myself, and having an explicit standard against which to assess my own course, allowed me to consider the course experience rigorously and from the perspective of the students.
I ended up restructuring the entire course presentation and the assessments to make sure that they explicitly and completely “aligned” (a QM concept). That is to say, I codified the course material into very high-resolution learning objectives (well over a hundred for the semester), outlined my lectures around those learning-objective codes, and structured my assessments around the very same codes, so that there was no question about the connection between lecture content and assessment. I provided the catalog of learning objectives to the students progressively and encouraged them to use it as a checklist when preparing for tests, to make sure that they had covered everything they would be tested on.
Not only did this produce a marked improvement in performance, but students reported less disorientation and were able to articulate more clearly where they needed help. That alone is worth the price of admission — in the past, my students either (1) thought they understood, when in fact they didn’t, or (2) knew they didn’t understand, but couldn’t articulate to me what they didn’t understand. The codification of the entire course largely alleviated that issue.
One semester does not a hypothesis prove, but I am sufficiently persuaded that I am going to refine and re-implement this basic approach in the fall and see if the results are similar. Another advantage of the high-resolution “alignment” is that it will inform my real-time classroom assessments (I use the stone-age version of clickers: the color-coded notecard). My plan for the fall is to add an in-class assessment for every learning objective, to make sure that I don’t proceed before every building block is in place.
In the end, “alignment” is a common-sense objective that we all probably implement tacitly. But I have found that making it explicit, for myself and for my students, really clarified my own thinking on my subject, and it appears to be helping my students as well.