In a recent LA Times article, John Deasy, the incoming superintendent in Los Angeles, reframed the teacher evaluation debate with telling words: "We are not questing for perfect. We are questing for much better."
Instead of using new value-added tools in lock-step, mechanical ways, the nation’s second-largest school district (more than 700,000 students) is working to use the data both to identify effective teaching and to improve it.
It’s a challenging task. Teachers’ unions appear adamantly opposed to the use of any unstable data in high-stakes decisions. And district leaders often counter the unions’ stance by claiming that VAM measures are more reliable than spotty classroom observations conducted by ill-trained administrators who lack the time to assess teachers (even when they are well prepared to do so).
In this LA Times piece reporting on LAUSD’s new evaluation initiative, journalist Teresa Watanabe offers a much more nuanced view of the inherent problems with VAM — but does not go so far as to suggest that the uses of the statistical tool have to be couched in either-or terms. Watanabe also notes that “Many teachers and union leaders say they are not necessarily opposed to value-added methods but want to understand them and have a say in how they're used.”
In a recent paper penned with classroom experts, my CTQ colleague Alesha Daughtrey and I argue that VAM can be used effectively in performance evaluation if teacher leaders are engaged deeply “in efforts to sharpen [the] models and their underlying assessments.” We suggest that “if more opportunities are made available for teachers to understand as well as use the results, in light of both the assessments upon which they are built and the classroom context in which they are generated, many more of them will be responsive to the evidence.”
Now a group of young teachers from our New Millennium Initiative has taken the idea further, suggesting that VAM data be used not as absolute judgments of teachers, but as evidence they can draw on to improve their teaching and student learning. They point out that a VAM data point is not static; teachers should be assessed on how they interpret and use evidence to advance student learning. (Watch for their forthcoming report here.)
Lee Shulman, president emeritus of the Carnegie Foundation for the Advancement of Teaching, has said: “One of the most dangerous ideas in assessment is the myth of a ‘magic bullet,’ some powerful test with psychometric properties so outstanding that we can base high-stakes decisions on the results of performance on that measure alone.”
Teaching effectiveness must be defined broadly and measured with a variety of tools. There are many, many teacher leaders, like the ones I am working with on pushing out our vision of the future of teaching (our TEACHING 2030 team, pictured here), who have some remarkable ideas about how to do so.
Dr. Deasy is correct. We must quest for better. If we listen to expert teachers, we'll get there.