A short while back I wrote a post about the use of Value-Added Measurement (VAM) within the context of educational reform. I mentioned President Obama speaking of the need to reward effective teachers financially. Indeed, the impact of effective teachers is well established, and there is certainly value in identifying which instructors have the greatest effect on their students' learning. However, as with any tool that might be employed in the larger task of raising student achievement, VAM and merit pay have limitations as sources of policy guidance. One of the most notable is that they provide no insight into what effective teaching actually looks like. My task here is to make good on my promise from the last post to describe some additional tools that can be added to the arsenal. As I said before, it makes sense to use every tool we can to tackle this important task.
VAM asks who is most effective in the classroom. What if we added some further questions, such as: What is the most effective way of teaching a given skill? What are the specific needs of students who require additional help? How are those students progressing as they receive instruction? These might be thought of as "bottom up" questions, as opposed to the "top down" inquiries that characterize VAM. The notion is that specific identification of the components of effective instruction will support the construction of a larger program. This type of approach could provide a nice complement to the gains that can be achieved from VAM.
What is needed to ask these types of questions effectively? One necessary ingredient is the ability to work collaboratively on the implementation of common objectives, assessments, and instructional approaches across different classes and schools. It must be possible to distribute the necessary materials to all the teachers who need them. It must also be possible to monitor the delivery of that instruction so that differences across teachers can be identified and, where necessary, addressed. Highly consistent implementation is needed in order to draw strong conclusions about what works or doesn't work.
It also must be possible to gather accurate and reliable assessment data on a frequent basis. Assessments must be shared so that information may be reliably aggregated. Assessments should also be well integrated into instruction so that the picture of learning is highly detailed.
This approach is a nice complement to VAM because it positions us to answer the question of what can be done once differences in outcomes are identified across classrooms. The implicit assumption is that effective teaching can be taught once its components are identified. In a recent article, Stephen Raudenbush describes the successful implementation of a literacy program based on what he terms a shared systemic approach to instruction. Central to the approach are shared goals, instructional content, and assessments. Differences in teacher expertise are expected, and the system encourages mentoring by those whose skills are more advanced. Raudenbush argues that this sort of collaborative approach is key to identifying and then implementing the kinds of systemic changes that will ultimately advance instruction and improve schools.
The tools within Galileo have been designed to support the process of determining which strategies are effective in helping students meet goals. As we described in our recent seminar, the intervention model positions districts to do this sort of collaborative work. We would be interested in hearing from those who have worked in a district where such an approach was implemented. How did it seem to work? What sort of approach was taken to implementation? What kinds of problems came up?