Thursday, March 5, 2009

Questions from the Arizona Forum

Hi all,

A couple of questions came up in the Arizona Forum that I thought warranted a blog post. The first concerned how to identify which intervention initiatives are actually in place. The second concerned how data might be collected to prove that a particular instructional strategy works. These two issues are pretty fundamental to the intervention model we were presenting in the forum, so it was good to hear them come up in the conversation.

Let’s start with the first issue. It is a straightforward idea that it is impossible to draw any conclusions about which instructional approaches have been successful without knowing which have actually been delivered to the students. However, the devil is in the details: what is simple to say can be far from simple to do. How can an administrator know whether a given piece of instructional content was actually used across a wide array of classrooms, housed in different buildings and run by teachers who have many other responsibilities on their plates day in and day out? Answering this sort of question can be a labor-intensive effort, particularly if the instructional plan being evaluated is large and spans several weeks or months. In the forum we raised the idea that evaluation of an intervention can focus on very small blocks of instruction. Focusing the evaluation on a block of instruction that lasts only 30 minutes greatly simplifies the task of determining what has actually been implemented with which students.
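To make the record-keeping concrete, here is a minimal sketch of what that kind of delivery log might look like. The field names, block names, and classroom labels are all hypothetical; the point is simply that a 30-minute block makes the "who actually got this lesson" question small enough to answer from a short list of records.

```python
from collections import defaultdict
from datetime import date

# Hypothetical delivery log: each record says a given student completed a
# given 30-minute instructional block in a given classroom on a given day.
delivery_log = [
    {"student": "S001", "block": "fractions-30min", "classroom": "Rm 12", "date": date(2009, 3, 2)},
    {"student": "S002", "block": "fractions-30min", "classroom": "Rm 12", "date": date(2009, 3, 2)},
    {"student": "S010", "block": "fractions-30min", "classroom": "Rm 7",  "date": date(2009, 3, 3)},
]

def coverage_by_classroom(log, block):
    """Count how many distinct students in each classroom completed the block."""
    students_by_room = defaultdict(set)
    for record in log:
        if record["block"] == block:
            students_by_room[record["classroom"]].add(record["student"])
    return {room: len(students) for room, students in students_by_room.items()}

print(coverage_by_classroom(delivery_log, "fractions-30min"))
# e.g. {'Rm 12': 2, 'Rm 7': 1}
```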

Proving that an educational intervention has been successful requires running an experiment. Working with a single activity that takes only 30 minutes can make conducting such experiments truly practical. The number of students that must be considered is smaller, and the outcome measures that must be employed are far less extensive. A single focused quiz can be given to the 20 or 30 students who have completed the lesson that is the focus of the study. The work required to manage the entire process is greatly reduced from what it would be for a large-scale study. These same benefits can also be enjoyed if a district is simply interested in determining whether students who complete an instructional activity meet the goal of demonstrating mastery of the standards the activity was designed to target.
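As an illustration of how small the data handling can be, here is a hedged sketch of the mastery-goal check described above. The quiz scores, the mastery cutoff, and the district goal are all made-up assumptions used only to show the shape of the calculation.

```python
# Hypothetical quiz scores (percent correct) for the 25 students who
# completed the 30-minute lesson.
quiz_scores = [88, 92, 75, 60, 95, 81, 70, 85, 90, 78,
               83, 67, 91, 74, 88, 96, 59, 80, 86, 72,
               93, 77, 84, 69, 89]

MASTERY_CUTOFF = 80   # assumed score treated as demonstrating mastery of the standard
GOAL_RATE = 0.75      # assumed district goal: at least 75% of students reach mastery

# Proportion of students whose quiz score meets the mastery cutoff.
mastery_rate = sum(score >= MASTERY_CUTOFF for score in quiz_scores) / len(quiz_scores)

print(f"{mastery_rate:.0%} of students reached mastery "
      f"({'meets' if mastery_rate >= GOAL_RATE else 'does not meet'} the {GOAL_RATE:.0%} goal)")
```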

While the practical benefits of such a small-scale evaluation are easy to see, it does raise the question of what one can actually conclude from such seemingly “lightweight” data collection efforts. How in the world can the outcomes from an evaluation that took only 30 minutes and was run with only 25 kids stack up against the power of a multi-school study involving hundreds of children and spanning months? The answer is that, just as David made short work of Goliath, short, limited-scope studies can beat massive evaluation efforts nearly every time. This is particularly true because the practicality of the approach means the evaluations can be easily replicated at different sites and with different children. It is also more likely that the instruction being evaluated will be fully implemented as designed, if for no other reason than that the number of kids and the amount of work involved are minimal.
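To show what that replication might look like in practice, here is a small sketch that pools results from several sites that each ran the same 30-minute evaluation. The site names and counts are invented for illustration.

```python
# Hypothetical results from replicating the same 30-minute evaluation at
# several sites: (site, students quizzed, students reaching mastery).
replications = [
    ("School A", 25, 19),
    ("School B", 22, 18),
    ("School C", 28, 17),
    ("School D", 24, 20),
]

# Report each replication on its own, then pool the counts across sites.
for site, n, mastered in replications:
    print(f"{site}: {mastered}/{n} reached mastery ({mastered / n:.0%})")

total_n = sum(n for _, n, _ in replications)
total_mastered = sum(m for _, _, m in replications)
print(f"Pooled across sites: {total_mastered}/{total_n} ({total_mastered / total_n:.0%})")
```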
We would be interested in knowing what kinds of procedures you all have used in your districts to track how interventions are rolled out and to determine whether they are having the intended effect.
