Many of us who are involved in education today have heard about the value-added approach to analyzing assessment data coming from a classroom. In a nutshell, this is a method of analyzing test results that attempts to answer the question of whether something in the classroom, typically the teacher, adds to the growth that students make above and beyond what would otherwise be expected. The approach made its way onto the educational main stage in the state of Tennessee as the Tennessee Value-Added Assessment System (TVAAS).
This rather dry topic would likely not have become well known outside the ivory towers were it not for the growing question of merit pay for educators. One of my statistics professors used to love to say that "statistics isn't a topic for polite conversation." The introduction of pay into the conversation definitely casts it in a different light. In NYC, consideration is being given to using value-added analyses in tenure decisions for principals. Value-added models have been used to determine teacher bonus pay in Tennessee and Texas. Michelle Rhee has argued for using a value-added approach to determining teacher performance in the DC school system. One might say it is all the rage, both for the size of the spotlight shining its way and for the emotion that its use for this purpose has brought forth.
I will not be using this post to venture into the turbulent waters of discussing who should be getting paid based on results and who shouldn't. I'll leave it to others to opine on that very difficult and complicated question. My purpose here is to introduce the idea that the types of questions one asks from a value-added perspective, the mindset if you will, can greatly inform instructional decision making through creative application. The thoughts that I will write about here are not intended to say that current applications of the value-added approach are wrong or misguided. I intend only to offer a different twist for everyone's consideration.
The fundamental question in the value-added mindset is whether something that has been added to the classroom positively impacts student learning above and beyond the status quo. One could easily ask this question of new instructional strategies introduced to the classroom that are intended to teach a certain skill. For instance, one might evaluate a new instructional activity designed to teach finding the lowest common denominator between two fractions. Given the limited scope of the activity, this evaluation could be conducted very efficiently in a very short time by administering a few test questions before and after the activity. This sort of evaluation provides the sort of data that can be used immediately to guide instruction. If the activity is successful, teachers can move on to the next topic. If it is unsuccessful, a new approach may be tried. The immediacy of the results puts one in a position to make decisions informed by data without having to wait for the year or the semester to end.
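The short-cycle check described above can be sketched in a few lines of code. This is a minimal illustration with made-up scores, not the TVAAS methodology: it assumes hypothetical pre- and post-activity quiz scores for ten students and an assumed baseline growth figure, then asks whether the average gain exceeds that baseline.

```python
# Hypothetical sketch of a short-cycle value-added check for one activity.
# All scores and the baseline are illustrative assumptions, not real data.
from statistics import mean, stdev
from math import sqrt

# Hypothetical quiz scores (percent correct) before and after the activity.
pre  = [40, 55, 50, 60, 45, 70, 65, 50, 55, 60]
post = [60, 70, 55, 80, 65, 75, 80, 60, 70, 75]

gains = [b - a for a, b in zip(pre, post)]

# Assumed growth we would expect over the same stretch without the activity.
expected_gain = 5

# One-sample t statistic testing whether the mean gain exceeds the baseline.
n = len(gains)
t = (mean(gains) - expected_gain) / (stdev(gains) / sqrt(n))

print(f"mean gain: {mean(gains):.1f} points, t = {t:.2f}")
```

With these illustrative numbers the mean gain comfortably exceeds the assumed baseline; in practice one would also want a comparison group or an empirically grounded expected-growth figure rather than a single assumed constant.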
Conducting short-term, small-scale evaluations is different from the typical approach in value-added analysis of being concerned with impact over a long period. The question of long-term impact over time could easily be asked of a collection of instructional activities or lessons. In an earlier post, Christine Burnham discusses some of the ways that impact over time could be tested.
As always, we look forward to hearing your thoughts about these issues.
Whether or not the intervention is provided online, data must be collected to determine if the intervention is working. Certainly, if the intervention is conducted online, it will be that much easier to collect and analyze assessment data regarding the effectiveness of the intervention in a timely fashion.
But I have another related topic for discussion. With so much online data now available through formative assessments and benchmark assessments, I see that many districts are becoming more and more inclined to use this data not as a means to inform instruction, but rather as a means to put grades on report cards and to determine retention and promotion. Is this wise? Suddenly the purpose of the assessments shifts entirely, does it not? The assessments become summative in nature. For example, one district I know of has a requirement for promotion in grades 2-8 that students "pass" two out of three reading benchmark assessments (given about 10 weeks apart during the school year prior to state testing). Having read your recent white paper, it is clear that although students who "met" (passed) only one of the three benchmarks face a high risk of not passing the AIMS (state test), a significant number of those students do go on to pass the AIMS. And, in fact, if the one benchmark a student passed was the third benchmark, that student's risk of not passing the AIMS drops considerably. So it seems unwise to tie promotion or retention to a simple two-out-of-three benchmark pass rate. Intervention, yes, but punitive measures, no. Anyone's thoughts on this matter would be greatly appreciated.
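The objection above can be made concrete with a small sketch. The student records below are hypothetical, not from the white paper: they show how a flat "pass two of three" rule treats a student who passed only the first benchmark the same as one who passed only the third, even though the comment argues the latter is the stronger signal.

```python
# Illustrative sketch: a flat "pass 2 of 3 benchmarks" promotion rule
# collapses information about WHICH benchmarks were passed.
# Student records here are hypothetical examples.

students = {
    "A": ["B1", "B2"],        # passed the two early benchmarks
    "B": ["B3"],              # passed only the third (most recent) benchmark
    "C": ["B1"],              # passed only the first benchmark
    "D": ["B1", "B2", "B3"],  # passed all three
}

RULE_THRESHOLD = 2  # promote if at least 2 of 3 benchmarks are passed

for name, passed in students.items():
    promoted = len(passed) >= RULE_THRESHOLD
    # The flat count treats students B and C identically, yet passing the
    # latest benchmark may say more about readiness for the state test.
    passed_latest = "B3" in passed
    print(f"{name}: promoted={promoted}, passed latest benchmark={passed_latest}")
```

Under the flat rule, students B and C are both retained, even though only one of them passed the most recent (and arguably most predictive) benchmark.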