One of the comments following Chuck Brainerd’s Forum presentation on experimental research in standards-based education raised a number of interesting questions about the generalization of experimental findings across groups. The fundamental question is whether experimental findings obtained with one sample of students will generalize to an intervention involving another sample of students. A related question is whether a large-scale experiment involving several hundred students is more likely than a small-scale study to yield findings that generalize across groups.
These are complicated questions. To begin the discussion, let’s make some simplifying assumptions. Assume that the experiment involves one experimental treatment group and one control group and that the results reveal a significant difference favoring the treatment group. Assume further that the experimental treatment can be applied without modification as the instructional procedure in the intervention. Under these conditions, we would expect the level of learning in the intervention group to be similar to that observed for the treatment group in the experiment, provided the two samples of students were drawn at random from the same population.
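To make that expectation concrete, here is a minimal simulation sketch (not from the original discussion) in which the experimental treatment group and a later intervention group are both drawn at random from the same population. The population parameters and the size of the treatment effect are hypothetical.

```python
# Minimal sketch: two random samples from the same population, each receiving
# the same treatment. All numbers below are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
pop_mean, pop_sd, treatment_effect = 500.0, 50.0, 15.0  # hypothetical values

def treated_outcomes(n):
    """Draw n students at random from the population and add the assumed treatment effect."""
    return rng.normal(pop_mean, pop_sd, size=n) + treatment_effect

experiment_group = treated_outcomes(30)    # treatment group in the original experiment
intervention_group = treated_outcomes(30)  # a later group receiving the same treatment

print(f"experiment treatment mean: {experiment_group.mean():.1f}")
print(f"intervention group mean:   {intervention_group.mean():.1f}")
```

Because both groups are random samples from the same population, their mean outcomes differ only by sampling error, which is the sense in which the experimental result is expected to carry over to the intervention.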
Given an appropriate sampling procedure, there would be no advantage for a large-scale experiment over a small-scale study. In fact, the small-scale study would have an advantage: random selection from the population would be less costly and less time-consuming than it would be for a large-scale study. As Chuck pointed out, the only advantage associated with large numbers is an increase in statistical power. Since the power curve approaches its asymptote quickly, the benefits of increased sample size are rapidly outweighed by the increases in cost and time associated with large-scale studies.
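To illustrate the point about the power curve, here is a quick sketch (not from the original discussion) that uses a normal approximation to the power of a two-sided, two-sample comparison of means. The effect size (d = 0.5) and alpha level (.05) are assumed for illustration.

```python
# Power-curve sketch under assumed values: medium effect size d = 0.5, alpha = .05.
from scipy.stats import norm

def approx_power(n_per_group, d=0.5, alpha=0.05):
    """Normal-approximation power for a two-sided, two-sample comparison of means."""
    z_crit = norm.ppf(1 - alpha / 2)              # two-sided critical value
    noncentrality = d * (n_per_group / 2) ** 0.5  # expected z statistic under the alternative
    return norm.cdf(noncentrality - z_crit)

for n in (10, 25, 50, 100, 200, 400):
    print(f"n per group = {n:4d}   power = {approx_power(n):.2f}")
```

Under these assumptions, power rises steeply up to roughly 100 students per group and then flattens near 1.0, so further increases in sample size buy little additional power relative to their cost.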
What can we conclude in the absence of an appropriate sampling procedure? It is no secret that participants in experiments are rarely if ever selected at random from a defined population. Experimental studies are generally carried out using samples of convenience. In the best of circumstances, participants are assigned at random to experimental and control conditions. However, the step of selecting all participants at random from a defined population is almost always omitted. When this step is omitted, the basis for assuming generalizability of findings across groups is compromised. Moreover, there is no reason to assume that the degree of compromise is mitigated by increasing sample size.
In the case of small-scale studies, the question of whether findings from a particular group will generalize to other groups is often addressed by conducting multiple studies involving a variety of groups. This approach is supported by the ease of conducting small-scale studies and by the invariable need for additional research to address questions stemming from initial findings. As the number of replications of the initial findings rises, evidence for generalizability mounts. Historically, the conduct of multiple small-scale studies has been highly successful in supporting the generalizability of initial findings. For example, small-scale studies on observational learning have revealed generalizability not only across different groups of people but also across species: dolphins and monkeys, as well as people, can learn simply by observing the behavior of a model.
In the case of large-scale studies, conducting multiple replications is generally impractical. As a consequence, replication does not provide an effective approach for establishing the generalizability of large-scale findings.
When research is closely linked to practice through the implementation of an intervention system, the problem of generalizability is likely to be manageable for a small-scale study even when direct evidence of generalizability is lacking. If a small experimental study reveals a benefit for a particular treatment, our best guess is that it will have a benefit in practice. Thus, it is reasonable to implement the experimental treatment in a small-scale intervention. In an intervention system, implementation will be monitored to ensure fidelity of the treatment. Results will also be monitored to determine whether students are mastering the material presented at expected levels. If expected levels are achieved, intervention performance is consistent with expectations based on the experimental findings. If they are not, the intervention is analyzed, and because the intervention is short, adjustments can typically be made. Analyses of small-scale interventions may inform future research and guide practice toward the achievement of intervention goals.
As indicated at the beginning of this post, I have simplified the problems of linking experimental research to practice. For example, I have ignored issues often addressed through hierarchical linear modeling such as the nesting of school effects within districts and the nesting of class effects within schools. I have also avoided discussion of the contributions of generalizability theory to cross-group generalization. These issues I leave to another day.
Tuesday, February 17, 2009
Focus on Enrichment: What I Learned at the Educational Interventions Forum
The forum was a great success. We ended up having some wonderful conversations with the participants about the kinds of things that are needed for effective intervention efforts.
I wanted to write about one point that was particularly salient for me. During a question-and-answer session following one of the papers I presented, a participant asked what the ATI Instructional Dialogs could do for enrichment. I confessed that I tend to focus on identifying students who are in need of extra help and that I tend to forget the enrichment end of the spectrum, even though I know better. After all, Jack (Bergan) is fond of saying that if education in the United States is to improve, we can’t just focus on bringing lower-performing students up to acceptable levels of performance. Rather, intervention has to be the “tide that raises all boats,” meaning that we must improve education for the high-performing students as well. State standards should be viewed as minimum requirements. Students should be given the opportunity to exceed these levels to the greatest extent possible.
In response to the participant’s question, I pointed out that the Instructional Dialogs are designed so that they can be used as independent assignments, and that students who have demonstrated proficiency in the regular curriculum can be assigned further dialogs in other topics as well. Jonathan Frank from WestEd expressed gentle disagreement with my response during his presentation. His point was that it’s not good enough to just give students who have mastered the required material “something else to do.” They must be given greater challenges. I wholeheartedly agree. That is why we include recommendations for enrichment on the Intervention Planning Report. These recommendations indicate what the student should learn next to promote learning beyond meeting the standard.
During our breakout session, participants from Pueblo City Schools discussed the success that they’ve been having with enrichment. I think enrichment is a great topic for further discussion on this blog. What kinds of enrichment programs have been successful? What are the challenges associated with enrichment? What additional data or technology would assist in providing effective enrichment programs?
We look forward to everyone’s ideas about how to improve education for higher-performing students.
Labels: Educational intervention, enrichment
Wednesday, February 11, 2009
Curricular & Re-Teaching Interventions in Your District
Do you have curricular or re-teaching intervention strategies that are currently succeeding in your district? Curricular interventions are designed to assess the effectiveness of all or parts of the district curriculum. Re-teaching interventions are designed to increase standards mastery of specific students. If your district has an intervention initiative, how is its success being measured or documented?
Next Instructional Intervention Steps
What next steps need to be taken in your district, at the district, school, and class levels, to implement interventions that elevate student achievement beyond current levels?
Helpful Technologies in Intervention Implementation
Select from the following list the technologies that you believe would support intervention implementation. Technology that:
- Provides instructional activities and outcome reporting capabilities.
- Makes it possible to build your own instructional activities and monitor their effectiveness.
- Makes it possible to customize online instructional content.
- Provides a record of the extent to which an intervention has actually been implemented.
- Provides formative and benchmark test data to inform instruction.
- Makes it easy to search for and link to online instructional resources and align those resources to standards.
Professional Development Supporting Intervention Success
In your view, what types of professional development services are needed to effectively facilitate intervention initiatives in your district?
Additional Intervention Topics for Discussion
To further your participation in the dialog on educational interventions, please provide additional intervention topics for discussion.