Monday, March 30, 2009

How Can Galileo Assist in Interventions?

As I interacted with school districts during the Intervention Forum, a number of people wanted to know exactly how Galileo could assist educators with interventions. You already know that Galileo provides assessment data broken down by standard, letting educators see which students have made progress and which still need additional instruction. While this information helps identify which students may need additional help, the more challenging task is finding content to use in an intervention. Galileo not only identifies and groups students for you, it also suggests Instructional Dialogs that can be used in your re-teaching initiatives.


The Risk Level report on the Benchmark Results page groups a class of students based on how at risk each student is of not passing the state assessment. You will want to identify the group of students that you would like to expose to an intervention: High Risk, Moderate Risk, Low Risk, or On Course.


Once you have identified a group of students to work with, you will be presented with an intervention strategy for that group. Galileo organizes all of the standards tested into steps for re-teaching. Determine which instructional step you’d like to focus on, then click the Assignments button to see recommended Instructional Dialogs for each of the state standards that make up that step of the intervention strategy.





The Dialogs listed are links, so you can preview each lesson and see whether it is something you’d like to use with the group. Teachers have implemented Dialogs in several ways: having students complete them online, giving students hand-held responders to use during a whole-class presentation, or simply presenting the Dialog and asking students to respond verbally. Notice as you preview the Dialogs that each one has a quiz, or formative assessment, attached. This quiz is meant to help teachers determine whether students learned the standard during the intervention.

If you see a Dialog you’d like to use with students, just continue scrolling down the page and complete the online form to schedule the Dialog.



You are now ready to proceed with your intervention. As you can see, Galileo automatically links assessment data to instruction. Your benchmark data assesses the instruction that has occurred. You can run and analyze reports broken down by individual standards and individual students to determine what students need help with. You can then group students and assign ready-made Instructional Dialogs to aid in re-teaching. And finally, to ensure students have learned the content of a re-teaching intervention, each Dialog includes a follow-up quiz or formative assessment that can be administered to students automatically.
Have you had a chance to use or implement Dialogs? Tell us about the experience.

Tuesday, March 24, 2009

Are Instructional Dialogs a Good Teaching Methodology?

In one of the many discussions during the forum’s breakout session, the question was posed as to whether Instructional Dialogs should be considered a good teaching methodology or just an easy way for a teacher to get through the day. First, the question “What makes a great teacher?” needs to be considered. This question has been researched extensively, and the answers are numerous and varied. Here are just a few examples.

Great teachers:
· Clearly state a daily learning objective, refer back to it, and check for mastery.
· Are organized and prepared for class.
· Understand the subject matter they are teaching.
· Involve students and encourage them to think at a higher level.
· Consider students’ current academic levels and instruct them based on their specific needs.
· Communicate with parents on a regular basis.
· Expect big things for all students.
· Build relationships with their students and care about them as people first.

Instructional Dialogs share many of the characteristics of a great teacher. Each Instructional Dialog clearly states a learning objective and consistently refers back to it. Throughout the instruction, students are checked for understanding with instructional questions and feedback. Finally, the formative quiz at the end of the Dialog shows whether the student has mastered the skill.

Preparation and organization are key for a great teacher. An Instructional Dialog is completed, perfected, and scheduled before the beginning of class, which allows teachers to be well prepared for instruction.

The ability to link to experts all over the Internet, giving students access to the best available resources for developing a thorough understanding of a topic, is a huge plus of Instructional Dialogs. Dialogs also give teachers help in explaining, and understanding, more complex topics.

The feedback portion of an Instructional Dialog pinpoints student mistakes and provides specific direction as to what the learner needs to do differently to master the standard. This feedback not only teaches students at their current level, it encourages them to think at a higher level. In other words, it prompts students to analyze their own mistakes.

When a teacher posts the results from an Instructional Dialog, Galileo allows parents to see their student’s academic progress, which helps teachers communicate with parents easily. Coupling Instructional Dialogs, formative assessments, and benchmark assessments in Galileo creates a record that lets parents follow their child’s academic development over time.

A few intangible characteristics must still be added to Instructional Dialogs to perfect this exciting teaching methodology, including, but not limited to, a love of learning, a love of people, expectations for success, and fun. We all know that computers will never be able to replace a great teacher, but Instructional Dialogs can definitely make great teachers even greater!

Friday, March 20, 2009

The National Call to Measure Teacher Effectiveness

On March 9th, President Obama gave a speech outlining his administration’s vision for education. The speech called for some controversial things, and one of the hottest topics was the idea of rewarding more effective teachers with extra pay. Conversations around this topic quite rightly raise questions. What measures are fair for determining which teachers are the most effective? How do we account for the fact that teachers don’t get a randomly selected group of students? What kinds of statistics are fairest for evaluating the results?

All of these questions warrant careful consideration. In that light, a quick look at what we already know about some of the issues involved in answering these questions is in order.

The first issue that should be considered is the objective research on whether there is a teacher effect on student achievement that can be quantified. In short, the research shows that an effect for teachers can be demonstrated and that the effect lasts beyond the time the student is in that teacher’s class. Some findings have shown that the impact of having an effective teacher can still be measurable 3-4 years later. This suggests that students assigned to a particularly effective teacher for several years in a row will likely be far ahead of students who haven’t been assigned to equally effective instructors. Interestingly, teacher variables such as credentialing have been at best weakly associated with achievement. This research is nicely, and thoroughly, summarized in a monograph prepared by the RAND Corporation.


While the research world has pretty consistently shown that a teacher effect on student outcomes can be measured, things get a whole lot more complicated when you dive into the specifics. One of the first nitty-gritty questions that must be considered is how, exactly, a teacher effect should be quantified. Many researchers have employed Value Added Modeling (VAM) to address this question. In short, VAM asks how much of the variance in a student measure of achievement can be attributed to the teacher. Estimates of teacher impact obtained using VAM can be influenced by a number of factors, including the way in which teacher variables and other possible confounds are modeled, the measure of student achievement that is selected, and the handling of missing data. The RAND monograph provides a very useful summary of these issues and their impact on the conclusions that might be drawn.
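To make the modeling question concrete, here is one highly simplified form that a value-added model can take (a sketch for illustration only, not the specific model used in the research cited above):

$$y_{ijt} = \beta_0 + \beta_1\, y_{i,t-1} + \mathbf{x}_i'\boldsymbol{\gamma} + \theta_j + \epsilon_{ijt}$$

Here $y_{ijt}$ is the achievement score of student $i$ taught by teacher $j$ in year $t$, $y_{i,t-1}$ is the student’s prior-year score, $\mathbf{x}_i$ is a vector of student covariates, $\theta_j$ is the teacher effect, and $\epsilon_{ijt}$ is residual error. Each of the issues just listed changes the estimate of $\theta_j$: which covariates enter $\mathbf{x}_i$, which achievement measure defines $y$, and how missing prior-year scores are handled.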

All of these questions might lead one to conclude that developing a VAM-based approach to measuring teacher effectiveness is too complicated to pull off effectively. Such a conclusion would be misplaced. Papers on the topic of VAM have consistently shown that the data provided by this approach can be useful in answering the types of questions that would face policy makers charged with delivering on President Obama’s call to design a system for rewarding effective teachers. While VAM is certainly useful, we would suggest that two things happen if it is to be employed for this type of work. First, skilled researchers who are familiar with the types of issues that affect VAM estimates should be involved in the design of the system on which policy makers base their decisions. Second, these same researchers need to be able to engage in research to further our understanding of the impact of various issues on VAM estimates.

Before I sign off for this post, I want to raise a different but related issue for consideration. I’ll bring it up here as an introduction to a post that will follow shortly.

Implicit in the merit-pay discussion is the idea that providing such incentives will ultimately elevate the level of instruction and lead to higher student achievement. It is our view that the goal of elevating student achievement should be pursued with all the tools at our disposal. Like any other tool, VAM-type analyses have certain strengths and notable weaknesses. One of the most notable shortcomings of VAM is that it can tell us little about what effective instruction looks like. It can’t provide any information about what the more effective teachers do that makes a difference. Other types of approaches, which I will talk about in subsequent posts, can nicely complement findings from VAM-based work by addressing this very issue.

As always, we look forward to hearing the thoughts of our readers.

Thursday, March 19, 2009

Galileo Interventions Trial Offer/WebEx

Following the Educational Intervention Forum, many districts that are not currently clients of ATI have expressed interest in a free trial that would allow them to sample Instructional Dialogs. Dialogs are instructional materials that include questions to assist in checking for understanding during instruction and that conclude with optional short formative tests to document student participation and performance. Districts that would like to participate in the trial offer will be set up with a district account in Galileo that will allow them to access ATI-developed Dialogs and use them for instruction in a few classes.

In addition, if you are interested in learning more about other components of the system, a WebEx can be set up. A WebEx is a guided tour of the system over the Internet. Please contact the Field Services department at 800-367-4762 Ext. 124 to obtain more information about the free trial or WebEx.

Attention Current Clients: You can obtain assistance in Instructional Dialog use by calling the Educational Management Services department at 800-367-4762 Ext. 138.

Monday, March 16, 2009

A Forum Follow-up…

…Shouldn’t we assess our assessments before we use them to evaluate our intervention investments?

In a nutshell – Yes!

An interesting and highly pertinent question came up during the recent multi-state Educational Interventions Forum. It was this – “If we are going to use benchmark assessments as part of our efforts to evaluate the educational return on our intervention investments, then, is it not the case that we must first evaluate the credibility of our assessments?”

What a great question! Much like 21st century educational interventions, benchmark assessment tools are proliferating at a rapid rate. This broad array of assessments can potentially serve a wide range of educational needs and goals. Consequently, when it comes to evaluating the credibility of a benchmark assessment tool for use in documenting intervention impact, a basic question to ask is: Does the tool have a credible “fit to function”?

Appropriately designed, credible benchmark assessments can not only provide valuable information for determining the impact of intervention investments on learning, but also supply timely data to help guide instruction throughout the course of the school year. Benchmark assessments are locally relevant, district-wide assessments designed to measure student achievement of standards for the primary purpose of informing instruction. In assessing “fit to function” as it relates to the use of benchmark assessments in evaluating intervention investments, a number of issues should be addressed:

1. Do your benchmark assessments provide reliable information on student learning as it relates to mastery of standards? Reliability has to do with the consistency of information provided by an assessment. A particularly important form of reliability for benchmark assessment is internal consistency. Measures of internal consistency provide information regarding the extent to which all of the items on a benchmark assessment are related to the underlying ability (e.g., math) that the assessment is designed to measure.
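For reference, the most widely used index of internal consistency is Cronbach’s alpha, sketched here in its standard form (the specific statistic reported for any given assessment may differ):

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_i}{\sigma^2_X}\right)$$

where $k$ is the number of items, $\sigma^2_i$ is the variance of scores on item $i$, and $\sigma^2_X$ is the variance of total test scores. Alpha approaches 1 as the items more consistently measure the same underlying ability.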

Reliability is directly affected by the length of the benchmark assessment: longer assessments tend to be more reliable than shorter ones. Based on our research in developing and analyzing customized benchmark assessments for school districts in several states, we have found that benchmark assessments consistently begin to reach an acceptable level of reliability at a length of about 35 to 40 items.
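The Spearman-Brown prophecy formula illustrates why length matters (a standard psychometric result, not a Galileo-specific calculation). If a test with reliability $\rho$ is lengthened by a factor of $n$ using comparable items, the predicted reliability is

$$\rho^{*} = \frac{n\rho}{1 + (n-1)\rho}$$

For example, doubling a 20-item test with reliability 0.75 gives a predicted reliability of $(2)(0.75)/(1 + 0.75) \approx 0.86$.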

2. Do your benchmark assessments provide valid information on student learning as it relates to mastery of standards? Since an important function of benchmark assessment is to measure the achievement of state standards, it is reasonable to expect significant correlations between benchmark assessments in a particular state and the statewide test for that state. A finding revealing such correlations provides important evidence of the validity of the benchmark assessments.

Although significant correlations support the validity of benchmark assessments, it is important to recognize that the two forms of assessment serve different purposes. Statewide tests are typically administered toward the end of the school year to provide accountability information.

Benchmark assessments are administered periodically during the school year to guide instruction. The skills assessed on a benchmark test are typically selected to match skills targeted for intervention at a particular time during the school year. For these and other reasons, benchmark assessments should not be thought of as replicas of statewide tests.

Correlations among benchmark assessments provide another source of evidence of the validity of benchmark assessments. This is because multiple benchmark tests administered during the school year measure student achievement in the same or related knowledge areas. As a result, it is reasonable to expect benchmark tests to correlate well with each other.
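As a minimal illustration of what such a correlation check involves, here is a sketch using hypothetical scores for ten students on two benchmark administrations (the numbers are invented purely to show the computation):

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same ten students on two benchmark assessments.
benchmark_fall   = [42, 55, 61, 38, 70, 49, 66, 58, 44, 73]
benchmark_winter = [48, 60, 59, 41, 75, 52, 71, 63, 47, 80]

# Pearson correlation between the two administrations; a strong positive r
# is one piece of validity evidence for the benchmark series.
r, p = pearsonr(benchmark_fall, benchmark_winter)
print(f"r = {r:.2f} (p = {p:.4f})")
```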

3. Do your benchmark assessments accurately forecast state classifications of standards mastery? The validity of local school district customized benchmark assessments is supported not only by the correlations among benchmark assessments and statewide tests, but also by their accuracy in forecasting state classifications of standards mastery. Since determining whether or not students have mastered standards is, for all intents and purposes, a categorical decision (i.e., they did or they didn’t), research on the accuracy of forecasted classifications can provide validity evidence for benchmark assessments.
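A minimal sketch of what such a forecasting check looks like, using invented mastery classifications for ten students (Galileo’s actual forecasting analyses are, of course, more sophisticated):

```python
# Hypothetical classifications: 1 = mastered the standard, 0 = did not.
actual   = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # state test classification
forecast = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]  # benchmark-based forecast

# Accuracy: the share of students whose forecast matched the state result.
accuracy = sum(a == f for a, f in zip(actual, forecast)) / len(actual)
print(f"Forecast classification accuracy: {accuracy:.0%}")  # 80% here
```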


In addition to the fundamental, research-oriented “fit to function” questions raised above, it is also essential to consider a number of other issues in assessing your benchmark assessments. These might include:

What kinds of procedures are in place to ensure that your benchmark assessments are aligned to state and district standards and tailored to reasonably accommodate your district pacing guides?

What kinds of procedures are in place to ensure that items utilized in your benchmark assessments have gone through a rigorous process of development including alignment with standards and/or performance objectives, review, and certification?

What kinds of procedures are in place to ensure that the psychometric properties of your benchmark assessments, including Item Response Theory (IRT) item parameter estimates such as difficulty, discrimination, and guessing, are continuously calibrated on your local student population?
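For readers less familiar with IRT, those three parameters come from the three-parameter logistic (3PL) model, in which the probability that a student of ability $\theta$ answers an item correctly is

$$P(\theta) = c + (1 - c)\,\frac{1}{1 + e^{-a(\theta - b)}}$$

where $b$ is the item’s difficulty, $a$ its discrimination, and $c$ its guessing (lower-asymptote) parameter. Calibrating these parameters on the local student population helps keep the estimates appropriate for the students actually being tested.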

Clearly, whether you plan to use benchmark assessments to evaluate the impact of your intervention investments, to inform data-driven instructional decision making, or to monitor student progress and the level of risk for meeting or not meeting state standards, the basic issues raised here likely deserve some discussion within your district. I will conclude by asking you a few questions:

First, from your perspective, are these valuable questions to consider and why?

Second, what kinds of activities are currently occurring within your district to help you ensure that your benchmark assessment system has a credible “fit to function”?

Third, what other kinds of questions do you think we should be asking about our assessment systems as we evaluate their strengths and limitations in helping us to meet our educational goals for students?

Thursday, March 5, 2009

Questions from the Arizona Forum

Hi all,

A couple of questions came up in the Arizona Forum that I thought warranted a blog post. The first concerned the identification of intervention initiatives that are in place. The second question concerned how data might be collected to prove that a particular instructional strategy works. These two issues are pretty fundamental to the intervention model that we were presenting in the forum, so it was good to hear them come up in the conversation.

Let’s start with the first issue. It is a pretty straightforward idea that it is impossible to draw any conclusions about which instructional approaches have been successful without knowing which have actually been delivered to the students. However, the devil is in the details: what is simple to say can be far from simple to actually do. How can an administrator know whether a given piece of instructional content was actually used in a wide array of classrooms, housed in different buildings, and run by teachers with a lot of other responsibilities on their plates day in and day out? Answering this sort of question can be a labor-intensive effort, particularly if the instructional plan being evaluated is large and spans several weeks or months. In the forum we raised the idea that evaluation of an intervention can focus on very small blocks of instruction. Focusing the evaluation on a block of instruction that lasts only 30 minutes greatly simplifies the task of determining what has actually been implemented with which students.

Proving that an educational intervention has been successful requires that an experiment be run. Focusing on a single activity that takes only 30 minutes makes conducting such experiments truly practical. The number of students that must be considered is smaller, and the number of measures needed to assess outcomes is far less extensive. A single focused quiz can be given to 20 or 30 students who have completed the lesson that is the focus of the study. The work required to manage the entire process is greatly reduced from what it would be in a large-scale study. These same benefits apply if a district is simply interested in determining whether students who complete an instructional activity meet the goal of demonstrating mastery of the standards the activity was designed to target.

While the practical benefits of such a small-scale evaluation are easy to see, it does raise the question of what one can actually conclude from such seemingly “lightweight” data collection efforts. How in the world can the outcomes from an evaluation that took only 30 minutes and was run with only 25 kids stack up against the power of a multi-school study involving hundreds of children and spanning months? The answer is that, just as David made short work of Goliath, short, limited-scope studies can beat massive evaluation efforts nearly every time. This is particularly true when one considers that the practicality of the approach means the evaluations can easily be replicated at different sites and with different children. It is also more likely that the instruction being evaluated will be fully implemented as designed, if for no other reason than that the number of kids and the amount of work involved are minimal.
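As a rough illustration that a single small study is not statistically toothless, here is a back-of-the-envelope power calculation (the effect size is an assumption chosen for illustration, not a result from any actual evaluation):

```python
# Power of a single small evaluation: two groups of 25 students,
# an assumed large effect (Cohen's d = 0.8), two-sided alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().solve_power(
    effect_size=0.8,  # hypothetical effect size
    nobs1=25,         # students in the intervention group
    alpha=0.05,
    ratio=1.0,        # equal-sized comparison group
)
print(f"Power of a single 25-per-group study: {power:.2f}")  # about 0.79
```

And because each replication is cheap, running the same 30-minute evaluation in several classrooms compounds the evidence in a way that a single large study cannot.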
We would be interested in knowing what kinds of procedures you all have used in your districts to track how interventions are rolled out and to determine whether they are having the intended effect.