Monday, December 28, 2009

A Closer Look at the Benchmark Results Page

When looking at the Benchmark Results page, the page teachers generally go to when analyzing assessment results, you are encouraged to focus on the student’s Developmental Level (DL), or scale score, rather than the student’s raw score.

This is because the DL score, and the student’s associated mastery level, provide a much better picture of a student’s ability. A raw score will simply tell a user what a student got right and what a student got wrong. The DL score factors in not only what items a student got right and wrong, but also each item’s difficulty and its discrimination value (how well the item distinguishes between students of different ability levels).
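To give a rough sense of how a model of this kind can assign different ability estimates to two students with the same raw score, here is a small sketch using the two-parameter logistic (2PL) IRT model. The item parameters and the grid-search estimator below are invented for illustration; they are not ATI’s actual items or scoring method.

```python
import math

# Hypothetical item parameters: (discrimination a, difficulty b).
# These values are illustrative only, not an actual calibration.
ITEMS = [(1.2, -1.0), (0.8, -0.5), (1.5, 0.0), (1.0, 0.8), (1.4, 1.5)]

def p_correct(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items=ITEMS):
    """Crude grid-search maximum-likelihood estimate of ability (theta)."""
    best_theta, best_ll = 0.0, float("-inf")
    for step in range(-400, 401):           # search theta in [-4.0, 4.0]
        theta = step / 100.0
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p if r else 1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

# Two students, each with a raw score of 3 out of 5 -- but one
# answered the harder items correctly.
easy_right = [1, 1, 1, 0, 0]   # missed the two hardest items
hard_right = [0, 0, 1, 1, 1]   # missed the two easiest items
print(estimate_ability(easy_right))   # lower ability estimate
print(estimate_ability(hard_right))   # higher ability estimate
```

The point of the sketch is simply that two identical raw scores need not mean identical ability: which items were answered correctly, and how difficult and discriminating they are, changes the estimate.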

When a student takes an ATI Benchmark Assessment, they will earn a particular DL score. A DL score is a score that takes the relative difficulty of the assessment into consideration. DL scores on two assessments can be compared in a meaningful way, whereas raw scores cannot. For example, 70% correct on a very easy assessment does not mean the same thing as 70% correct on a very difficult assessment. However, a DL (scale) score of 954 on one assessment means the same thing as a DL score of 954 on another assessment, as long as the two assessments have been placed on the same scale. The DL score a student earns places him or her in a particular mastery category. Each state has its own mastery categories (Below the Standard/Unsatisfactory, Approaches the Standard/Partially Proficient, Meets the Standard/Proficient, Exceeds the Standard/Advanced), but they are similar in nature.
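As a rough illustration of how a DL score maps onto a mastery category, here is a sketch using made-up cut scores. The actual Galileo cut scores vary by state, grade, subject, and assessment; the 900/950/1000 values below are purely hypothetical.

```python
import bisect

# Hypothetical cut scores for one assessment (not actual Galileo values).
# A score at or above a cut falls in the next-higher category.
CUT_SCORES = [900, 950, 1000]
CATEGORIES = ["Below the Standard", "Approaches the Standard",
              "Meets the Standard", "Exceeds the Standard"]

def mastery_category(dl_score):
    """Return the mastery category for a DL (scale) score."""
    return CATEGORIES[bisect.bisect_right(CUT_SCORES, dl_score)]

print(mastery_category(954))   # with these cuts: "Meets the Standard"
```

With these illustrative cuts, the DL score of 954 mentioned above would land in the “Meets the Standard” band on any assessment sharing this scale, which is exactly why DL scores are comparable across assessments while raw scores are not.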

Cut scores are then established to determine the mastery category in which a student will be placed based on his or her performance on the assessment. The cut scores that define the mastery categories are established for Benchmark 1 using equipercentile equating to align students’ scores with their scores on last year’s state assessment. The cut scores on all other assessments administered in a school year are established based on the amount of growth, in terms of scale scores, that is expected from one benchmark to the next.
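The equipercentile idea can be sketched in a few lines: find the benchmark score whose percentile rank matches the percentile rank of the state cut score. All the scores below are invented for illustration, and real equating involves smoothing and interpolation that this sketch omits.

```python
def percentile_rank(score, scores):
    """Fraction of scores at or below the given score."""
    return sum(1 for s in scores if s <= score) / len(scores)

def equipercentile_cut(state_cut, state_scores, benchmark_scores):
    """Coarse sketch: benchmark score whose percentile rank first
    reaches the rank of the state cut score."""
    target = percentile_rank(state_cut, state_scores)
    for s in sorted(set(benchmark_scores)):
        if percentile_rank(s, benchmark_scores) >= target:
            return s
    return max(benchmark_scores)

# Hypothetical score distributions for the same group of students.
state_scores = [480, 500, 510, 520, 530, 540, 550, 560, 570, 600]
benchmark_scores = [880, 900, 910, 920, 940, 950, 960, 980, 1000, 1020]

# If 540 is the state's "meets" cut, the matching benchmark cut is the
# benchmark score at the same percentile rank.
benchmark_cut = equipercentile_cut(540, state_scores, benchmark_scores)
print(benchmark_cut)
```

In this toy data, 540 sits at the 60th percentile of the state scores, so the Benchmark 1 cut lands at the benchmark score occupying that same percentile.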

What does this mean for the user? Users can rely on DL scores, and their associated mastery categories, to help identify students to target for interventions, even after only one benchmark assessment is given. A teacher’s goal should be to see an increase in DL scores (and mastery categories) as the year progresses and students learn more of the standards. To achieve this goal, teachers will want to analyze the Class Development Profile Grid, Item Analysis, and Risk-Level Report to identify standards on which to focus their re-teaching instruction. Click here to learn more about how these reports can assist with interventions.

Tuesday, December 22, 2009

Lesson Plan Documentation: A Great Use of Instructional Dialogs

Galileo Instructional Dialogs can serve as a unique recordkeeping tool for teacher documentation of which standards are covered during each teaching day as well as very detailed notes about the actual lessons or activities used in the classroom.

Start with a template Instructional Dialog with just the title on each slide. This Instructional Dialog may be created at the beginning of the year using whatever lesson plan format the teacher or the district advocates. Copy the template dialog.

Once the template is copied, fill in the blanks.

The lesson plan is preserved electronically and attached to a standard. The teacher only needs to view the resource in order to see the lesson plan.

A completed lesson plan in a different electronic format may also be attached as a resource.
Generate a short quiz at the end of the Instructional Dialog to check the effectiveness of the lesson.

Finally, schedule the dialog, which will allow the Instructional Dialog/lesson plan to appear on Galileo’s class calendar for an effortless view of what has been accomplished in the classroom.

Thursday, December 17, 2009

Intervention Alert Report

The Intervention Alert Report is quickly becoming one of the more popular reports in Galileo. I have recently visited a number of districts and received positive feedback from many teachers. This report lists all of the learning standards on a given assessment and displays the percentage of students who have demonstrated mastery of each standard. Teachers can quickly identify which standards have not been mastered, since those standards are highlighted in red. This allows users to easily identify standards to address during interventions.

ATI has recently enhanced this report by listing the performance band (i.e., meets standard, approaches standard, and so on) within each cell. The report also gives the teacher information on how students performed on each standard at the school and district levels. In addition to teachers, principals and district administrators can run this report at a school or district level. This is an actionable report that allows the user to schedule Assignments, use Quiz Builder, or drill down through the data to view individual Student Results. It is available in the Reports area for district-, center-, and class-level users, and it may also be accessed from the class dashboard page.
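The core computation behind the report, the percentage of students demonstrating mastery on each standard, with low percentages flagged for intervention, can be sketched as follows. The standard names, mastery data, and 60% flagging threshold are all hypothetical, chosen only to make the example concrete.

```python
# Hypothetical data: each student's mastery (True/False) per standard.
results = {
    "M.3.1": [True, True, False, True, False, True],
    "M.3.2": [False, False, True, False, False, True],
}
THRESHOLD = 0.60  # illustrative cutoff: flag standards under 60% mastery

def flag_standards(results, threshold=THRESHOLD):
    """Return {standard: True if it should be flagged (shown in red)}."""
    return {std: (sum(m) / len(m)) < threshold
            for std, m in results.items()}

for std, mastered in results.items():
    pct = sum(mastered) / len(mastered)
    flag = "RED" if flag_standards(results)[std] else "ok"
    print(f"{std}: {pct:.0%} mastered [{flag}]")
```

With this toy data, the first standard clears the threshold while the second is flagged, mirroring how the report draws a teacher’s eye straight to the standards that need re-teaching.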

Tuesday, December 8, 2009

Thoughts on Race to the Top: Collaboration and local control

By now, grant applications for the federal Race to the Top (RTT) program are in preparation across the country. A lot of state department of education employees are likely to have a hardworking holiday season, as applications for the first wave of funding are due in January. With the $4.3 billion that has been allocated, states will have significant resources to bring to bear in making the kind of sweeping and dramatic changes to school systems that are called for as part of this federal initiative.

The guidelines presented to the states to prepare their RTT applications contain two clear themes. On the one hand, state education initiatives are supposed to preserve the “flexibility and autonomy” of LEAs. There is clear recognition of the need for districts to be supported in their efforts to make decisions about curriculum, assessments, and other issues that are in the best interests of their staff and students. On the other hand, alongside the call for local control there is also a clear mandate for collaboration. States are encouraged to adopt common standards and to collaborate in producing common assessments. One of the questions that state governments face in preparing their proposals is how best to balance these two, at times seemingly contradictory, objectives.

One of the ways that collaboration could be facilitated, while at the same time preserving the decision-making power of districts, is for the state to make available to districts an item bank in which all of the items are on the same scale. These items could be used on both district interim assessments and the state test. What would this mean for districts? Such an item bank would afford the opportunity to make sure that the assessments composed of these items, either entirely or in part, can be placed on the same scale. This means that the scores are directly comparable. The 500 on the math benchmark given in the middle of the year could be compared directly to the 550 on the state test at the end of the year. Put another way, in this case, the statement could be made that the ability level required to achieve the 550 on the state test is higher than the ability level required to achieve the 500 on the benchmark test. Without tests that are on a common scale, such comparisons are not possible. The 550 might represent higher ability than the 500, and then again it might not. Having a common measuring stick could go a long way toward facilitating collaborative work.

A common item bank could also greatly assist smaller districts in their efforts to implement valid and reliable interim assessments for the purpose of informing instruction. The utility of assessment results is greatly aided to the extent that the assessments reflect the kids who actually attend the district’s schools and the instructional priorities of the district. Research has consistently shown that items behave differently when the students change or when instruction changes. Ongoing analysis of test behavior is critical to making sure results are reliable and valid for the kids with whom they will be used. Such analysis is difficult in districts that have only small numbers of kids. Having a common item bank from which to draw could make it much easier to do the analyses needed to back up an assessment initiative in a small district.

Achieving these beneficial results does not require that both the state and district tests be composed entirely of items from this bank. The only requirement is that they both contain a sample of items from the bank. This would leave each district free to include local items reflecting content that may not be of interest to other districts in the state. Easy communication and collaboration need not be sacrificed in order to preserve the flexibility and autonomy that allow districts to make sure that instructional improvement systems meet their priorities.

Anyhow, I had best sign off at this point. This post is already rather lengthy. We would, as always, be interested in hearing the feedback of others about these ideas or about other topics.