Monday, June 27, 2011

Raw scores are not what they seem

The score reports from state testing have arrived. Your class has scored an average of 680 on math and 700 on reading. Anyone who has been involved in education as a teacher, a student, or the parent of a student has seen testing reports that include these kinds of scores. What do these scores mean? Why are the decisions made from state tests such as AIMS or MCAS based on scores like these instead of something more straightforward, such as percent correct? Why not simply tell students that they got 85% correct?

Unfortunately, scores like percent correct aren’t as straightforward as they seem. Say, for instance, that a new math curriculum has been implemented in the 5th grade. It’s very reasonable to want to track its effectiveness by looking at scores on district tests. What would we make of an observation that the average score on district exams was 65% correct prior to implementation of the new curriculum and 85% correct afterward? It would be very tempting to infer that the new approach was a success. Unfortunately, it is likely that the two exams are not equivalent in difficulty. The difference in scores between the two tests may be entirely the result of easier test questions rather than more skilled students.

The scoring approach that has been applied to statewide testing has an answer to this problem. Wrapped up in the complicated-sounding label of Item Response Theory (IRT) is a technique for analyzing test results that makes it simpler to answer questions like whether there is a measurable change in learning from one year to the next. Item difficulty is evaluated so that 90% correct on a harder test results in a higher score than 90% correct on an easier test. Difficulty can also be accounted for in a fashion that places scores from two tests on the same scale, making them directly comparable. When this is done, one can effectively address questions about just how effective that new curriculum actually is by direct comparison of test scores. Otherwise, you don’t really know what you have.
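
To see how the difficulty adjustment works, here is a minimal sketch in Python of ability estimation under the one-parameter (Rasch) IRT model. The item difficulties and the response pattern are invented for illustration, and the grid-search estimator is deliberately crude; this is not Galileo’s actual scoring procedure.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model:
    P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties):
    """Maximum-likelihood ability estimate via a simple grid search
    over theta in [-4, 4]. responses: 0/1 scores; difficulties: b values."""
    best_theta, best_loglik = 0.0, -float("inf")
    for step in range(-400, 401):
        theta = step / 100.0
        loglik = 0.0
        for x, b in zip(responses, difficulties):
            p = rasch_prob(theta, b)
            loglik += math.log(p if x == 1 else 1.0 - p)
        if loglik > best_loglik:
            best_theta, best_loglik = theta, loglik
    return best_theta

# The same raw score (9 of 10 correct, missing the last item) on two
# tests whose item difficulties differ. All values are hypothetical.
responses = [1] * 9 + [0]
easy_test = [-1.5, -1.2, -1.0, -0.8, -0.6, -0.5, -0.3, -0.2, 0.0, 0.2]
hard_test = [0.2, 0.4, 0.5, 0.7, 0.9, 1.0, 1.2, 1.4, 1.6, 1.8]

print(estimate_theta(responses, easy_test))   # lower ability estimate
print(estimate_theta(responses, hard_test))   # higher estimate, same 90% correct
```

Both runs reflect the same 90% correct, but the estimate from the harder test comes out substantially higher, which is exactly the adjustment that raw percent correct cannot make.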

In addition to providing information about difficulty and the capability to place scores on the same scale, IRT also provides information about what skills children likely need to master first in order to develop to the next level. Imagine that results from that 5th grade math test indicated that students were struggling with probability and fractions. IRT provides a way of looking at performance that takes into account performance on all the other items on a test to determine the likelihood that a student will perform successfully on a given skill. This means that all the information available can be brought to bear in answering the question of what should be planned next.
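
The underlying logic can be sketched in the same framework: an ability estimate derived from all the other items implies a predicted chance of success on any item measuring the skill in question. The ability value and the skill difficulties below are hypothetical.

```python
import math

def rasch_prob(theta, b):
    """P(correct | ability theta, item difficulty b) under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical ability estimate obtained from the student's performance
# on all the other items of the test (e.g., via the grid search above).
theta_hat = -0.3

# Hypothetical difficulty calibrations for items measuring three skills.
skill_difficulties = {
    "whole-number operations": -0.9,
    "fractions": 0.8,
    "probability": 1.1,
}

for skill, b in skill_difficulties.items():
    chance = rasch_prob(theta_hat, b)
    print(f"{skill}: predicted chance of success = {chance:.0%}")
```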

All of these benefits are why we have chosen to make extensive use of IRT for scoring assessments within Galileo. Our objective in designing the reports and tools that make use of IRT-based scores is to make the benefits of IRT for simplifying educational decision making apparent. Toward that end we are designing an increasing number of graphical presentations, as well as a number of planning tools that make it easy to bring all the information at hand together to assist in planning.

Monday, June 20, 2011

Instructional Effectiveness Assessment

You spoke of a need to determine the effectiveness of instruction, and we listened. After many conversations with our users, ATI has taken a proactive approach to client needs by expanding its comprehensive assessment system with the development of Instructional Effectiveness Assessment. The assessments used for instructional effectiveness are standards-based and incorporated into ongoing assessment planning, test construction, test scheduling, administration, data analysis, and reporting activities.

Galileo’s Instructional Effectiveness Assessment offers users several major advantages thanks to ATI’s many decades of experience in educational research and work with a variety of clients.

Benefits of Galileo K-12 Online Instructional Effectiveness Assessment:
• Provides instructional effectiveness information that can assist educational programs to elevate student learning;
• Provides a flexible approach supporting local control of instructional evaluation;
• Relates the evaluation of instruction to the mastery of state standards;
• Provides K-12 coverage for the Instructional Effectiveness Assessment approach;
• Uses advanced mathematical models to explain class and school variations in student academic progress;
• Provides reliable and valid Instructional Effectiveness Assessments that are effective in forecasting standards mastery;
• Minimizes the level of effort required to implement an Instructional Effectiveness Assessment initiative using automation procedures supporting construction, scheduling, scoring, and reporting;
• Uses data on the performance of hundreds of thousands of students to identify the influence on learning of contextual variables such as poverty, attendance, and mobility;
• Maintains history supporting instructional effectiveness analyses over multiple years;
• Provides independent evaluations of each class and school that support and encourage the success of all educators in elevating student achievement;
• Provides mathematical models capable of documenting academic progress for small samples of students.

For more information on ATI’s approach to Instructional Effectiveness and our pilot project, click here.

Monday, June 6, 2011

New CAT Assessments Part of ATI’s Comprehensive Assessment System

It’s time to put down paper-and-pencil tests. Computerized Adaptive Testing (CAT) assessments encourage productive use of time by enabling educators to use the same test with students at all ability levels: CAT assessments administer items based on the ability level of each student. The goal of item selection is to increase the precision of measurement while saving test administration time.

In CAT, prior information about student ability is used as the basis for item selection: as a student answers individual items or sets of items, the upcoming items are selected based on the responses to the items already answered. If the student responds correctly to an item or a group of items, a more challenging item or group of items is presented. If the student responds incorrectly, a less challenging item or group of items is administered. Because of this, CAT provides high levels of efficiency in the assessment of student ability.
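
As a rough sketch of one common selection strategy (not necessarily the rules Galileo uses), the Python below repeatedly administers the unused item whose difficulty is closest to the current ability estimate under a Rasch model, re-estimating ability after each response. The item pool and the simulated student are hypothetical.

```python
import math
import random

def rasch_prob(theta, b):
    """P(correct) under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties):
    """Maximum-likelihood ability estimate via grid search over [-4, 4]."""
    def loglik(theta):
        total = 0.0
        for x, b in zip(responses, difficulties):
            p = rasch_prob(theta, b)
            total += math.log(p if x == 1 else 1.0 - p)
        return total
    return max((s / 100.0 for s in range(-400, 401)), key=loglik)

def adaptive_test(pool, answer_fn, num_items=5):
    """Administer num_items items from pool (item_id -> difficulty),
    each time choosing the unused item nearest the current ability
    estimate and re-estimating ability after every response."""
    theta, responses, used = 0.0, [], []
    remaining = dict(pool)
    for _ in range(num_items):
        item = min(remaining, key=lambda i: abs(remaining[i] - theta))
        used.append(remaining.pop(item))
        responses.append(answer_fn(item))
        theta = estimate_theta(responses, used)
    return theta

# Hypothetical 8-item pool and a simulated student of true ability 1.0.
pool = {f"item{i}": b for i, b in
        enumerate([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])}
student = lambda item: int(random.random() < rasch_prob(1.0, pool[item]))
print(adaptive_test(pool, student))
```

Because each item is pitched near the current estimate, the test spends its time on the most informative questions for that student, which is where the efficiency gain comes from.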

Construction of adaptive tests is automated through the Galileo Assessment Planner, which defines the item pool to be used in selecting items for the adaptive assessment. Automated construction allows a district or school to construct customized adaptive tests to meet unique local needs. For example, an educational system may construct an adaptive test to be used in determining placement in a locally designed advanced algebra course. Automated construction increases the testing options available for adaptive testing. In addition, it supports adaptation to the continually changing standards that are a hallmark of contemporary education.

ATI’s approach to Computerized Adaptive Testing is explored in greater depth in Composition of a Comprehensive Assessment System. You can also experience Galileo for yourself; there are a number of ways to learn first-hand about Galileo K-12 Online. You can:

  • visit the Assessment Technology Incorporated website (ati-online.com)

  • participate in an online overview by registering either through the website or by calling 1.877.442.5453 to speak with a Field Services Coordinator

  • visit us at
    o the Arizona Department of Education Leading the Change Conference June 27 through 29 at the Westin La Paloma in Tucson, Arizona;
    o the Massachusetts Association of School Superintendents’ 17th Annual Executive Institute July 13 through 14 at Mashpee High School in Mashpee, Massachusetts;
    o the Arizona Association of School Business Officials’ 58th Annual Conference and Exposition July 20 through 23 at the JW Marriott Starr Pass Resort, Tucson, Arizona; and
    o the Colorado Association of School Executives 42nd Annual Conference July 26 through 29 at the Beaver Run Resort in Breckenridge, Colorado.

We look forward to chatting with you online and at events.