Monday, November 26, 2012

Customizable Rating Scales and Actionable Dashboard Reporting Technology

School districts across several states have expressed the need to develop, customize, administer and manage implementation of their own teacher and principal evaluation tools. The Galileo Instructional Effectiveness Assessment System (IEAS) addresses this need by giving school districts online access to the Administrator Dashboard with instructional effectiveness widgets.

Among other tools, the dashboard includes: 1) an Educator Proficiency Scale Builder; 2) Proficiency Rating Scale Administration tools; 3) an Educator Proficiency Ratings Progress Report; 4) a Proficiency Rating Results Report; and 5) Staff File Import and View functionality.

These features make it possible for a district to take control over the development and use of teacher and principal performance rating scales aligned to the Interstate New Teacher Assessment and Support Consortium (INTASC) Professional Teaching Standards and the Professional Administrative Standards from the Interstate School Leaders Licensure Consortium (ISLLC).

In addition, if the district already has a measure of teacher performance, it may upload data from this existing measure collected outside of Galileo, or enter the existing measure (with appropriate copyright permissions, if applicable) into Galileo for administration and data collection within the system.

Galileo IEAS Dashboard Reports give school districts the capability to manage the implementation of rating scales and to gain rapid access to a continuous flow of actionable information, not only to identify the proficiency levels of teachers and administrators at the end of the year, but also to inform professional development decisions that enhance student learning throughout the year.

For more information on the components of the Galileo IEAS, click here.

Monday, November 19, 2012

Tips for Effective Test Reviews

ATI provides districts the opportunity to review district-created assessments. This process can be an easy and valuable way to ensure that assessments suit specific district and student needs. Listed below are tips for effective test reviews.

1) Keep the purpose of the assessment in mind.

There are numerous purposes for assessments, and the purpose of the assessment should dictate the review. Consider an example from the kindergarten Common Core math standards.

  • K.OA.2 Solve addition and subtraction word problems.

If the purpose of the benchmark is to test instruction, the items on the assessment should match the district-approved pacing guide. If the pacing guide states that only addition will be taught during the first quarter, then only addition items should be included in the first-quarter assessment.

On the other hand, if the purpose of the assessment is summative, the entire range of the standard should be assessed.
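
As a sketch, the pacing-guide alignment described above amounts to filtering an item pool by what has been taught. The item pool, skill tags, and quarter labels below are invented for illustration, not an ATI data format:

```python
# Hypothetical item pool; each item is tagged with the standard and
# skill it assesses (tags invented for illustration).
item_pool = [
    {"id": 1, "standard": "K.OA.2", "skill": "addition"},
    {"id": 2, "standard": "K.OA.2", "skill": "subtraction"},
    {"id": 3, "standard": "K.CC.4", "skill": "counting"},
]

# Pacing guide: which skills are taught in each quarter.
pacing_guide = {"Q1": {"addition", "counting"}, "Q2": {"subtraction"}}

def items_for_quarter(pool, guide, quarter):
    """Keep only items whose skill appears in the quarter's pacing guide."""
    taught = guide[quarter]
    return [item for item in pool if item["skill"] in taught]

# For a first-quarter benchmark, only the addition and counting items
# survive; the subtraction item waits for the second quarter.
q1_items = items_for_quarter(item_pool, pacing_guide, "Q1")
```

A summative review would instead keep every item aligned to the standard, regardless of quarter.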
2) Trust the data.

ATI provides parameter estimates on items for test reviews. The parameters provide information about the discrimination, difficulty, and guessing associated with each item. Because ATI analyzes item performance based on thousands of students, the data can provide an objective view of an item and how it is likely to function on the district assessment.
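
For readers unfamiliar with how these three parameters work together, the three-parameter logistic (3PL) model commonly used in IRT combines them into a probability of a correct response. The parameter values below are illustrative only, not actual ATI calibrations:

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic (3PL) model: probability that a student
    of ability theta answers correctly, given discrimination a,
    difficulty b, and guessing parameter c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustrative values: a well-discriminating item of average difficulty
# on a four-option multiple-choice test (so guessing is about 0.25).
a, b, c = 1.2, 0.0, 0.25

for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.1f}  P(correct)={p_correct(theta, a, b, c):.2f}")
```

Even a very low-ability student answers correctly about a quarter of the time (the guessing floor), while the discrimination parameter controls how sharply the probability rises around the item's difficulty.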

3) Consider the bigger picture.

Every teacher teaches concepts using favorite vocabulary and language. For this reason, reviewers often approach item review expecting to see specific target words on items or wanting questions asked in specific formats. Keep in mind that target words may change as tests and teaching methodologies vary over time. The bigger picture for students is that they are able to perform the skill and demonstrate knowledge no matter how the item is presented on high-stakes assessments. This will be especially important as traditional state tests transition into the new common assessments created by the SMARTER Balanced and PARCC consortia. As vocabulary and item formats change, it will benefit students to have been exposed to multiple formats for testing specific standards. Consider keeping an item on an assessment even if it contains new or different vocabulary, allowing students to gain valuable test-taking experience while expanding their knowledge. Reviewers may be surprised at how adaptable students really are.

4) Allow students the opportunity to excel.

One trap districts fall into is creating assessments that are too easy. If assessments are too easy, the data cannot show growth from one test to the next. In addition, if all students receive 100 percent on an assessment, the data will not provide information about how to help students reach the next level. Avoiding this pitfall is relatively easy: reviewers should make sure that items spanning a full range of difficulty are included on the assessment.
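
A quick illustration of this ceiling effect, using a simple two-parameter logistic model with invented difficulties (not ATI's actual analysis): on an all-easy test, an average student and a strong student both score near the top, so their results are nearly indistinguishable, while a test spanning the full difficulty range separates them clearly.

```python
import math

def expected_score(theta, difficulties, a=1.0):
    """Expected percent correct for a student of ability theta on a test
    whose items have the given difficulties (2PL, guessing omitted)."""
    probs = [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in difficulties]
    return sum(probs) / len(probs)

easy_test   = [-2.0] * 10                      # all items far below grade level
ranged_test = [-2.0, -1.5, -1.0, -0.5, 0.0,    # items spanning the full
               0.5, 1.0, 1.5, 2.0, 2.5]        # range of difficulty

for theta in (0.0, 1.0):                       # average vs. strong student
    print(f"ability {theta:+.1f}: easy test {expected_score(theta, easy_test):.0%}, "
          f"ranged test {expected_score(theta, ranged_test):.0%}")
```

The gap between the two students is several times larger on the ranged test, which is exactly the headroom needed to measure growth.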


Monday, November 12, 2012

ATI Findings on Predictive Validity and Forecasting Accuracy for the 2011-12 School Year

ATI has released a research brief summarizing current research on the predictive validity of Galileo K-12 Online assessments administered in the 2011-12 school year and the forecasting accuracy of Galileo risk levels based on student performance on these assessments.

The research summarized in the brief was based on data for individual students in grades three through high school in math, reading/English language arts, and science. The sample consisted of the first 26 districts in Arizona, Colorado, and Massachusetts to provide ATI with their statewide assessment data. Collectively, these districts administered 1,105 district-wide assessments.

ATI conducts an Item Response Theory (IRT) analysis for each district-wide assessment which produces a scale score for each student, the Developmental Level (DL) score. Each student is also classified as to their level of risk of failing the statewide assessment based on their performance on all the district-wide assessments they have taken within a given school year. In order of highest to lowest risk of failing the statewide assessment, the possible risk levels comprise “High Risk,” “Moderate Risk,” “Low Risk,” and “On Course.” ATI then evaluates predictive validity by examining the correlation between student DL scores on each district-wide assessment and student scores on the statewide assessment. ATI evaluates forecasting accuracy by examining how students classified at different levels of risk ultimately performed on the statewide assessment.

“Predictive validity analyses examine the strength of the relationship between two measures of student performance, in this case the student DL scores on an assessment in a given grade and content area and the student scores on the statewide assessment in the same grade and content area,” says brief author Sarah Callahan, Ph.D., Research Scientist at Assessment Technology Incorporated. She further states that “The observed correlations in the 26 districts studied suggest that student scores on the 2011-12 Galileo district-wide assessments were strongly related to student scores on the 2012 statewide assessment.”

Key findings include:
  • The mean correlations ranged from 0.69 to 0.78 across grades and content areas, with an overall mean of 0.75, which is considered a high correlation.
  • As student risk level increased, the likelihood of failure on the statewide assessment increased, as illustrated in Figure 1.
  • Overall, Galileo risk levels accurately forecast statewide test performance for 84 percent of students, as shown in Figure 2.
  • Forecasting accuracy was highest in cases where student performance was most consistent.
Dr. Callahan concludes that, based on this research, the 2011-12 Galileo assessments demonstrated adequate levels of predictive validity. The results also suggest that the 2011-12 Galileo risk levels displayed adequate levels of accuracy in forecasting student performance on the statewide assessment. This research is consistent with similar investigations performed in previous years and suggests that Galileo assessments and risk levels continue to demonstrate adequate levels of predictive validity and forecasting accuracy. Learn more by reading the full three-page brief.

Monday, November 5, 2012

Striving for Educational Excellence: The Roles of Innovation, Collaboration, and Empowerment

Educational excellence is essential to maintaining and expanding the nation’s competitiveness in today’s rapidly changing global community. The challenges associated with attaining and maintaining educational excellence call for innovative and collaborative initiatives empowering teachers as educators, administrators as leaders, and students as learners.

In support of attaining and maintaining educational excellence, ATI provides educators with a comprehensive instructional effectiveness system built in collaboration with school districts and in recognition that it is within the local school district and community where the broad sweeping ideas of educational reform are carefully vetted within the context of local reality. It is here where educational stakeholders have an in-depth understanding of the educational needs of students and the professional development aspirations of teachers. And it is here where new policies are transformed into everyday practice. Reform is, after all, a local phenomenon, occurring within each school and within each classroom, one student and one teacher at a time.

Click here to learn more about the ways in which the ATI Instructional Effectiveness Assessment System embedded in the Galileo K-12 Online Instructional Improvement System incorporates innovation and collaboration in empowering educators seeking educational excellence.