Monday, April 27, 2015

Galileo Technology Enhanced Items

Students across the nation are currently experiencing or will soon experience new forms of assessment involving a wide variety of technology enhanced (TE) items. To assist districts and charters in preparing students for the next generation of assessments, ATI is introducing these new forms of assessment in the context of digital curriculums that support instruction aligned to new and changing standards and give students opportunities to practice responding to new forms of assessment.

Within this context, ATI has developed several distinct types of TE items to be introduced in Instructional Dialogs. These include the multi-part item, selectable-text item, sequencing item, expanded selected-response item, performance-based item, and customized TE item. The customized TE item can be used to build a broad array of item types. Moreover, it can be used to rapidly emulate new item types that may emerge in the future.

TE items are available in various subjects for kindergarten through high school, with ATI constructing items at an accelerated rate of over 400 new TE items every month. TE items can be incorporated into formative tests, benchmark tests, pre- and post-tests, dialogs, and digital curriculums. Corrective feedback is available for many TE practice items presented in dialogs. Additionally, item-level psychometrics can be established on Galileo TE items on a continuous basis. A sample of a customized TE item for kindergarten containing an audio recording can be viewed at http://www.ati-online.com/testnew/G5StopSign/G5StopSign.html.

The development of ATI item types has been informed by SBAC and PARCC examples as well as by information from state departments of education. At a more basic level, they have been influenced by the new waves of technology forging the rapidly developing movement toward digital education. Consequently, the ATI TE item types reflect the wide range of TE items likely to be encountered on statewide assessments such as SBAC and PARCC.

TE items are designed to accommodate different levels of cognitive complexity. Items that call for a single correct answer to a question requiring recall or recognition are used to assess knowledge and skills involving low levels of complexity. Items requiring the application of multiple interrelated cognitive processes are used to assess knowledge and skills reflecting high levels of cognitive complexity. Cognitive complexity has important implications for test-taking skills: because these item types are new, lack of familiarity may increase the initial difficulty of TE items. This is why we believe it is useful for students to become familiar with TE item types in an instructional environment, and Instructional Dialogs provide exactly the kind of instructional environment needed to introduce TE items.

Learn more about the benefits of technology enhanced items and digital curriculums.

Monday, April 20, 2015

How to Use Item Parameters to Make Decisions During Test Review

Assessment Technology’s use of Item Response Theory (IRT) provides clients with rich information about items. This information can be made available for items written by district professionals as well as for those written by ATI item writers.

Explanations of these data points are provided below.

Parameter Definitions
Understanding IRT Item Parameters
The best way to understand what the item parameters refer to is to look at an Item Characteristic Curve. On an item characteristic curve, which presents the data for one specific item, student ability (based on performance on the assessment as a whole) is plotted on the horizontal axis, with a mean of 0 and a standard deviation of 1. The probability of answering the item correctly is plotted on the vertical axis. Typically, the probability of answering correctly is relatively low for low-ability students and relatively high for students of higher ability.
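For readers who want to experiment, curves like these can be reproduced with a short script. The sketch below assumes the standard three-parameter logistic (3PL) model, which is consistent with the difficulty (b), discrimination (a), and guessing (c) parameters explained in the sections that follow; the specific values are illustrative and echo the Item 4 example below, not data pulled from Galileo.

import math

def icc(theta, a, b, c):
    # Three-parameter logistic (3PL) model: the probability that a student
    # at ability level theta answers correctly, given discrimination a,
    # difficulty b, and guessing parameter c.
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta equal to the item's difficulty b, the probability sits exactly
# halfway between the guessing floor c and 1.0.
print(icc(theta=0.689, a=1.459, b=0.689, c=0.25))  # 0.625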


Difficulty Parameter
The first example (Item 4) shows a great item as far as the parameters go. The b-value (difficulty) for that item was 0.689, which is a bit on the difficult side, but not too bad. The important point here is that b-values (item difficulty) are on the same scale as student ability. This example is telling us that students at or above 0.689 standard deviations above the mean are likely to get the answer correct, while students below that point on the ability scale are more likely to answer incorrectly. The b-parameter is also known as the location parameter, because it locates the point on the ability scale where students start demonstrating mastery of the concept.

 
Figure 1
Item 4, with a relatively difficult b-value.


Tip: This parameter should have a wide range, generally between -3 and +3, across a test.
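As a quick check of the location interpretation, the 3PL sketch above gives noticeably different probabilities for students half a standard deviation below and above b (again using the illustrative Item 4 values, with c assumed to be 0.25):

# Reuses icc() from the sketch above.
print(icc(theta=0.189, a=1.459, b=0.689, c=0.25))  # ~0.49: below b, likely incorrect
print(icc(theta=1.189, a=1.459, b=0.689, c=0.25))  # ~0.76: above b, likely correct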

Discrimination Parameter
The a-value (discrimination) refers to how well the item discriminates between different ability levels. It is how steep the rise is in the curve showing the probability of answering correctly. Ideally, there is a nice, steep rise in the probability of answering correctly like the one for Item 4. That indicates a dramatic change in the likelihood that a student has mastered the concept, pinpointed within a very narrow range of the ability scale. You can be pretty confident that students above 0.689 standard deviations above the mean “get it” and that students below that point generally don’t. The discrimination parameter for Item 4 is 1.459.

The next example, Item 5, shows an item that doesn’t discriminate quite as well as Item 4. The a-value on that one is 0.53. It’s also a pretty easy item, with a b-value of -1.07. So, on this one, most students are likely to get it correct, unless they’re more than one standard deviation below the mean of the ability scale.
 
Figure 2
Item 5, with a lower a-value and a lower b-value.


It’s not necessarily the case that an easy item automatically has poor discrimination. The final example, Item 8, is an easy item that discriminates very well. Although students at most ability levels are likely to select the correct answer, there is still a dramatic increase in likelihood of answering correctly within a relatively narrow range of the ability scale.
 

Figure 3
Item 8, an easy item that discriminates well.


Tip:  This parameter should be near 1 or above.
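To put a number on “steepness,” the sketch below (continuing the hedged 3PL example from above, with c again assumed to be 0.25) compares how much the probability of a correct answer rises across a one-standard-deviation window centered on each item's difficulty, for the Item 4 and Item 5 a-values:

# Reuses icc() from the sketch above.
for a in (1.459, 0.53):  # Item 4 vs. Item 5 discrimination
    rise = icc(0.5, a, 0.0, 0.25) - icc(-0.5, a, 0.0, 0.25)
    print(f"a = {a}: probability rises {rise:.2f} across one SD around b")
# a = 1.459: rise of about 0.26 (steep curve, sharp separation of ability levels)
# a = 0.53:  rise of about 0.10 (shallow curve, weak separation)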

Guessing Parameter
The guessing parameter (the c-value) is the probability of getting the item correct by just guessing. It defines the lower limit of the item characteristic curve. For a multiple choice item with four answer choices, the guessing parameter should be around 0.25 or, preferably, a bit lower.


Tip:  This parameter should be 0.25 or below for a four-alternative multiple-choice test item.
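Continuing the same hedged sketch, the guessing floor is easy to verify: far below the item's difficulty, the 3PL curve flattens out near c rather than near zero.

# Reuses icc() from the sketch above. Even a very low-ability student
# retains roughly a one-in-four chance on a four-choice item.
print(icc(theta=-3.0, a=1.459, b=0.689, c=0.25))  # ~0.25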

As the Director of Educational Management Services, I have spent the past nine years working with teachers and reviewers to create assessments using the information provided by IRT. I have found that this information, although important, is not the only consideration I use when I complete a test review. The best use of item parameters is to be informed about what the data mean while also using knowledge of a district’s students, teachers, and curriculum to pick the items best suited to a specific population’s needs.

How do I use item parameters to inform my decisions during test review? I believe that the best use of this data is to inform the opinions and choices of a test reviewer. Let me give you an example.

Suppose I receive a comment from an initial review saying that an item is too difficult for students. I check the b-value provided on the Test Review page. If the item’s difficulty is 3.00 or above, the reviewer’s intuition about the item is confirmed. I then look at the b-values of the other items on the assessment. If there are already other items with high b-values on this test, I may decide to replace the item with an easier one.

On the other hand, if I check the b-value and find that it is closer to 1.00, I know that other students have handled this item without a lot of difficulty. In that case, I may decide to leave the item on the assessment and see how students do.
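Purely as an illustration of the decision rules just described (the threshold, function name, and return strings here are hypothetical, not a Galileo feature), the logic might be sketched like this:

def review_decision(b_value, other_b_values, hard_cutoff=3.0):
    # Replace a flagged item only when it is very hard AND the test
    # already carries other very hard items; otherwise keep and monitor.
    if b_value >= hard_cutoff and any(b >= hard_cutoff for b in other_b_values):
        return "replace with an easier item"
    return "keep the item and see how students do"

print(review_decision(3.2, [3.1, 0.4, -0.8]))  # replace with an easier item
print(review_decision(1.0, [3.1, 0.4, -0.8]))  # keep the item and see how students do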

In other words, I believe that reviewer opinions and data should be used equally to inform decisions during test review.

Karyn White                                                        
Director of Educational Management Services
   

Monday, April 13, 2015

Innovative Technology Supporting Common Core and Instructional Effectiveness Implementation

The Galileo K-12 Online Instructional Improvement and Effectiveness System from ATI is comprehensive, standards-based, and research supported. The system provides an array of curriculum, assessment, instructional effectiveness, and reporting tools aligned to Common Core Standards and facilitates advancements in teaching strategies, assessments, and implementation of instructional effectiveness initiatives.

Galileo technology offers rapid and flexible access to innovative assessment, reporting, curriculum, and instructional effectiveness tools. Assessments are valid, reliable, and aligned to standards such as the Common Core and individual state versions, with items reflective of consortium assessment approaches. Dashboard information provides ready access to actionable, reliable, and valid data. Galileo’s Digital Curriculum Platform supports building and implementing local, district/charter-specific online curricula. The instructional effectiveness component offers educator ratings along with reliable and valid measures of student progress for both state- and non-state-tested content areas. It also contains an Evaluation Score Compiler that combines ratings, student performance, and other information into one overall evaluation score.
System management tools assist educators in establishing instructional goals reflecting local curriculum, assessing goal attainment, forecasting standards mastery, and using assessment information to guide differentiated instruction and professional development.
Learn more about Galileo’s innovative technology.  Contact us today for an online demonstration.  Check the ATI events calendar for a conference or seminar near you.

Attention Illinois educators: register for the April 24 regional seminar focusing on Galileo K-12 Online innovative technology addressing Common Core and educator effectiveness implementation, co-hosted by Meredosia-Chambersburg School District #11 (MCSD) and Assessment Technology Incorporated (ATI).

Monday, April 6, 2015

Upcoming Seminar: Common Core and Educator Effectiveness Implementation

ATI and Meredosia-Chambersburg School District #11 are co-hosting the complimentary “A Galileo K-12 Online Journey through Common Core and Educator Effectiveness Implementation” seminar on April 24, 2015. The seminar will begin with an open forum discussion about the challenges and successes of school districts in Illinois and other states implementing Common Core State Standards and educator effectiveness legislation. The discussion will be followed by an in-depth exploration of the Galileo K-12 Online Instructional Improvement and Educator Effectiveness System and of how technology and research innovations within Galileo are being used to support locally designed, standards-based education and educator effectiveness initiatives.

Next, MCSD will discuss its first year of Galileo implementation, highlighting district goals, its approach to implementation, successes, and plans for 2015-16. A question-and-answer period will conclude the seminar.

For more information or to register, click here.