Bibliographic Record - Detail View
Authors | O'Neill, Thomas R.; Lunz, Mary E. |
---|---|
Title | Examining the Invariance of Rater and Project Calibrations Using a Multi-facet Rasch Model. |
Source | (1996), (14 pages) |
Language | English |
Document type | print; online; monograph |
Keywords | Ability; Benchmarking; Comparative Analysis; Difficulty Level; Equated Scores; Estimation (Mathematics); Interrater Reliability; Item Response Theory; Judges; Probability; Scoring; Test Items; Test Results |
Abstract | To generalize test results beyond a particular test administration, an examinee's ability estimate must be independent of the particular items attempted, and the item difficulty calibrations must be independent of the particular sample of people attempting the items. This stability is a key property of the Rasch model, a latent trait model of probabilities that permits items and persons to be analyzed independently yet still compared within a common frame of reference. An extension of the Rasch model, the multi-facet Rasch model, can estimate examinee ability, item difficulty, and other facets for polytomous data. It was hypothesized that the multi-facet Rasch model would yield invariant (sample-free) slide and judge calibrations for a certification test in histology completed by 364 candidates. Eighteen qualified judges graded the test, which required examinees to prepare laboratory slides. Results of the study confirm that the slide and judge calibrations were essentially stable across diverse samples of examinees. This indicates that slide and judge calibrations can be used to anchor test administrations to a benchmark scale, making the equating of two administrations of the examination possible and supporting the hypothesis. (Contains 2 figures, 2 tables, and 15 references.) (SLD) |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
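
For context, the multi-facet Rasch model the abstract refers to is commonly written, in its standard adjacent-category log-odds form, roughly as follows. This is a sketch of the general formulation (symbols are conventional, not taken from this paper):

```latex
\log\frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k
```

Here $P_{nijk}$ is the probability that candidate $n$, on task (slide) $i$, rated by judge $j$, receives category $k$ rather than $k-1$; $B_n$ is candidate ability, $D_i$ task difficulty, $C_j$ judge severity, and $F_k$ the threshold for category $k$. The invariance hypothesis tested in the paper corresponds to the estimates of $D_i$ and $C_j$ remaining stable across different samples of candidates $n$.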