Bibliographic record - detail view

Field | Value |
---|---|
Author(s) | Way, Walter D.; Murphy, Daniel; Powers, Sonya; Keng, Leslie |
Institution | Pearson |
Title | The Case for Performance-Based Tasks without Equating |
Source | (2012), (27 pages) |
Full-text PDF |
Language | English |
Document type | print; online; monograph |
Keywords | Task Analysis; Performance Based Assessment; Technology Uses in Education; Models; Evaluation Methods; Methods Research; Computer Simulation; Program Validation; Growth Models; Mathematics Tests; Science Tests; Grade 10; Grade 11; Multiple Choice Tests; Summative Evaluation; Generalizability Theory; Test Reliability; Test Validity; Replication (Evaluation); Technology Enhanced Learning; Technology Aided Learning; Technology-Supported Learning; Analogy Model; Computer Graphics; School Year 11; Multiple Choice Examinations |
Abstract | Significant momentum exists for next-generation assessments to use technology to develop and deliver performance-based assessments. Many traditional challenges with this assessment approach still apply, including psychometric concerns about performance-based tasks (PBTs): low reliability, measurement inefficiency, and the comparability of different tasks. This paper proposes a model for performance-based assessments in which PBTs are randomly selected from a large pool and treated as comparable without equating. The model assumes that if a large number of PBTs can be randomly assigned, then task-to-task variation across individuals will average out at the group (i.e., classroom and school) level. The model was evaluated empirically using simulations involving a re-analysis of data from a statewide assessment. A set of generalizability-theory (G-theory) analyses was conducted to assess the reliability of average school performance on PBTs and to evaluate how variance due to the randomly assigned tasks compared with other sources of variation. An analysis based on the linear student growth percentiles (SGP) model was used to assess the degree to which the assumption of randomly equivalent tasks held, by comparing school classifications based on PBT growth estimates with three alternative school-level measures. The study findings support the viability of the proposed model for next-generation performance-based assessments used for group-level inferences. (As Provided). |
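The model's core assumption (that with random task assignment, task-to-task difficulty variation averages out at the school level) can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's code or data: pool size, school counts, and variance parameters are all invented for illustration.

```python
import random
import statistics

# Hypothetical toy simulation of the paper's core assumption:
# when each student is assigned a random task from a large pool,
# task difficulty differences average out in the school-level mean.

random.seed(42)

N_TASKS = 200            # large PBT pool (assumed size)
N_SCHOOLS = 50
STUDENTS_PER_SCHOOL = 100

# Each task carries its own difficulty shift (task-to-task variation).
task_effect = [random.gauss(0.0, 1.0) for _ in range(N_TASKS)]

def school_mean_score(school_ability: float) -> float:
    """Mean observed score for one school: every student draws one
    random task and adds individual measurement noise."""
    scores = []
    for _ in range(STUDENTS_PER_SCHOOL):
        task = random.randrange(N_TASKS)   # random task assignment
        noise = random.gauss(0.0, 1.0)     # student-level error
        scores.append(school_ability + task_effect[task] + noise)
    return statistics.mean(scores)

true_ability = [random.gauss(0.0, 0.5) for _ in range(N_SCHOOLS)]
observed = [school_mean_score(a) for a in true_ability]

# Residual = observed school mean minus true school ability. With
# random tasks, its variance shrinks roughly as
# (task variance + noise variance) / students per school.
residuals = [o - a for o, a in zip(observed, true_ability)]
print(f"SD of school-level residual: {statistics.stdev(residuals):.3f}")
```

With these assumed parameters the residual standard deviation is small relative to the unit task-effect spread, consistent with the paper's argument that group-level inferences can tolerate unequated tasks when assignment is random and the pool is large.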
Notes | Pearson. One Lake Street, Upper Saddle River, New Jersey 07458. Tel: 800-848-9500; Web site: http://www.pearsoned.com/ |
Recorded by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2018/02/04 |