Bibliographic record - detail view
Author(s) | Gnambs, Timo; Fischer, Luise; Rohm, Theresa |
---|---|
Institution | Leibniz-Institut für Bildungsverläufe |
Title | NEPS technical report for reading. Scaling results of starting cohort 4 for grade 12. |
Source | Bamberg: LIfBi Leibniz-Institut für Bildungsverläufe (2017), 38 pp. |
PDF full text |
Series | NEPS Survey Paper. 13 |
Notes | References |
Language | English |
Document type | online; monograph; grey literature |
Keywords | Rasch model; Evaluation; Quantitative analysis; Competence acquisition; Item analysis; Test development; Test procedure; Grade 12; Reading competence; Reading comprehension; Rasch analysis; NEPS (National Educational Panel Study) |
Abstract | The National Educational Panel Study (NEPS) investigates the development of competencies across the life span and develops tests for the assessment of different competence domains. To evaluate the quality of the competence tests, a range of analyses based on item response theory (IRT) were performed. This paper describes the data and scaling procedures for the reading competence test in grade 12 of starting cohort 4 (ninth grade). The reading competence test contained 29 items (distributed across an easy and a difficult booklet) with different response formats representing different cognitive requirements and text functions. The test was administered to 5,805 students, and their responses were scaled using the partial credit model. Item fit statistics, differential item functioning, Rasch homogeneity, the test's dimensionality, and local item independence were evaluated to ensure the quality of the test. These analyses showed that the test exhibited acceptable reliability and that the items fit the model satisfactorily. Furthermore, test fairness could be confirmed for different subgroups. Limitations of the test were the large number of items targeted at a lower reading ability as well as the large percentage of items at the end of the test that were not reached due to time limits. Further challenges concerned the dimensionality analyses based on both text functions and cognitive requirements. Overall, the reading test had acceptable psychometric properties that allowed for the estimation of reliable reading competence scores. Besides the scaling results, this paper also describes the data available in the scientific use file and presents the ConQuest syntax for scaling the data. (Orig.) |
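The abstract states that responses were scaled with the partial credit model, a Rasch-family IRT model. The following is a minimal, self-contained sketch of the idea for the dichotomous Rasch special case only, using simulated data and a crude joint maximum-likelihood gradient ascent. All names and parameters here are illustrative assumptions; the actual NEPS scaling used the (polytomous) partial credit model estimated with marginal maximum likelihood in ConQuest, not this procedure.

```python
import numpy as np

# Illustrative sketch (not the NEPS pipeline): dichotomous Rasch model
# P(X_ij = 1) = 1 / (1 + exp(-(theta_i - beta_j)))
# with simulated persons and items.

rng = np.random.default_rng(0)

n_persons, n_items = 500, 10
theta = rng.normal(0.0, 1.0, n_persons)    # true latent abilities
beta = np.linspace(-1.5, 1.5, n_items)     # true item difficulties

p_true = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
responses = (rng.random((n_persons, n_items)) < p_true).astype(float)

# Crude joint maximum-likelihood estimation by gradient ascent on the
# log-likelihood; real IRT software uses marginal ML instead.
theta_hat = np.zeros(n_persons)
beta_hat = np.zeros(n_items)
lr = 0.5
for _ in range(500):
    p_hat = 1.0 / (1.0 + np.exp(-(theta_hat[:, None] - beta_hat[None, :])))
    resid = responses - p_hat               # observed minus expected
    theta_hat += lr * resid.mean(axis=1)    # ability update
    beta_hat -= lr * resid.mean(axis=0)     # difficulty update
    beta_hat -= beta_hat.mean()             # identification constraint

print(np.round(beta_hat, 2))
```

With 500 simulated persons, the recovered difficulties `beta_hat` should track the true `beta` closely; the mean-zero constraint fixes the scale indeterminacy that is otherwise inherent in the Rasch model.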
Indexed by | DIPF | Leibniz-Institut für Bildungsforschung und Bildungsinformation, Frankfurt am Main |
Update | 2020/2 |