Bibliographic record - detail view
|Authors||Rohm, Theresa; Freund, Micha; Gnambs, Timo; Fischer, Luise|
|Institution||LIfBi Leibniz-Institut für Bildungsverläufe (Bamberg)|
|Title||NEPS technical report for listening comprehension. Scaling results of Starting Cohort 3 for Grade 9.|
|Source||Bamberg: LIfBi Leibniz-Institut für Bildungsverläufe (2017), 27 pp.; PDF available as full text|
|Series||NEPS Survey Paper 21|
|Supplements||References, figures, tables, appendix|
|Document type||online; monograph; grey literature|
|Keywords||Competence acquisition; Competence development; Evaluation; Item response theory; Listening comprehension exercise; Grade 5; Multiple-choice method; Rasch model; Rasch analysis; Cognitive performance; Listening comprehension; NEPS (National Educational Panel Study)|
|Abstract||The National Educational Panel Study (NEPS) investigates the development of competencies from early childhood to late adulthood. To this end, tests for the assessment of different competence domains are developed. To evaluate the quality of these tests, various analyses based on item response theory (IRT) are performed. This report describes the data and scaling procedures for the listening comprehension test in Starting Cohort 3 (fifth grade) for Grade 9. The listening comprehension test contained 16 items with complex multiple-choice response formats that asked respondents about details of two spoken texts. The test was administered to 4,588 students. Their responses were scaled using the partial credit model. Item fit statistics, differential item functioning, Rasch homogeneity, the test's dimensionality, and local item independence were evaluated to ensure the quality of the test. These analyses showed that the test exhibited acceptable reliability and that the items fitted the model satisfactorily. Furthermore, test fairness could be confirmed for different subgroups. There was a negligible amount of missing responses; in particular, items that were not reached by the respondents were rare. Challenges of the test included the large number of items targeted toward a lower ability in listening comprehension. Further challenges arose from dimensionality analyses based on different cognitive requirements for the items. Overall, the listening comprehension test had acceptable psychometric properties that supported the estimation of reliable listening comprehension scores. Besides the scaling results, this paper also describes the data available in the scientific use file and presents the ConQuest syntax for scaling the data. (Orig.)|
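The abstract notes that responses were scaled with the partial credit model (estimated in ConQuest). As an illustration only, and not the authors' actual estimation setup, the model's category probabilities for a single polytomous item can be sketched in plain Python; the ability value `theta` and step difficulties `deltas` below are hypothetical inputs:

```python
import math

def pcm_probs(theta, deltas):
    """Category response probabilities under the partial credit model.

    theta: person ability on the latent scale.
    deltas: step difficulties of one item (length m for scores 0..m).
    Returns a list of probabilities for each score category.
    """
    # Cumulative sums of (theta - delta_j); score 0 uses the empty sum (= 0).
    psi = [0.0]
    for d in deltas:
        psi.append(psi[-1] + (theta - d))
    expvals = [math.exp(p) for p in psi]
    total = sum(expvals)
    return [e / total for e in expvals]

# Dichotomous special case: with theta equal to the single step difficulty,
# the two categories are equally likely (the Rasch model as a PCM with m = 1).
print(pcm_probs(0.0, [0.0]))  # → [0.5, 0.5]
```

For a complex multiple-choice item scored 0/1/2, `pcm_probs(theta, [d1, d2])` returns three probabilities that sum to one; full estimation of person and item parameters is what ConQuest performs in the report.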
|Indexed by||DIPF | Leibniz-Institut für Bildungsforschung und Bildungsinformation, Frankfurt am Main|