Bibliographic record - Detail view
Authors | Koller, Ingrid; Haberkorn, Kerstin; Rohm, Theresa |
---|---|
Institution | Leibniz-Institut für Bildungsverläufe |
Title | NEPS technical report for reading. Scaling results of starting cohort 6 for adults in main study 2012. |
Source | Bamberg: Leibniz Institute for Educational Trajectories (LIfBi) (2014), 27 pp. |
Full text | PDF |
Series | NEPS working paper. 48 |
Supplements | Bibliography |
Language | English |
Document type | online; monograph; grey literature |
Keywords | competence measurement; test development; assessment; quality; student; undergraduate studies; reading competence; NEPS (National Educational Panel Study) |
Abstract | The National Educational Panel Study (NEPS) aims to investigate the development of competencies across the whole life span. It also develops tests for assessing the different competence domains. In order to evaluate the quality of the competence tests, a wide range of analyses have been performed based on Item Response Theory (IRT). This paper describes the data and results of reanalyzing the adult reading competence test. The adult reading test was first administered in the main study of 2010/11. In 2012, the same test was administered to a refreshment sample, that is, to subjects who had not taken the test in the first study. As in the main study of 2010/11, the reading competence test for the adult cohort consisted of 32 items, which represented different cognitive requirements and text functions and used different response formats. The test was administered to 3,156 persons. Because this paper describes the reanalysis of an existing NEPS reading competence test, the detailed description of the test and the scaling procedure is given in NEPS Working Paper No. 25 (see Hardt, Pohl, Haberkorn, & Wiegand, 2013); the description in the present paper is therefore kept as short as possible. After reporting descriptive statistics of the data, the partial credit model was applied to investigate the quality of the scale. The results showed that the test exhibits high reliability and that the items fit the model. Moreover, measurement invariance could be confirmed for various subgroups. Dimensionality analyses showed that the different cognitive requirements form a unidimensional construct, while there is some evidence for multidimensionality based on text functions. It should be noted that a considerable number of items were not reached by the test takers within the given assessment time and that many items were targeted toward a lower reading ability.
Altogether, as in the main study of 2010/11, the results show good psychometric properties of the reading competence test and support the estimation of a reliable reading competence score. Furthermore, measurement invariance between the two main studies could be confirmed. Therefore, the competence scores for the main study 2012 were estimated with fixed item parameters from the main study of 2010/11 in order to place the subjects of the two studies on the same scale. At the end of the paper, the data available in the Scientific Use File are described and the ConQuest syntax for scaling the data is provided. (Orig.). |
Catalogued by | DIPF \| Leibniz Institute for Research and Information in Education, Frankfurt am Main |
Update | 2020/3 |
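
The linking approach summarized in the abstract — estimating person scores with item parameters held fixed at values from an earlier calibration, so that respondents from two studies land on a common scale — can be sketched for the simple dichotomous Rasch case. This is a minimal illustration only, not the paper's actual procedure: the study used a partial credit model estimated in ConQuest, and the difficulty values below are invented for the example.

```python
import math

# Illustrative fixed item difficulties (in logits), standing in for
# parameters from a previous calibration; these values are made up.
FIXED_DIFFICULTIES = [-1.5, -0.8, -0.2, 0.4, 1.1, 1.9]

def rasch_prob(theta, b):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, tol=1e-6, max_iter=50):
    """Maximum-likelihood ability estimate with item parameters held fixed.

    Newton-Raphson on the Rasch log-likelihood; only theta is estimated,
    so any respondent scored against the same fixed difficulties is
    placed on the same scale as the original calibration sample.
    """
    score = sum(responses)
    if score == 0 or score == len(responses):
        raise ValueError("ML estimate undefined for zero or perfect scores")
    theta = 0.0
    for _ in range(max_iter):
        probs = [rasch_prob(theta, b) for b in difficulties]
        grad = score - sum(probs)               # first derivative of log-likelihood
        info = sum(p * (1 - p) for p in probs)  # test information (negative 2nd derivative)
        step = grad / info
        theta += step
        if abs(step) < tol:
            break
    return theta

if __name__ == "__main__":
    pattern = [1, 1, 1, 0, 1, 0]  # hypothetical response pattern
    print(round(estimate_ability(pattern, FIXED_DIFFICULTIES), 3))
```

At convergence the expected score under the fixed difficulties equals the observed raw score, which is the defining property of the Rasch ML estimate.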