Bibliographic record - detail view
|Authors||Schöps, Katrin; Saß, Steffani|
|Title||NEPS technical report for science. Scaling results of starting cohort 4 in ninth grade.|
|Source||Bamberg: Universität Bamberg (2013), 26 pp.; PDF available as full text|
|Series||NEPS Working Paper. 23|
|Supplements||References, figures, tables, appendix|
|Document type||Online; monograph; grey literature|
|Keywords||Science education; School year 09; Students; Competence measurement; Cohort analysis; Item response theory; Scaling; Empirical study; Quantitative method; Educational research; Germany; NEPS (National Educational Panel Study)|
|Abstract||The National Educational Panel Study (NEPS) aims at investigating the development of competences across the whole life span and designs tests for assessing these different competence domains. In order to evaluate the quality of the competence tests, a wide range of analyses have been performed based on item response theory (IRT). This paper describes the data on scientific literacy for starting cohort 4 in grade 9. Besides descriptive statistics for the data, the paper explains the scaling model applied to estimate competence scores, the analyses performed to investigate the quality of the scale, and the results of these analyses. The science test in grade 9 consisted of 28 multiple-choice and complex multiple-choice items and covered two knowledge domains as well as three different contexts. The test was administered to 14,475 students. A Partial Credit Model was used for scaling the data. Item fit statistics, differential item functioning, Rasch homogeneity, and the test's dimensionality were evaluated to ensure the quality of the test. The results illustrate good item fit values and measurement invariance across various subgroups. Moreover, the test showed high reliability. As the correlations between the two knowledge domains are very high in a multidimensional model, the assumption of unidimensionality seems adequate. One challenge of the test is the lack of very difficult items; overall, however, the results emphasize the good psychometric properties of the science test, thus supporting the estimation of reliable scientific literacy scores. In this paper, the data available in the Scientific Use File are described and the ConQuest syntax for scaling the data is provided. (Orig.)|
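The Partial Credit Model named in the abstract is a standard polytomous Rasch model; for context, a common textbook formulation is sketched below (the notation is assumed here, not taken from the report itself):

```latex
P(X_{i} = x \mid \theta) =
  \frac{\exp\!\left(\sum_{k=0}^{x} (\theta - \delta_{ik})\right)}
       {\sum_{h=0}^{m_i} \exp\!\left(\sum_{k=0}^{h} (\theta - \delta_{ik})\right)},
  \qquad x = 0, 1, \dots, m_i,
```

where $\theta$ is the person's latent ability, $\delta_{ik}$ are the step (threshold) parameters of item $i$ with $m_i$ score categories, and the empty sum for $k = 0$ is defined as $0$. For dichotomous multiple-choice items ($m_i = 1$) this reduces to the ordinary Rasch model, which is why a single Partial Credit Model can scale the mixed item formats described in the abstract.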
|Catalogued by||DIPF | Leibniz-Institut für Bildungsforschung und Bildungsinformation, Frankfurt am Main|