Literature record - detail view

 
Authors: Haberkorn, Kerstin; Pohl, Steffi; Carstensen, Claus H.
Title: Scoring of complex multiple choice items in NEPS competence tests.
Source: In: Blossfeld, Hans-Peter (Ed.); Maurice, Jutta von (Ed.); Bayer, Michael (Ed.); Skopek, Jan (Ed.): Methodological issues of longitudinal surveys. Wiesbaden: Springer VS (2016), pp. 523-540
Additional material: References
Language: English
Document type: printed; online; contribution to an edited volume
ISBN: 978-3-658-11992-8; 978-3-658-11994-2
DOI: 10.1007/978-3-658-11994-2_29
Keywords: Performance measurement; Test; Multiple-choice method; Scaling; Item response theory; Competence measurement; NEPS (National Educational Panel Study)
Abstract: In order to precisely assess the cognitive achievement and abilities of students, different types of items are often used in competence tests. In the National Educational Panel Study (NEPS), test instruments also consist of items with different response formats, mainly simple multiple choice (MC) items in which one answer out of four is correct and complex multiple choice (CMC) items comprising several dichotomous "yes/no" subtasks. The different subtasks of CMC items are usually aggregated to a polytomous variable and analyzed via a partial credit model. When developing an appropriate scaling model for the NEPS competence tests, different questions arose concerning the response formats in the partial credit model. Two relevant issues were how the response categories of polytomous CMC variables should be scored in the scaling model and how the different item formats should be weighted. In order to examine which aggregation of item response categories and which item format weighting best models the two response formats of CMC and MC items, different procedures of aggregating response categories and weighting item formats were analyzed in the NEPS, and the appropriateness of these procedures to model the data was evaluated using certain item fit and test fit indices. Results suggest that a differentiated scoring without an aggregation of categories of CMC items best discriminates between persons. Additionally, for the NEPS competence data, an item format weighting of one point for MC items and half a point for each subtask of CMC items yields the best item fit for both MC and CMC items. In this paper, we summarize important results of the research on the implementation of different response formats conducted in the NEPS. (Orig.)
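The weighting scheme the abstract reports as best-fitting (one point per simple MC item, half a point per dichotomous CMC subtask, with CMC subtasks scored separately rather than aggregated) can be sketched as follows. This is a hedged illustration only, not the NEPS scaling implementation; the function names and response representation are assumptions made for the example.

```python
# Illustrative sketch of the item-format weighting described in the abstract:
# MC items contribute 1 point when solved; each yes/no subtask of a CMC item
# contributes 0.5 points (differentiated scoring, no category aggregation).
# Function names and data representation are hypothetical.

def score_mc(correct: bool) -> float:
    """Simple multiple-choice item: 1 point if the single answer is correct."""
    return 1.0 if correct else 0.0

def score_cmc(subtask_results: list) -> float:
    """Complex multiple-choice item: each dichotomous subtask is scored
    separately and weighted with half a point."""
    return 0.5 * sum(1 for solved in subtask_results if solved)

# Example: one solved MC item plus a CMC item with 3 of 4 subtasks solved.
total = score_mc(True) + score_cmc([True, True, False, True])
# total is 1.0 + 1.5 = 2.5
```

Under this scheme a four-subtask CMC item carries at most 2 points, i.e. twice the weight of a single MC item, which is the trade-off the paper evaluates via item and test fit indices.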
Catalogued by: DIPF | Leibniz-Institut für Bildungsforschung und Bildungsinformation, Frankfurt am Main
Update: New entry 2018-06
 

