Literature Record - Detail View

 
Authors: Haberkorn, Kerstin; Pohl, Steffi; Carstensen, Claus H.
Title: Incorporating different response formats of competence tests in an IRT model.
Source: In: Psychological Test and Assessment Modeling, 58 (2016) 2, pp. 223-252
Full text: PDF available
Supplements: references, figures, tables
Language: English
Document type: print; online; journal article
ISSN: 2190-0493; 2190-0507
Keywords: competence measurement; test item; answer sheet; scaling; item response theory; item analysis; dimensional analysis; multiple-choice method
Abstract: Competence tests within large-scale assessments usually contain various task formats to measure the participants' knowledge. Two response formats that are frequently used are simple multiple choice (MC) items and complex multiple choice (CMC) items. Whereas simple MC items comprise a number of response options with one being correct, CMC items consist of several dichotomous true-false subtasks. When incorporating these response formats in a scaling model, they are mostly assumed to be unidimensional. In empirical studies, different empirical and theoretical schemes of weighting CMC items in relation to MC items have been applied to construct the overall competence score. However, the dimensionality of the two response formats and the different weighting schemes have only rarely been evaluated. The present study, thus, addressed two questions of particular importance when implementing MC and CMC items in a scaling model: Do the different response formats form a unidimensional construct and, if so, which of the weighting schemes considered for MC and CMC items appropriately models the empirical competence data? Using data of the National Educational Panel Study, we analyzed scientific literacy tests embedding MC and CMC items. We cross-validated the findings on another competence domain and on another large-scale assessment. The analyses revealed that the different response formats form a unidimensional measure across contents and studies. Additionally, the a priori weighting scheme of one point for MC items and half points for each subtask of CMC items best modeled the response formats' impact on the competence score and resembled the empirical competence data well. (Orig.)
Recorded by: DIPF | Leibniz-Institut für Bildungsforschung und Bildungsinformation
Update: new entry 2018-07
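The a priori weighting scheme the abstract identifies as best-fitting (one point per correct MC item, half a point per correct CMC subtask) can be illustrated with a minimal raw-score sketch. This is a hypothetical illustration, not the authors' actual scaling code; the function name and data layout are assumptions.

```python
# Hypothetical sketch of the a priori weighting scheme described in the
# abstract: each simple multiple-choice (MC) item contributes 1 point if
# answered correctly; each true-false subtask of a complex multiple-choice
# (CMC) item contributes 0.5 points. Names and structure are illustrative.

def weighted_sum_score(mc_correct, cmc_subtask_correct):
    """Compute a weighted raw competence score.

    mc_correct: list of bools, one per simple MC item.
    cmc_subtask_correct: list of lists of bools, one inner list per CMC
        item, one bool per true-false subtask of that item.
    """
    mc_points = sum(1.0 for correct in mc_correct if correct)
    cmc_points = sum(
        0.5
        for subtasks in cmc_subtask_correct
        for correct in subtasks
        if correct
    )
    return mc_points + cmc_points


# Example: 3 of 4 MC items correct, plus two CMC items with
# 4 and 3 subtasks correct, respectively.
score = weighted_sum_score(
    [True, True, False, True],
    [[True, True, True, True], [True, True, True, False]],
)
# 3 * 1.0 + 7 * 0.5 = 6.5
```

In an IRT scaling model such as those examined in the study, this weighting would enter through the item scoring rather than a simple sum, but the relative weights (1 vs. 0.5) are the same.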