
Literature record - detail view

Authors: Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E.
Title: Exploring the Comparability of Multiple-Choice and Constructed-Response Versions of Scenario-Based Assessment Tasks
Source: (2022), (14 pages)
Availability: PDF full text, freely available
Additional information: ORCID (Herrmann-Abell, Cari F.)
Language: English
Document type: print; online; monograph
Keywords: Multiple Choice Tests; Conditioning; Test Items; Item Response Theory; Vignettes; Difficulty Level; Student Evaluation; Scores; Persuasive Discourse; Test Format; Comparative Analysis
Abstract: As implementation of the "Next Generation Science Standards" moves forward, there is a need for new assessments that can measure students' integrated three-dimensional science learning. The National Research Council has suggested that these assessments be multicomponent tasks that utilize a combination of item formats, including constructed-response and multiple-choice. However, little guidance has been provided for determining the relative value or cost effectiveness of those two formats. In this study, students were randomly assigned assessment tasks that contained either a constructed-response or a multiple-choice version of an otherwise equivalent item. Rasch analysis was used to compare the difficulty of these items on the same construct scale. We found that the set of items formed a broad unidimensional scale, but the constructed-response versions were more difficult than their multiple-choice counterparts. This difficulty was found to be partially due to the increased writing demand and the reasoning element in the constructed-response rubric. Students were more likely to recognize a clearly reasoned argument in a multiple-choice item than they were to create that reasoning themselves and communicate it in writing. Our findings can help instrument developers select a set of items that balances the time and effort students must provide during testing and the time and effort scorers need to spend to evaluate and score students' responses. In cases where constructing a response is an essential part of the targeted understanding, as when the target learning goal is to be able to construct an argument or generate a model, constructed-response items are needed, but in other cases, multiple-choice items may be more efficient. (As Provided)
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Updated: 2024/1/01
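The abstract's comparison rests on the Rasch (one-parameter logistic) model, which places person ability and item difficulty on the same logit scale so that item formats can be compared directly. The following is a minimal sketch of that model; the difficulty values and the function name are illustrative assumptions, not figures from the study.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Rasch (1PL) model: probability that a person of ability `theta`
    answers an item of difficulty `b` correctly. Both parameters live
    on the same logit scale, which is what makes cross-format
    difficulty comparisons possible."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical calibrated difficulties on a common scale: the
# constructed-response (CR) version is harder than its
# multiple-choice (MC) counterpart, as the study reports.
b_mc = -0.2   # illustrative value, not from the paper
b_cr = 0.6    # illustrative value, not from the paper

theta = 0.0   # a person of average ability
p_mc = rasch_probability(theta, b_mc)
p_cr = rasch_probability(theta, b_cr)
# The harder CR item yields a lower success probability for the
# same person, which is how "more difficult" is read off the scale.
```

Under this model, a positive gap `b_cr - b_mc` is the logit-scale quantification of the extra demand (writing and explicit reasoning) that the study attributes to the constructed-response format.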