Bibliographic Record - Detail View
Author | Woods, Carol M. |
---|---|
Title | Empirical Selection of Anchors for Tests of Differential Item Functioning |
Source | In: Applied Psychological Measurement, 33 (2009) 1, pp. 42-57 (16 pages) |
PDF full text | |
Language | English |
Document type | print; online; journal article |
ISSN | 0146-6216 |
DOI | 10.1177/0146621607314044 |
Keywords | Test Results; Testing; Item Response Theory; Test Bias; Statistical Bias; Maximum Likelihood Statistics; Error of Measurement; Simulation; Multiple Regression Analysis; Evaluation Methods |
Abstract | Differential item functioning (DIF) occurs when items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Methods for testing DIF require matching members of different groups on an estimate of the construct. Preferably, the estimate is based on a subset of group-invariant items called designated anchors. In this research, a quick and easy strategy for empirically selecting designated anchors is proposed and evaluated in simulations. Although the proposed rank-based approach is applicable to any method for DIF testing, this article focuses on likelihood-ratio (LR) comparisons between nested two-group item response models. The rank-based strategy frequently identified a group-invariant designated anchor set that produced more accurate LR test results than those using all other items as anchors. Group-invariant anchors were more difficult to identify as the percentage of differentially functioning items increased. Advice for practitioners is offered. (Contains 4 tables.) (As Provided). |
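The likelihood-ratio comparison described in the abstract is a standard nested-model test: the compact model constrains a studied item's parameters to be equal across groups, the augmented model frees them, and G² = 2(ℓ_augmented − ℓ_compact) is referred to a chi-square distribution. A minimal Python sketch, using hypothetical log-likelihood values and assuming the augmented model frees exactly two item parameters (df = 2, whose chi-square survival function has the closed form exp(−x/2)); the numbers are illustrative, not from the article:

```python
import math

def lr_test_df2(loglik_compact, loglik_augmented):
    """Likelihood-ratio statistic for two nested models, with its p-value.

    Assumes the augmented model adds exactly 2 free parameters, so the
    statistic is compared to a chi-square with df = 2, whose survival
    function is exp(-x / 2) in closed form.
    """
    g2 = 2.0 * (loglik_augmented - loglik_compact)
    p_value = math.exp(-g2 / 2.0)
    return g2, p_value

# Hypothetical log-likelihoods for one studied item (illustrative only):
g2, p = lr_test_df2(loglik_compact=-1050.0, loglik_augmented=-1046.0)
print(f"G^2 = {g2:.1f}, p = {p:.4f}")  # G^2 = 8.0, p = 0.0183
```

With more than two freed parameters, the closed form no longer applies and a general chi-square survival function (e.g. `scipy.stats.chi2.sf`) would be used instead; the choice of anchors only affects which items contribute to the matching score, not the form of the test itself.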
Notes | SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: http://sagepub.com |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Last updated | 2017/4/10 |