Bibliographic record - detail view
Authors | Marks, Anthony M.; Cronje, Johannes C. |
---|---|
Title | Randomised Items in Computer-Based Tests: Russian Roulette in Assessment? |
Source | In: Educational Technology & Society, 11 (2008) 4, pp. 41-50 (10 pages) |
Language | English |
Document type | print; online; journal article |
ISSN | 1436-4522 |
Keywords | Educational Assessment; Educational Testing; Research Needs; Test Items; Computer Assisted Testing; Computer Software; Educational Technology; Cheating; Testing Problems; Test Bias; Test Construction; Multiple Choice Tests; Foreign Countries; Evaluation Methods; Student Evaluation; Test Anxiety; College Students; Veterinary Medical Education; Evaluation Research; South Africa |
Abstract | Computer-based assessments are becoming more commonplace, perhaps as a necessity for faculty to cope with large class sizes. These tests often take place in large computer testing venues in which test security may be compromised. To limit the likelihood of cheating in such venues, randomised presentation of items is automatically programmed into testing software, so that neighbouring screens present different items to the test-taker. This article argues that randomisation of test items can disadvantage students who are randomly presented with difficult items first. Such disadvantage would violate the American Psychological Association's published guidelines on testing and assessment, which call for fairness to test-takers across diverse test modes. Because the chance of a student being randomly assigned difficult items first is small, such disadvantage may be hard to prove. However, even if only one test-taker is affected once during a high-stakes test, the principle of fairness is compromised. This article reports on four instances out of about 400 in which students may have been unfairly advantaged or disadvantaged by receiving a series of easy or difficult items at the beginning of the test. Although the results are not statistically significant, we conclude that more research is needed before one can dismiss what we have named the Item Randomisation Effect. (Contains 1 table and 4 figures.) (As Provided). |
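The software behaviour the abstract describes (each test-taker receiving the same item pool in a different order, so neighbouring screens differ) can be sketched in a few lines. This is an illustration only, not code from the article; the function name `randomised_order` and the per-student seeding scheme are assumptions introduced here.

```python
import random

def randomised_order(item_ids, seed):
    """Return a per-test-taker permutation of the item pool.

    Seeding with a per-student value (e.g. a student number) keeps each
    order reproducible for later review while still differing between
    neighbouring test-takers, which is the anti-cheating rationale.
    """
    rng = random.Random(seed)  # independent RNG per test-taker
    order = list(item_ids)     # copy so the shared pool is untouched
    rng.shuffle(order)
    return order

item_pool = ["Q1", "Q2", "Q3", "Q4", "Q5"]
student_a = randomised_order(item_pool, seed=1001)
student_b = randomised_order(item_pool, seed=1002)
```

The fairness concern the authors raise follows directly from this mechanism: if the pool mixes easy and hard items, some permutations front-load the hard ones, and nothing in a plain shuffle controls for that.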
Notes | International Forum of Educational Technology & Society. Athabasca University, School of Computing & Information Systems, 1 University Drive, Athabasca, AB T9S 3A3, Canada. Tel: 780-675-6812; Fax: 780-675-6973; Web site: http://www.ifets.info |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Updated | 2017-04-10 |