Bibliographic Record - Detail View
Authors | Pokropek, Artur; Marks, Gary N.; Borgonovi, Francesca |
Title | How Much Do Students' Scores in PISA Reflect General Intelligence and How Much Do They Reflect Specific Abilities? |
Source | In: Journal of Educational Psychology, 114 (2022) 5, pp. 1121-1135 (15 pages) |
Full-text PDF |
Additional information | ORCID iDs: Pokropek, Artur; Marks, Gary N.; Borgonovi, Francesca |
Language | English |
Document type | print; online; journal article |
ISSN | 0022-0663 |
DOI | 10.1037/edu0000687 |
Keywords | Scores; Intelligence; International Assessment; Secondary School Students; Achievement Tests; Foreign Countries; Cognitive Ability; Reading Achievement; Mathematics Achievement; Science Achievement; Item Response Theory; Poland; Program for International Student Assessment; Cleverness; Achievement; Testing; Tests; Performance Assessment; Performance Testing; Performance; Test Administration; Mathematics Skills; Mathematical Ability; Mathematical Competence |
Abstract | International Large-Scale Assessments (LSAs) allow comparisons of education systems' effectiveness in promoting student learning in specific domains, such as reading, mathematics, and science. However, it has been argued that students' scores in international LSAs mostly reflect general cognitive ability (g). This study examines the extent to which students' scores in reading, mathematics, science, and a Raven's Progressive Matrices test reflect general ability (g) and domain-specific abilities, using data from 3,472 Polish students who participated in the OECD's 2009 Programme for International Student Assessment (PISA) and who were retested with the same PISA instruments, but with a different item set, in 2010. Variance in students' responses to test items is explained better by a bifactor Item Response Theory (IRT) model than by the multidimensional IRT model routinely used to scale PISA and other LSAs. The bifactor IRT model assumes that non-g factors (reading, math, science, and Raven's test) are uncorrelated with g and with each other. The bifactor model generates specific ability factors with more theoretically credible relationships with criterion variables than the standard multidimensional model. Further analyses of the bifactor model indicate that the domain-specific factors are not reliable enough to be interpreted meaningfully. They lie somewhere between unreliable measures of domain-specific abilities and nuisance factors reflecting measurement error. The finding that PISA achievement scores mostly reflect g may arise because PISA aims to test broad abilities in a variety of contexts, or it may be a general characteristic of LSAs and national achievement tests. (As Provided). |
Notes | American Psychological Association. Journals Department, 750 First Street NE, Washington, DC 20002. Tel: 800-374-2721; Tel: 202-336-5510; Fax: 202-336-5502; e-mail: order@apa.org; Web site: http://www.apa.org |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2024/01/01 |