Bibliographic record - detail view
Author | Laufer, Batia |
---|---|
Title | Lexical Thresholds and Alleged Threats to Validity: A Storm in a Teacup? |
Source | In: Reading in a Foreign Language, 33 (2021) 2, pp. 238-246 (9 pages) |
Language | English |
Document type | print; online; journal article |
ISSN | 1539-0578 |
Keywords | Second Language Learning; Second Language Acquisition; Second Language Instruction; Foreign Language Instruction; Vocabulary Development; Inferences; Course Descriptions; Reading Comprehension; Correlation; Reading Research; Incidental Learning; Readability; Mastery Learning; Reading Instruction; Reading Materials; Validity; Language Tests; English (Second Language); English Literature; Scores; Morphology (Languages); Morphemes |
Abstract | In the late 1980s, Batia Laufer worked with teachers who believed that understanding 80% of a text's word tokens was enough to understand the text. In response, Laufer set out to calculate the minimal text coverage, i.e., the percentage of running words in a text that a reader should understand in order to comprehend it reasonably well. In 1992, she explored the second facet of the lexical threshold: the number of words learners should know receptively to reach reasonable comprehension. Two decades later, Laufer examined the relationship between text coverage, learners' vocabulary size, and reading comprehension by combining the three variables in one design. Based on the combined evidence, the current suggestion is that the optimal coverage for reading any text is 98% of word tokens and that the minimal coverage is 95%. Recently (Laufer, 2020), Laufer found that 95% coverage could be reached with initial knowledge of 90% of the words in the text and inference of an additional 5%. The findings regarding the lexical thresholds, in terms of coverage and learners' vocabulary size, can be used in planning lexical syllabi, grading reading texts by lexical difficulty, matching particular texts to particular learners, dividing learners by lexical knowledge, and researching comprehension, inferencing, and incidental acquisition. Based on her findings, Laufer presents her arguments (specifically responding to Stuart McLean's 2021 article) in three sections: (1) Coverages, thresholds, and comprehension: Incorrect assumptions? (2) Matching texts to learners' level: A need for rigid mastery levels? and (3) The lexical load of texts and the lexical knowledge of learners: An inappropriate word counting unit? She concludes that it is rewarding to see that issues pertaining to the lexical difficulty of texts, and the learners' lexical knowledge necessary to comprehend them, continue to generate interest and discussion three decades after she began working on them.
Scholarly activity involves revisiting and refining existing concepts and tools and applying different approaches to the same issue. However, older and newer approaches do not have to be either right or wrong, appropriate or inappropriate, valid or invalid. They are complementary, not mutually exclusive. [For McLean's article, "The Coverage Comprehension Model, Its Importance to Pedagogy and Research, and Threats to the Validity with Which It Is Operationalized," see EJ1296462.] (ERIC). |
Notes | National Foreign Language Resource Center at University of Hawaii. 1859 East-West Road #106, Honolulu, HI 96822. e-mail: readfl@hawaii.edu; Web site: https://nflrc.hawaii.edu/rfl/ |
Recorded by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2024/01/01 |
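The coverage figures discussed in the abstract (98% optimal, 95% minimal, 90% known plus 5% inferred) rest on a simple quantity: lexical coverage, the percentage of running word tokens in a text that the reader knows. The sketch below is an illustrative computation only; the tokenizer and the sample word list are assumptions for demonstration, not Laufer's actual instruments or corpora.

```python
import re

def lexical_coverage(text, known_words):
    """Return the percentage of running word tokens in `text`
    that appear in the set `known_words`."""
    # Naive tokenizer: lowercase alphabetic strings count as tokens.
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    known = sum(1 for t in tokens if t in known_words)
    return 100.0 * known / len(tokens)

# Toy example: 10 running tokens, 9 of them known -> 90% coverage,
# i.e., below the suggested 95% minimal threshold before inferencing.
text = "the cat sat on the mat and the dog slept"
known = {"the", "cat", "sat", "on", "mat", "and", "dog"}
print(lexical_coverage(text, known))  # → 90.0
```

Note that coverage is computed over tokens, not types: the three occurrences of "the" each count toward the total, which is why high-frequency function words dominate coverage figures in real texts.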