Bibliographic Record - Detail View
Authors | Wei, Yanjun; Jia, Lin; Wang, Jianqin
Title | Visual-Auditory Integration and High-Variability Speech Can Facilitate Mandarin Chinese Tone Identification
Source | In: Journal of Speech, Language, and Hearing Research, 65 (2022) 11, pp. 4096-4111 (16 pages)
Full text | PDF
Additional information | ORCID (Wei, Yanjun)
Language | English
Document type | print; online; journal article
ISSN | 1092-4388 |
Keywords | Mandarin Chinese; Tone Languages; Auditory Perception; Intonation; Speech; Visual Stimuli; Auditory Stimuli; College Students; Second Language Learning; Accuracy; Foreign Countries; Native Language; Pennsylvania (Pittsburgh); China (Beijing)
Abstract | Purpose: Previous studies have demonstrated that tone identification can be facilitated when auditory tones are integrated with visual information that depicts the pitch contours of the auditory tones (hereafter, visual effect). This study investigates this visual effect in combined visual-auditory integration with high- and low-variability speech and examines whether one's prior tonal-language learning experience shapes the strength of this visual effect. Method: Thirty Mandarin-naïve listeners, 25 Mandarin second language learners, and 30 native Mandarin listeners participated in a tone identification task in which participants judged whether an auditory tone was rising or falling in pitch. Moving arrows depicted the pitch contours of the auditory tones. A priming paradigm was used with the target auditory tones primed by four multimodal conditions: no stimuli (A-V-), visual-only stimuli (A-V+), auditory-only stimuli (A+V-), and both auditory and visual stimuli (A+V+). Results: For Mandarin-naïve listeners, the visual effect in accuracy produced under the cross-modal integration (A+V+ vs. A+V-) was superior to a unimodal approach (A-V+ vs. A-V-), as evidenced by a higher d-prime for A+V+ as opposed to A+V-. However, this was not the case in response time. Additionally, the visual effect in accuracy and response time under the unimodal approach only occurred for high-variability speech, not for low-variability speech. Across the three groups of listeners, we found that the less tonal-language learning experience one had, the stronger the visual effect. Conclusion: Our study revealed the visual-auditory advantage and disadvantage of the visual effect and the joint contribution of visual-auditory integration and high-variability speech on facilitating tone perception via the process of speech symbolization and categorization. (As Provided)
Notes | American Speech-Language-Hearing Association. 2200 Research Blvd #250, Rockville, MD 20850. Tel: 301-296-5700; Fax: 301-296-8580; e-mail: slhr@asha.org; Web site: http://jslhr.pubs.asha.org
Indexed by | ERIC (Education Resources Information Center), Washington, DC
Updated | 2024/01/01