
Literature Record - Detail View

 
Authors: Wei, Yanjun; Jia, Lin; Wang, Jianqin
Title: Visual-Auditory Integration and High-Variability Speech Can Facilitate Mandarin Chinese Tone Identification
Source: Journal of Speech, Language, and Hearing Research, 65 (2022) 11, pp. 4096-4111 (16 pages)
Availability: PDF full text
Additional information: ORCID (Wei, Yanjun)
Language: English
Document type: print; online; journal article
ISSN: 1092-4388
Keywords: Mandarin Chinese; Tone Languages; Auditory Perception; Intonation; Speech; Visual Stimuli; Auditory Stimuli; College Students; Second Language Learning; Accuracy; Foreign Countries; Native Language; Pennsylvania (Pittsburgh); China (Beijing)
Abstract: Purpose: Previous studies have demonstrated that tone identification can be facilitated when auditory tones are integrated with visual information that depicts the pitch contours of the auditory tones (hereafter, visual effect). This study investigates this visual effect in combined visual-auditory integration with high- and low-variability speech and examines whether one's prior tonal-language learning experience shapes the strength of this visual effect. Method: Thirty Mandarin-naïve listeners, 25 Mandarin second language learners, and 30 native Mandarin listeners participated in a tone identification task in which participants judged whether an auditory tone was rising or falling in pitch. Moving arrows depicted the pitch contours of the auditory tones. A priming paradigm was used with the target auditory tones primed by four multimodal conditions: no stimuli (A-V-), visual-only stimuli (A-V+), auditory-only stimuli (A+V-), and both auditory and visual stimuli (A+V+). Results: For Mandarin-naïve listeners, the visual effect in accuracy produced under cross-modal integration (A+V+ vs. A+V-) was superior to the unimodal approach (A-V+ vs. A-V-), as evidenced by a higher d prime for A+V+ than for A+V-. However, this was not the case in response time. Additionally, the visual effect in accuracy and response time under the unimodal approach only occurred for high-variability speech, not for low-variability speech. Across the three groups of listeners, we found that the less tonal-language learning experience one had, the stronger the visual effect. Conclusion: Our study revealed the visual-auditory advantage and disadvantage of the visual effect and the joint contribution of visual-auditory integration and high-variability speech to facilitating tone perception via the process of speech symbolization and categorization. (As Provided)
Notes: American Speech-Language-Hearing Association. 2200 Research Blvd #250, Rockville, MD 20850. Tel: 301-296-5700; Fax: 301-296-8580; e-mail: slhr@asha.org; Web site: http://jslhr.pubs.asha.org
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Updated: 2024/01/01