
Bibliographic record - detail view

 
Authors: Smith, Kimberly G.; Fogerty, Daniel
Title: Integration of Partial Information within and across Modalities: Contributions to Spoken and Written Sentence Recognition
Source: In: Journal of Speech, Language, and Hearing Research, 58 (2015) 6, pp. 1805-1817 (13 pages)
Availability: PDF full text
Language: English
Document type: print; online; journal article
ISSN: 1092-4388
DOI: 10.1044/2015_JSLHR-H-14-0272
Keywords: Oral Language; Written Language; Sentences; Recognition (Psychology); Word Recognition; Young Adults; Verbal Stimuli; Auditory Perception; Language Processing; Cues; Auditory Discrimination
Abstract: Purpose: This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions. Method: Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal condition, performance was examined when only interrupted text or interrupted speech stimuli were available. In the multimodal condition, performance was examined when both interrupted text and interrupted speech stimuli were concurrently presented. Sentence recognition scores were obtained from simultaneous and delayed response conditions. Results: Significantly better performance was obtained for unimodal speech-only compared with text-only conditions across all proportions preserved. The multimodal condition revealed better performance when responses were delayed. During simultaneous responses, participants received equal benefit from speech information when the text was moderately and significantly degraded. The benefit from text in degraded auditory environments occurred only when speech was highly degraded. Conclusions: The speech signal, compared with text, is robust against degradation likely due to its continuous, versus discrete, features. Allowing time for offline linguistic processing is beneficial for the recognition of partial sensory information in unimodal and multimodal conditions. Despite the perceptual differences between the 2 modalities, the results highlight the utility of multimodal speech + text signals. (As Provided).
Notes: American Speech-Language-Hearing Association (ASHA). 10801 Rockville Pike, Rockville, MD 20852. Tel: 800-638-8255; Fax: 301-571-0457; e-mail: subscribe@asha.org; Web site: http://jslhr.asha.org
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Updated: 2020/01/01