Literature Record - Detail View

Authors: Colby, Sarah; Orena, Adriel John
Title: Recognizing Voices through a Cochlear Implant: A Systematic Review of Voice Perception, Talker Discrimination, and Talker Identification
Source: In: Journal of Speech, Language, and Hearing Research, 65 (2022) 8, pp. 3165-3194 (30 pages)
Full text: PDF available
Additional information: ORCID (Colby, Sarah); ORCID (Orena, Adriel John)
Language: English
Document type: print; online; journal article
ISSN: 1092-4388
Keywords: Auditory Perception; Recognition (Psychology); Oral Language; Assistive Technology; Hearing Impairments; Auditory Discrimination; Accuracy; Identification
Abstract: Objective: Some cochlear implant (CI) users report having difficulty accessing indexical information in the speech signal, presumably due to limitations in the transmission of fine spectrotemporal cues. The purpose of this review article was to systematically review and evaluate the existing research on talker processing in CI users. Specifically, we reviewed the performance of CI users in three types of talker- and voice-related tasks. We also examined the different factors (such as participant, hearing, and device characteristics) that might influence performance in these specific tasks. Design: We completed a systematic search of the literature with select key words using citation aggregation software to search Google Scholar. We included primary reports that tested (a) talker discrimination, (b) voice perception, and (c) talker identification. Each report must have had at least one group of participants with CIs. Each included study was also evaluated for quality of evidence. Results: The searches resulted in 1,561 references, which were first screened for inclusion and then evaluated in full. Forty-three studies examining talker discrimination, voice perception, and talker identification were included in the final review. Most studies were focused on postlingually deafened and implanted adult CI users, with fewer studies focused on prelingual implant users. In general, CI users performed above chance in these tasks. When there was a difference between groups, CI users performed less accurately than their normal-hearing (NH) peers. A subset of CI users reached the same level of performance as NH participants exposed to noise-vocoded stimuli. Some studies found that CI users and NH participants relied on different cues for talker perception. Within groups of CI users, there is moderate evidence for a bimodal benefit for talker processing, and there are mixed findings about the effects of hearing experience.
Conclusions: The current review highlights the challenges faced by CI users in tracking and recognizing voices and how they adapt to them. Although large variability exists, there is evidence that CI users can process indexical information from speech, though with less accuracy than their NH peers. Recent work has described some of the factors that might ease the challenges of talker processing in CI users. We conclude by suggesting some future avenues of research to optimize real-world speech outcomes. (As Provided).
Notes: American Speech-Language-Hearing Association. 2200 Research Blvd #250, Rockville, MD 20850. Tel: 301-296-5700; Fax: 301-296-8580; e-mail: slhr@asha.org; Web site: http://jslhr.pubs.asha.org
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Updated: 2024/1/01