Literature Record - Detail View

 
Authors: Salem, Alexandra C.; Gale, Robert; Casilio, Marianne; Fleegle, Mikala; Fergadiotis, Gerasimos; Bedrick, Steven
Title: Refining Semantic Similarity of Paraphasias Using a Contextual Language Model
Source: Journal of Speech, Language, and Hearing Research, 66 (2023) 1, pp. 206-220 (15 pages)
Full text: available as PDF
Additional information: ORCID (Salem, Alexandra C.)
Language: English
Document type: print; online; journal article
ISSN: 1092-4388
Keywords: Semantics; Computer Software; Aphasia; Classification; Naming; Pictorial Stimuli; Tests; Phonology; Morphology (Languages); Models; Comparative Analysis; Item Analysis; Error Patterns; Accuracy; Diagnostic Tests; Computational Linguistics; Ambiguity (Semantics)
Abstract: Purpose: ParAlg (Paraphasia Algorithms) is a software tool that automatically categorizes a naming error (paraphasia) produced by a person with aphasia in relation to its intended target on a picture-naming test. These classifications (based on lexicality as well as semantic, phonological, and morphological similarity to the target) are important for characterizing an individual's word-finding deficits, or anomia. In this study, we applied a modern language model called BERT (Bidirectional Encoder Representations from Transformers) as a semantic classifier and evaluated its performance against ParAlg's original word2vec model. Method: We used a set of 11,999 paraphasias produced during the Philadelphia Naming Test. We trained ParAlg with word2vec or BERT and compared their performance to that of human annotators. Finally, we evaluated BERT's performance in terms of word-sense selection and conducted an item-level discrepancy analysis to identify which aspects of semantic similarity are most challenging to classify. Results: Compared with word2vec, BERT qualitatively reduced word-sense issues and quantitatively reduced semantic classification errors by almost half. A large percentage of errors were attributable to semantic ambiguity. Of the possible semantic similarity subtypes, responses that were associated with or category coordinates of the intended target were most likely to be misclassified by both models and humans alike. Conclusions: BERT outperforms word2vec as a semantic classifier, partially due to its superior handling of polysemy. This work is an important step toward further establishing ParAlg as an accurate assessment tool. (As Provided)
Notes: American Speech-Language-Hearing Association. 2200 Research Blvd #250, Rockville, MD 20850. Tel: 301-296-5700; Fax: 301-296-8580; e-mail: slhr@asha.org; Web site: http://jslhr.pubs.asha.org
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Last update: 2024/01/01
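
The abstract above contrasts static word2vec embeddings with contextual BERT embeddings for scoring how semantically close a paraphasia is to its intended target, and attributes much of BERT's advantage to its handling of polysemy. The following is a minimal, hypothetical sketch of that contrast, not the authors' ParAlg implementation: it embeds a polysemous word with BERT in two different sentence contexts and compares each vector to a response word via cosine similarity. The model name (bert-base-uncased), example words, sentences, and mean-pooling strategy are illustrative assumptions.

# Minimal sketch of context-sensitive semantic similarity with BERT.
# Assumptions: bert-base-uncased, mean-pooled last-layer subtoken vectors,
# illustrative example words -- NOT the authors' ParAlg code.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_in_context(word: str, sentence: str) -> np.ndarray:
    """Return a contextual vector for `word` by mean-pooling its subtoken
    hidden states from BERT's last layer within `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]            # (seq_len, 768)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for start in range(len(ids) - len(word_ids) + 1):          # locate the word's subtokens
        if ids[start:start + len(word_ids)] == word_ids:
            return hidden[start:start + len(word_ids)].mean(dim=0).numpy()
    return hidden[0].numpy()                                   # fallback: [CLS] vector

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "bat" is polysemous; the surrounding sentence pins down its sense.
animal_bat = embed_in_context("bat", "the bat flew out of the cave at dusk")
sports_bat = embed_in_context("bat", "he swung the bat and hit a home run")
response   = embed_in_context("owl", "the owl flew out of the barn at dusk")

# With contextual embeddings, the animal sense of "bat" is expected to sit
# closer to "owl" than the baseball sense does; a single static word2vec
# vector for "bat" cannot make this distinction.
print("owl vs. bat (animal sense):", round(cosine(response, animal_bat), 3))
print("owl vs. bat (sports sense):", round(cosine(response, sports_bat), 3))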