
Literature Record - Detail View
Authors: Guarin, Diego L.; Taati, Babak; Abrahao, Agessandro; Zinman, Lorne; Yunusova, Yana
Title: Video-Based Facial Movement Analysis in the Assessment of Bulbar Amyotrophic Lateral Sclerosis: Clinical Validation
Source: Journal of Speech, Language, and Hearing Research, 65 (2022) 12, pp. 4667-4678 (12 pages)
Availability: PDF full text
Language: English
Document type: print; online; journal article
ISSN: 1092-4388
Keywords: Video Technology; Nonverbal Communication; Diseases; Neurological Impairments; Validity; Medical Evaluation; Classification; Accuracy; Adults
Abstract: Purpose: Facial movement analysis during facial gestures and speech provides clinically useful information for assessing bulbar amyotrophic lateral sclerosis (ALS). However, current kinematic methods have limited clinical application due to equipment costs. Recent advancements in consumer-grade hardware and machine/deep learning have made it possible to estimate facial movements from videos. This study aimed to establish the clinical validity of video-based facial analysis for disease staging classification and estimation of clinical scores. Method: Fifteen individuals with ALS and 11 controls participated in this study. Participants with ALS were stratified into early and late bulbar ALS groups based on their speaking rate. Participants were recorded with a three-dimensional (3D) camera (color + depth) while repeating a simple sentence 10 times. Lip and jaw movements were estimated, and features related to sentence duration and facial movements were used to train a machine learning model for multiclass classification and to predict the Amyotrophic Lateral Sclerosis Functional Rating Scale--Revised (ALSFRS-R) bulbar subscore and speaking rate. Results: The classification model successfully separated healthy controls, the early ALS group, and the late ALS group with an overall accuracy of 96.1%. Video-based features demonstrated a high ability to estimate the speaking rate (adjusted R² = 0.82) and a moderate ability to predict the ALSFRS-R bulbar subscore (adjusted R² = 0.55). Conclusions: The proposed approach based on a 3D camera and machine learning algorithms represents an easy-to-use and inexpensive system that can be included as part of a clinical assessment of bulbar ALS to integrate facial movement analysis with other clinical data seamlessly. (As Provided.)
Notes: American Speech-Language-Hearing Association. 2200 Research Blvd #250, Rockville, MD 20850. Tel: 301-296-5700; Fax: 301-296-8580; e-mail: slhr@asha.org; Web site: http://jslhr.pubs.asha.org
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Updated: 2024/1/01
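The abstract describes a two-part modeling approach: kinematic features (sentence duration, lip and jaw movement) feed a multiclass classifier (controls vs. early vs. late bulbar ALS) and a regressor for continuous clinical scores (speaking rate, ALSFRS-R bulbar subscore). A minimal sketch of that pipeline follows; the model choices (random forest, ridge regression), feature names, and synthetic data are all assumptions for illustration only, as the record does not specify the study's actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-trial kinematic features:
# [sentence duration, lip opening range, jaw displacement, movement speed]
# Group means are shifted to mimic progressive bulbar involvement.
n_per_group = 20
blocks = []
for shift, label in [(0.0, 0), (1.0, 1), (2.0, 2)]:  # control, early, late
    X_g = rng.normal(loc=shift, scale=0.5, size=(n_per_group, 4))
    blocks.append((X_g, np.full(n_per_group, label)))
X = np.vstack([b[0] for b in blocks])
y_class = np.concatenate([b[1] for b in blocks])

# Multiclass classification: healthy controls vs. early vs. late ALS
clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y_class, cv=5).mean()

# Regression: predict a continuous clinical score (e.g., speaking rate)
# from the same features; here the target is a noisy linear combination.
y_reg = X @ np.array([1.5, -0.8, 0.6, 0.3]) + rng.normal(0, 0.2, len(X))
reg = Ridge(alpha=1.0).fit(X, y_reg)
r2 = reg.score(X, y_reg)

print(f"classification accuracy: {acc:.2f}, regression R^2: {r2:.2f}")
```

On such cleanly separated synthetic groups both models score well; the study's reported figures (96.1% accuracy, adjusted R² of 0.82 and 0.55) come from real 3D camera recordings of 26 participants.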