2023
DOI: 10.1089/fpsam.2022.0104
Voluntary and Spontaneous Smile Quantification in Facial Palsy Patients: Validation of a Novel Mobile Application

Cited by 6 publications (3 citation statements); References 36 publications
“…The current literature using AI describes various attempts to integrate various functions into preoperative surgical planning, patient education, and assessment of outcomes. 9–16 No study to our knowledge has assessed the feasibility of a patient interactive AI program for answering questions, troubleshooting complications, and providing clinical recommendations during the perioperative period.…”
Section: Introduction (mentioning)
confidence: 99%
“…8 Patient-based surveys such as the FACE Instrument pose a similar limitation, although they also uniquely capture the individual's experience. 7 While artificial intelligence (AI) and machine learning (ML) hold promise for improved methods of automated quantification of facial function, most currently available ML algorithms are either limited to static image analysis (such as Emotrics or the auto-eFACE 9,10 ), require proprietary marketing software to estimate facial emotional expressions [11][12][13][14][15] or were limited to a single surgical case report 16 following surgical interventions. An objective, open-source, rigorous quantification of dynamic facial function in videos has remained elusive.…”
Section: Introduction (mentioning)
confidence: 99%
“…The most advanced method of automated facial analysis in facial paralysis patients, Emotrics, is limited to static photos only and requires significant manual landmark adjustments [23]. Other attempts to quantify facial function have incorporated proprietary marketing software to estimate facial emotional expressions [24] [25] [26] [27] or were limited to a single surgical case report [28] following surgical interventions. An objective, open-source, rigorous quantification of dynamic facial function in videos has remained elusive [28] [29] [30], as there are multiple challenges to accumulating a sufficiently large training dataset to improve landmark accuracy in the facial paralysis population.…”
Section: Introduction (mentioning)
confidence: 99%