This paper provides a brief analytical review of the current state of the art in the explainability of artificial intelligence, in the context of recent advances in machine learning and deep learning. The paper starts with a brief historical introduction and a taxonomy, and formulates the main challenges of explainability, building on the four principles of explainable AI recently formulated by the National Institute of Standards and Technology (NIST). Recently published methods related to the topic are then critically reviewed and analyzed. Finally, future directions for research are suggested.
For some professionally, vocationally, or technically oriented careers, curricula delivered in higher education establishments may focus on teaching material related to a single discipline. By contrast, multidisciplinary, interdisciplinary, and transdisciplinary teaching (MITT) results in improved affective and cognitive learning and critical thinking, offering learners/students the opportunity to obtain a broad general knowledge base. Chemistry is a discipline that sits at the interface of science, technology, engineering, mathematics, and medicine (STEMM) subjects (and those aligned with or informed by STEMM subjects). This article discusses the significant potential of including chemistry in MITT activities in higher education, and its real-world importance in personal, organizational, national, and global contexts. It outlines the development and implementation challenges attributable to legacy higher education infrastructures (which call for creative, visionary leadership with strong and supportive management and administrative functions) and to curriculum design that ensures inclusivity and collaboration and is pitched and balanced appropriately. It concludes with future possibilities, notably highlighting that chemistry, as a discipline, underpins industries that have multibillion-dollar turnovers and employ millions of people across the world.
The capability to perform facial analysis from video sequences has significant potential to positively impact many areas of life. One such area is the medical domain, specifically aiding the diagnosis and rehabilitation of patients with facial palsy. With this application in mind, this paper presents an end-to-end framework, named 3DPalsyNet, for the tasks of mouth motion recognition and facial palsy grading. 3DPalsyNet utilizes a 3D CNN architecture with a ResNet backbone for the prediction of these dynamic tasks. Leveraging transfer learning from a 3D CNN pre-trained on the Kinetics dataset for general action recognition, the model is modified to apply joint supervised learning using center and softmax loss concepts. 3DPalsyNet is evaluated on a test set consisting of individuals with varying degrees of facial palsy and mouth motions, achieving promising classification accuracies of 82% and 86% on these two tasks, respectively. The effect of frame duration and loss function on the predictive quality of the proposed 3DPalsyNet was studied, and a shorter frame duration of 8 frames was found to perform best for this specific task. Center loss combined with softmax loss showed improved spatio-temporal feature learning over softmax loss alone, in agreement with earlier work in the spatial domain.
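To illustrate the joint supervision idea described in this abstract, the sketch below combines a standard softmax (cross-entropy) loss with a center loss in PyTorch. This is a generic illustration of the technique, not the authors' implementation of 3DPalsyNet: the class names, feature dimension, and the weighting factor `lam` are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Center loss: penalizes the distance between each feature vector
    and the learnable center of its class, encouraging compact clusters."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per class, updated by the optimizer.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        centers_batch = self.centers[labels]                  # (B, feat_dim)
        return ((features - centers_batch) ** 2).sum(dim=1).mean()

def joint_loss(logits, features, labels, center_loss, lam=0.01):
    """Joint supervision: L = L_softmax + lam * L_center."""
    return F.cross_entropy(logits, labels) + lam * center_loss(features, labels)
```

In this formulation the cross-entropy term drives inter-class separability while the center term reduces intra-class variation; `lam` balances the two and would need tuning for any given task.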
Facial expression verification has been extensively exploited due to its wide application in affective computing, robotic vision, human-machine interaction, and medical diagnosis. With the recent development of the Internet of Things (IoT), there is a need for mobile-targeted facial expression verification, where face scrambling has been proposed for privacy protection during image/video distribution over public networks. Consequently, facial expression verification needs to be carried out in the scrambled domain, bringing new challenges to facial expression recognition. An immediate impact of face scrambling is that conventional semantic facial components become unidentifiable, and 3D face models cannot be clearly fitted to a scrambled image. Hence, the classical facial action coding system cannot be applied to facial expression recognition in the scrambled domain. To cope with the chaotic signals produced by face scrambling, this paper proposes a new approach, Many Graph Embedding (MGE), to discover discriminative patterns in the subspaces of chaotic patterns, where facial expression recognition is carried out as a fuzzy combination of many graph embeddings. In our experiments, the proposed MGE was evaluated on three scrambled facial expression datasets: JAFFE, MUG, and CK+. The benchmark results demonstrate that the proposed method improves recognition accuracy, making it a promising candidate for scrambled facial expression recognition in emerging privacy-protected IoT applications.
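The fusion step can be illustrated abstractly: given per-class scores produced by several graph-embedding subspaces, a weighted (fuzzy) combination yields the final decision. The minimal sketch below is a generic illustration under assumed inputs, not the paper's exact MGE formulation; the function name and the membership weights are hypothetical.

```python
import numpy as np

def fuzzy_combine(scores_per_embedding: np.ndarray, memberships: np.ndarray) -> int:
    """Fuse per-class scores from several graph embeddings.

    scores_per_embedding: (E, C) array of class scores from E embeddings.
    memberships: (E,) array of non-negative fuzzy memberships per embedding.
    Returns the index of the winning class after weighted fusion.
    """
    memberships = memberships / memberships.sum()   # normalize weights to sum to 1
    fused = (memberships[:, None] * scores_per_embedding).sum(axis=0)
    return int(np.argmax(fused))

# Example: three embeddings voting over seven expression classes.
rng = np.random.default_rng(0)
scores = rng.random((3, 7))
weights = np.array([0.5, 0.3, 0.2])
print(fuzzy_combine(scores, weights))
```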