The rapid pace of technological innovation has given the visually impaired hope of finding ways to move around smart cities and of achieving a better quality of life (QoL). Around 110 million people worldwide live with visual impairments, and research continues to adapt in search of innovative solutions that bring the visually impaired closer to total accessibility. Various studies have identified that the requirements of people with visual impairment fall into two major categories: first, the ability to recognise people, which supports social interaction; and second, the ability to carry out routine activities seamlessly, without hindrance. Through the use of artificial intelligence, the availability of data, high bandwidth, a large number of connected devices, and the collaboration of citizens in a smart city, the lives of Visually Impaired Persons (VIPs) can be improved by providing them with more independence and safety. Moreover, smart cities support sustainable economic growth as well as the well-being of their citizens, so their development relies on a strong ICT infrastructure. The rise of smartphones and wearable devices, together with the surge in the adoption of Artificial Intelligence (AI), the Internet of Things (IoT), and Virtual and Augmented Reality (VR/AR), has given VIPs the prospect of a better QoL. A number of studies have already tested these technologies and have shown optimistic results. The main sectors that could be improved to cater for the visually impaired in smart cities are public areas, transportation systems, and home systems. This chapter provides a comprehensive review of, and recommendations on, how a smart city can provide a better QoL for the visually impaired in the near future.
The new ‘normal’ defined during the COVID-19 pandemic has forced us to re-assess how people with special needs, such as those with Autism Spectrum Disorder (ASD), thrive in these unprecedented conditions. These challenging conditions have prompted us to revisit the use of telehealth services to improve the quality of life of people with ASD. This study aims to identify mobile applications that suit the needs of such individuals. The work focuses on identifying features of a number of highly rated mobile applications (apps) designed to assist people with ASD, specifically those features that use Artificial Intelligence (AI) technologies. In this study, 250 mobile apps were retrieved using keywords such as autism, autism AI, and autistic. Of the 250 apps, 46 remained after irrelevant apps were filtered out according to defined elimination criteria based on the intended users: people with ASD, medical staff, and non-medically trained people interacting with people with ASD. To review common functionalities, 25 apps were downloaded and deconstructed, and their features analysed, including eye tracking, facial expression analysis, use of 3D cartoons, haptic feedback, engaging interfaces, text-to-speech, use of Applied Behaviour Analysis therapy, and Augmentative and Alternative Communication techniques, among others. Software developers and healthcare professionals can consider the identified features when designing future support tools for autistic people. This study hypothesises that, by studying these current features, recommendations can be made on how existing applications for people with ASD could be enhanced using AI for (1) progress tracking, (2) personalised content delivery, (3) automated reasoning, (4) image recognition, and (5) Natural Language Processing (NLP). The paper follows the PRISMA methodology, a set of recommendations for reporting systematic reviews and meta-analyses.
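The screening step described above lends itself to a simple, reproducible filter. The sketch below is an illustrative assumption, not the authors' actual pipeline: the record fields, user-group labels, and sample entries are invented stand-ins for app-store metadata.

```python
import pandas as pd

# Tiny stand-in for the 250 retrieved app records; in the study these
# would come from app-store metadata exports (names here are invented).
apps = pd.DataFrame([
    {"name": "AAC Talker",  "description": "AAC app for autistic children",    "target_users": "asd_common_user"},
    {"name": "ClinicNotes", "description": "general clinical note-taking",     "target_users": "medical_staff"},
    {"name": "FaceCoach",   "description": "autism facial expression trainer", "target_users": "carer"},
])

# Elimination criteria: keep ASD-related apps aimed at the three user groups
# named in the study (people with ASD, medical staff, non-medically trained carers).
relevant = apps[
    apps["target_users"].isin(["asd_common_user", "medical_staff", "carer"])
    & apps["description"].str.contains("autis", case=False, na=False)
]
print(relevant["name"].tolist())  # apps retained for the feature review
```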
With only 4.2 million of a population of 7.9 million disabled people in work, a considerable contribution is still required from universities and industry to increase employability among the disabled, in particular by providing adequate career guidance after higher education. This study aims to identify potential predictive features that will improve the chances of engaging disabled school leavers in employment around six months after graduation. MALSEND is an analytical platform built on the UK Destinations of Leavers from Higher Education (DLHE) survey results from 2012 to 2017. The dataset of 270,934 records of students with a known disability provides anonymised information about each student's age range, year of study, disability type, and first-degree result, among other attributes. Using both qualitative and quantitative approaches, the characteristics of disabled candidates during and after their studies were investigated to identify their engagement patterns. The paper focuses on constructing and selecting subsets of features useful for building a good predictor of the engagement of disabled students six months after graduation, using a big data approach with machine learning principles. Features such as age, institution, and disability type, among others, were found to be essential predictors in the proposed employment model. A pilot was developed, showing that Decision Tree Classifier and Logistic Regression models provided the best results for predicting the Standard Occupation Classification (SOC) of a disabled school leaver in the UK, with an accuracy of 96%.
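To make the modelling step concrete, here is a minimal sketch (not the authors' code) of how such a pilot could be assembled with scikit-learn, using the two best-performing models reported above. The column names and records are synthetic stand-ins; the real MALSEND/DLHE data are anonymised and not public.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# Synthetic records mimicking the kinds of attributes named above.
df = pd.DataFrame({
    "age_range":       ["21-24", "25-29", "21-24", "30+", "25-29", "21-24"] * 50,
    "institution":     ["A", "B", "A", "C", "B", "C"] * 50,
    "disability_type": ["visual", "hearing", "mobility", "visual", "learning", "mobility"] * 50,
    "degree_result":   ["2:1", "1st", "2:2", "2:1", "1st", "2:2"] * 50,
    "soc_group":       [2, 3, 4, 2, 3, 4] * 50,  # Standard Occupation Classification group
})
features = ["age_range", "institution", "disability_type", "degree_result"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["soc_group"], test_size=0.2, random_state=42
)

# One-hot encode the categorical predictors, then fit each candidate model.
encode = ColumnTransformer([("cat", OneHotEncoder(handle_unknown="ignore"), features)])
for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=8, random_state=42)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    model = Pipeline([("encode", encode), ("clf", clf)])
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```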
This paper presents a QoS-aware, content-aware, and device-aware non-intrusive medical QoE (m-QoE) prediction model over small cell networks. The proposed model uses a Multilayer Perceptron (MLP) neural network to predict m-QoE, and also serves as a platform for maintaining and optimising acceptable diagnostic quality through a device-aware adaptive video streaming mechanism. The model is trained on input variables such as QoS parameters, content features, and display device characteristics to produce an output value in the form of m-QoE (i.e. a Mean Opinion Score, MOS), and its predictions are evaluated on unseen data. The efficiency of the proposed model is validated through subjective tests carried out by medical experts. The prediction accuracy, assessed via the correlation coefficient and the Root Mean Square Error (RMSE), indicates that the proposed model measures m-QoE close to the visual perception of the medical experts. Furthermore, we address two main research questions: (1) how significant is ultrasound video content type in determining m-QoE? and (2) how much of a role do screen size and device resolution play in medical experts' diagnostic experience? The former is answered by classifying ultrasound video sequences according to their spatiotemporal features, including those features in the proposed prediction model, and validating their significance against the medical experts' subjective ratings. The latter is answered by conducting a novel subjective experiment with the ultrasound video sequences across multiple devices.
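As a rough illustration of the prediction architecture (not the authors' implementation), the sketch below trains an MLP regressor to map QoS, content, and device features to a MOS value and reports RMSE and the correlation coefficient, the two accuracy measures used in the paper. The feature set and the synthetic training data are assumptions made only so the example runs.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Hypothetical inputs: QoS (packet loss %, bitrate kbps, frame rate fps),
# content features (normalised spatial/temporal information), and device
# characteristics (screen diagonal in inches, vertical resolution).
X = np.column_stack([
    rng.uniform(0, 5, n),              # packet loss
    rng.uniform(200, 4000, n),         # bitrate
    rng.choice([15, 25, 30], n),       # frame rate
    rng.uniform(0, 1, n),              # spatial information
    rng.uniform(0, 1, n),              # temporal information
    rng.choice([5.5, 10.5, 13.3], n),  # screen size
    rng.choice([720, 1080, 1600], n),  # resolution
])
# Synthetic MOS on a 1-5 scale, purely to make the sketch trainable.
mos = np.clip(1 + 1e-3 * X[:, 1] / (1 + X[:, 0]) + 0.5 * X[:, 4], 1, 5)

X_train, X_test, y_train, y_test = train_test_split(X, mos, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
pred = model.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"RMSE: {rmse:.3f}  corr: {np.corrcoef(y_test, pred)[0, 1]:.3f}")
```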