Identification of a person is now an integral part of many computer-based solutions; it is a key characteristic for access control, customized services, and proof of identity. Over the last couple of decades, many new techniques for identifying human faces have been introduced. This approach investigates human face identification based on frontal images by producing ratios from the distances between facial features and their locations. This extended version also investigates identification based on the side profile, extracting the feature sets and expressing them as geometric ratios that are assembled into feature vectors. In the final stage, a weighted mean over these ratios yields a resemblance score. The approach follows an explainable Artificial Intelligence (XAI) paradigm. Findings on a small dataset show that the approach offers promising results, and further research could strongly influence how faces and face profiles are identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate, and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89. This work is an extended version of the paper submitted to ACIIDS 2020.
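To illustrate this style of geometric feature engineering, the following Python sketch builds a ratio-based feature vector from a handful of hypothetical frontal landmarks and compares two such vectors with a weighted mean. The landmark names, the particular ratios, and the weights are illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np

def euclidean(p, q):
    """Euclidean distance between two 2-D landmark points."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def ratio_features(lm):
    """Feature vector of illustrative geometric ratios.

    `lm` maps hypothetical landmark names to (x, y) coordinates;
    the two ratios below are stand-ins for the paper's feature set.
    """
    face_width = euclidean(lm["left_cheek"], lm["right_cheek"])
    return np.array([
        euclidean(lm["left_eye"], lm["right_eye"]) / face_width,    # inter-ocular / face width
        euclidean(lm["nose_tip"], lm["mouth_center"]) / face_width  # nose-mouth / face width
    ])

def resemblance(v1, v2, weights):
    """Weighted-mean similarity in [0, 1]; 1 means identical ratios."""
    rel_diff = np.abs(v1 - v2) / np.maximum(v1, v2)  # ratios are positive
    return 1.0 - float(np.average(rel_diff, weights=weights))

probe   = ratio_features({"left_eye": (110, 90), "right_eye": (190, 90),
                          "nose_tip": (150, 140), "mouth_center": (150, 180),
                          "left_cheek": (80, 150), "right_cheek": (220, 150)})
gallery = ratio_features({"left_eye": (112, 92), "right_eye": (188, 91),
                          "nose_tip": (151, 142), "mouth_center": (149, 178),
                          "left_cheek": (82, 149), "right_cheek": (218, 151)})
print(resemblance(probe, gallery, weights=[0.6, 0.4]))  # close to 1 for similar faces
```

Because the score is a transparent function of named distance ratios, each comparison can be traced back to the individual features that drove it, which is what makes this kind of pipeline amenable to an XAI reading.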
In the oil and gas industry, predicting and classifying oil and gas production for hydrocarbon wells is difficult. Most oil and gas companies use reservoir simulation software to predict future production and devise optimum field development plans. However, this process consumes immense resources and is time consuming: each reservoir prediction experiment needs tens or hundreds of simulation runs, each taking hours or days to finish. In this paper, we attempt to overcome these issues by building machine learning (ML) and deep learning (DL) models to expedite the forecasting of oil and gas production. The dataset was provided by the leading oil producer, Saudi Aramco. Our approach reduced the time cost to a worst case of a few minutes. Our study covered eight different ML and DL experiments; the best R² scores were 0.96 for XGBoost, 0.97 for an artificial neural network (ANN), and 0.98 for a recurrent neural network (RNN).
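As a minimal sketch of one of the cited model families, the Python snippet below trains an XGBoost regressor and reports an R² score. The synthetic data stands in for the proprietary Aramco dataset, and the features and hyperparameters are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the proprietary well data: the real inputs
# (e.g. reservoir and well parameters) are not described here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                          # 8 hypothetical features
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=1000)  # production proxy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted trees; hyperparameters are illustrative defaults.
model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

print("R2:", r2_score(y_test, model.predict(X_test)))
```

A single fit-and-predict cycle like this runs in seconds to minutes on commodity hardware, which is the speedup over hours-long simulation runs that motivates the paper.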
This paper presents the application and issues related to explainable AI in the context of a driving assistive system. One of the key functions of the assistive system is to signal potential risks or hazards to the driver, allowing prompt action and timely attention to problems occurring on the road. The decision making of an AI component needs to be explainable in order to minimise the time it takes for a driver to decide whether any action is necessary to avoid the risk of collision or crash. In the explored cases, the autonomous system does not act as a "replacement" for the human driver; instead, its role is to assist the driver in responding to challenging driving situations, such as difficult manoeuvres or complex road scenarios. The proposed solution validates the XAI approach for the design of a safety and security system that is able to identify and highlight potential risks in autonomous vehicles.