This paper proposes an Eye Gaze Tracking (EGT) technique based on a single eye image that can be easily calibrated and mapped for Human-Computer Interaction (HCI). The technique employs both geometric and trigonometric relationships to find a user's Point of Regard, and then computes user-dependent variables for the final mapping onto a user interface (UI). Experimental results show acceptable accuracy with minimal focal errors.
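A minimal sketch of the calibration-and-mapping step, assuming a simple per-user affine map from pupil-center coordinates in the eye image to screen coordinates; the abstract does not detail the paper's actual geometric and trigonometric model, so the functions, calibration layout, and coordinate values below are illustrative assumptions only:

```python
import numpy as np

def fit_calibration(pupil_pts, screen_pts):
    """Fit a per-user affine map from pupil-center (x, y) to screen (x, y).

    pupil_pts, screen_pts: (N, 2) arrays collected while the user fixates
    known calibration targets. Returns a (3, 2) matrix of user-dependent
    coefficients (a hypothetical stand-in for the paper's user variables).
    """
    A = np.hstack([pupil_pts, np.ones((len(pupil_pts), 1))])  # homogeneous coordinates
    coeffs, *_ = np.linalg.lstsq(A, screen_pts, rcond=None)
    return coeffs

def map_gaze(pupil_xy, coeffs):
    """Map a new pupil-center measurement to an on-screen Point of Regard."""
    return np.array([*pupil_xy, 1.0]) @ coeffs

# Example: 4-point calibration followed by mapping a new eye-image measurement.
pupil = np.array([[120, 80], [200, 82], [118, 140], [198, 142]], dtype=float)
screen = np.array([[0, 0], [1919, 0], [0, 1079], [1919, 1079]], dtype=float)
C = fit_calibration(pupil, screen)
print(map_gaze([160, 110], C))  # approximate screen coordinates of the gaze
```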
The main objective of this paper is pornography recognition using audio features. Unlike most previous attempts, which have concentrated on the visual content of pornographic images or videos, we propose to exploit sound. Audio is particularly important in cases where the visual features are not sufficiently informative of the content (e.g., cluttered scenes, dark scenes, scenes with a covered body). Our hypothesis is that scenes with pornographic content contain audio with features specific to those scenes, whether in the form of speech or other vocal sounds. More specifically, we propose to extract two types of features, (I) pitch and (II) mel-frequency cepstrum coefficients (MFCC), and to train five different variations of the k-nearest neighbor (KNN) supervised classifier on the fusion of these features. We then investigate this hypothesis through a set of evaluations on a porno-sound dataset derived from an existing pornography video dataset. The experimental results confirm the feasibility of the proposed acoustic-driven approach, demonstrating an accuracy of 88.40%, an F-score of 85.20%, and an area under the curve (AUC) of 95% in the task of pornography recognition.
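A minimal sketch of the described pipeline, extracting pitch and MFCC features and training one KNN variant on their fusion; the exact feature dimensions, fusion scheme, file names, and the five KNN configurations are not specified in the abstract, so the choices below (mean pitch plus MFCC mean/std, k=3, distance weighting) are assumptions:

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def extract_features(wav_path, sr=16000, n_mfcc=13):
    """Return a fused feature vector: mean pitch + per-coefficient MFCC mean/std.

    One plausible fusion of the two feature types named in the abstract;
    treat this layout as an assumption rather than the paper's exact design.
    """
    y, sr = librosa.load(wav_path, sr=sr)
    # Pitch (fundamental frequency) via the pYIN estimator; unvoiced frames are NaN.
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    pitch_mean = np.nanmean(f0) if np.any(~np.isnan(f0)) else 0.0
    # MFCCs summarized by per-coefficient mean and standard deviation.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.hstack([pitch_mean, mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical file lists (replace with the porno-sound dataset's wav files);
# labels: 1 = pornographic audio, 0 = other.
train_files = ["porn_001.wav", "porn_002.wav", "other_001.wav", "other_002.wav"]
train_labels = [1, 1, 0, 0]
X = np.vstack([extract_features(f) for f in train_files])
y = np.array(train_labels)

# One KNN variant trained on the fused features; the paper compares five such variants.
knn = KNeighborsClassifier(n_neighbors=3, weights="distance").fit(X, y)
print(knn.predict([extract_features("query.wav")]))  # hypothetical query clip
```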
The exponentially growing volume of pornographic material has brought many challenges to modern daily life, particularly where children and minors have unrestricted access to the internet. In Malaysia, all local and foreign films must obtain suitability approval before distribution or public viewing, and this screening of the visual content of all TV channels imposes a huge censorship cost on service providers such as Unifi TV. To address this issue, this paper proposes an emerging deep learning (DL) technique, the Residual Learning Convolutional Neural Network (ResNet), to automate nudity detection in visual content. The pre-trained 101-layer ResNet model was used for transfer learning on a new binary classification problem of nudity versus non-nudity. The performance of the proposed model is evaluated on a newly created dataset comprising more than 4k nudity and non-nudity images. In experiments on this dataset, the deep learning method achieved its best performance of 70.42% in terms of F-score, 84.04% in terms of accuracy, and 93.72% in terms of AUC.
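A minimal sketch of the described transfer-learning setup, loading an ImageNet-pretrained ResNet-101 and replacing its head for binary nudity classification; the abstract does not state which layers are fine-tuned or the training hyperparameters, so the frozen backbone, optimizer, input size, and batch below are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load the 101-layer ResNet pre-trained on ImageNet for transfer learning.
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)

# Freeze the convolutional backbone so only the new head is trained
# (which layers the paper actually fine-tunes is not stated in the abstract).
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a binary nudity / non-nudity head.
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One hypothetical training step on a batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)   # stand-in for dataset images
labels = torch.randint(0, 2, (8,))     # 1 = nudity, 0 = non-nudity
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```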