In the present paper, we propose a source camera identification (SCI) method for mobile devices based on deep learning. Recently, convolutional neural networks (CNNs) have shown remarkable performance on several tasks such as image recognition, video analysis, and natural language processing. A CNN consists of a set of layers, where each layer is composed of a set of high-pass filters that are applied over the entire input image. This convolution process provides the unique ability to extract features automatically from data and to learn from those features. Our proposal describes a CNN architecture that is able to infer the noise pattern of mobile camera sensors (also known as the camera fingerprint) with the aim of detecting and identifying not only the mobile device used to capture an image (with 98% accuracy), but also which embedded camera captured the image. More specifically, we provide an extensive analysis of the proposed architecture under different configurations. The experiment was carried out using images captured by the cameras of different mobile devices (MICHE-I dataset), and the results obtained demonstrate the robustness of the proposed method.
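The abstract above describes a CNN whose early layers behave like high-pass filters that suppress scene content and expose the sensor noise residual. A minimal sketch of that idea (hypothetical, not the paper's actual architecture) applies a single fixed 3x3 high-pass kernel to a grayscale image:

```python
import numpy as np

def high_pass_residual(image: np.ndarray) -> np.ndarray:
    """Convolve a grayscale image with a simple 3x3 high-pass kernel.

    The kernel's coefficients sum to zero, so smooth image content is
    suppressed and only high-frequency residue (noise) remains.
    """
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float) / 8.0
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
    return out

# A perfectly flat patch carries no high-frequency content,
# so its residual is (numerically) zero everywhere.
flat = np.full((5, 5), 100.0)
print(np.allclose(high_pass_residual(flat), 0.0))  # True
```

In a learned setting, the CNN would optimize many such kernels jointly rather than use a single fixed one, and the residuals would feed subsequent layers that classify the source device.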
Background
In many telemedicine applications, the correct use of medical devices at the point of need is essential to provide an appropriate service. Some applications may require untrained people to interact with medical devices and patients: care delivery in transportation, military actions, home care, and telemedicine training. Appropriate operation of the medical device and correct connection to the patient's body are crucial. In these scenarios, tailored applications of Augmented Reality can offer valid support by guiding untrained people at the point of need. This study aims to explore the feasibility of using Augmented Reality in telemedicine applications by facilitating acceptable use of biomedical equipment by any unskilled person. In particular, a prototype system was built to estimate how untrained users, with limited or no knowledge, can effectively interact with an ECG device and properly place ECG electrodes on a patient's chest.
Methods
An Augmented Reality application was built to support untrained users in performing an ECG test. Simple markers attached to the ECG device and to the patient's thorax allow camera calibration. Once the objects and their pose in space are recognized, the video of the current scene is enriched, in real time, with additional pointers, text boxes, and audio that help the untrained operator perform the appropriate sequence of operations. All the buttons, switches, and ports of the ECG device, together with the locations of the precordial leads, were coded and indicated. Some voice commands were also included to improve usability.
Results
Ten untrained volunteers, supported by the augmented reality application, were able to carry out a complete ECG test, first on a mannequin and then on a real patient, in a reasonable time (about 8 minutes on average). Average positioning errors of the precordial electrodes were less than 3 mm for the mannequin and less than 7 mm for the real patient.
These preliminary findings suggest the effectiveness of the developed application and the validity of the clinical ECG recordings.
Conclusion
This application can be adapted to support the use of other medical equipment as well as other telemedicine tasks, and it could be run on a tablet or a smartphone.
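The positioning errors reported above can be understood as distances between placed and reference electrode locations. A small sketch, under the assumption that the error metric is the mean Euclidean distance over the precordial leads (the abstract does not state the exact formula):

```python
import math

def mean_placement_error(placed, reference):
    """Mean Euclidean distance (in mm) between placed electrode
    positions and their anatomical reference positions."""
    dists = [math.dist(p, r) for p, r in zip(placed, reference)]
    return sum(dists) / len(dists)

# Hypothetical 2D coordinates (mm) for two electrodes.
placed    = [(0.0, 3.0), (4.0, 0.0)]
reference = [(0.0, 0.0), (0.0, 0.0)]
print(mean_placement_error(placed, reference))  # 3.5
```

With real data, the coordinates would come from the marker-based pose estimation described in the Methods section, and the mean would be taken over all six precordial leads and all volunteers.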
Mobile biometrics technologies are nowadays the new frontier for secure use of data and services, and are considered particularly important due to the massive worldwide use of handheld devices. Among the biometric traits with potential for use in mobile settings, the iris/ocular region is a natural candidate, even considering that further advances in the technology are required to meet the operational requirements of such ambitious environments. Aiming to promote these advances, we organized the Mobile Iris Challenge Evaluation (MICHE)-I contest. This paper presents a comparison of the performance of the participants' methods using various Figures of Merit (FoMs). Particular attention is devoted to identifying the image covariates that are likely to cause a decrease in the performance of the compared algorithms. Among these factors, interoperability among different devices plays an important role. The methods (or parts of them) implemented by the analyzed approaches are classified into segmentation (S), which was the main target of MICHE-I, and recognition (R). The paper reports the results observed for S and R individually, as well as for different recombinations (S+R) of such methods. Last but not least, we also present the results obtained by multi-classifier strategies.
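The multi-classifier strategies mentioned above typically combine match scores from several pipelines. A minimal hypothetical sketch of score-level fusion (min-max normalization followed by a weighted sum, one common choice; the paper may use different rules):

```python
import numpy as np

def minmax(scores):
    """Rescale a list of match scores to the [0, 1] range."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(score_lists, weights):
    """Weighted sum of per-matcher normalized scores.

    Each entry of score_lists holds one matcher's scores for the
    same ordered list of gallery candidates.
    """
    return sum(w * minmax(s) for s, w in zip(score_lists, weights))

# Hypothetical scores from two S+R pipelines on different scales.
matcher_a = [0.2, 0.9, 0.5]
matcher_b = [10.0, 40.0, 25.0]
fused = fuse([matcher_a, matcher_b], weights=[0.5, 0.5])
print(int(fused.argmax()))  # 1 -> second candidate wins after fusion
```

Normalizing before fusing matters because raw scores from different matchers (here on [0, 1] and [10, 40] scales) are not directly comparable.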