Accurate segmentation of the optic cup (OC) and optic disc (OD) is a prerequisite for calculating the cup-to-disc ratio (CDR). In this paper, a new fully convolutional neural network (FCN) with a multi-scale residual module is proposed. First, a polar coordinate transformation was introduced to balance the cup-to-disc proportion under spatial constraints, and CLAHE was applied to the fundus images for contrast enhancement. Second, the W-Net-R model was proposed as the main framework, with the standard convolution unit replaced by the multi-scale residual module. Finally, a multi-label cost function was used to guide network training. In the experiments, the REFUGE dataset was used for training, validation, and testing. We obtained mean IoU (MIoU) scores of 0.979 and 0.904 for OD and OC segmentation, relative improvements of 4.04% and 3.55% over U-Net, respectively. The experimental results show that the proposed method outperforms other state-of-the-art schemes on OC and OD segmentation and could serve as a prospective tool for early glaucoma screening.
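Below is a minimal sketch of the two preprocessing steps named in the abstract, CLAHE contrast enhancement and a disc-centered polar transform, assuming OpenCV and a hypothetical disc center/radius estimate; it is an illustration, not the authors' released code.

```python
# Sketch: CLAHE enhancement + polar transform of a disc-centered fundus crop.
# Assumes OpenCV (cv2); disc_center and radius come from an assumed localization step.
import cv2
import numpy as np

def preprocess_fundus(image_bgr, disc_center, radius):
    """Enhance contrast with CLAHE, then map the disc region to polar coordinates."""
    # CLAHE on the lightness channel of the LAB color space
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # Polar transform centered on the (estimated) optic disc: rows sweep the angle,
    # columns sweep the radius, which rebalances the cup/disc pixel proportions.
    polar = cv2.warpPolar(enhanced, (256, 256), disc_center, radius,
                          cv2.WARP_POLAR_LINEAR)
    return polar

# Example usage with a dummy image and an assumed disc localization result
if __name__ == "__main__":
    img = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)
    out = preprocess_fundus(img, disc_center=(256.0, 256.0), radius=200.0)
    print(out.shape)  # (256, 256, 3)
```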
Eye movement analysis provides a new way to screen, quantify, and assess diseases. To track and analyze eye-movement scanpaths under different conditions, this paper proposes a Gaussian mixture Hidden Markov Model (G-HMM) to model the eye-movement scanpath during saccades, combined with a Time-Shifting Segmentation (TSS) method for model optimization; Linear Discriminant Analysis (LDA) is then used to perform the recognition and evaluation tasks on the resulting multi-dimensional features. In the experiments, eye-movement sequence datasets collected over 800 real-scene images were used. The results show that the G-HMM method has high specificity for free-search tasks and high sensitivity for prompted object-search tasks, while TSS strengthens the differences among eye-movement characteristics, which benefits eye-movement pattern recognition, especially for search tasks.
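The sketch below shows one way such a pipeline could be assembled with off-the-shelf libraries: task-specific Gaussian-mixture HMMs fitted with hmmlearn's GMMHMM, with per-sequence log-likelihoods used as features for scikit-learn's LDA. The feature choice (fixation coordinates), the synthetic data, and all names are assumptions for illustration, not the authors' implementation; the TSS step is omitted.

```python
# Sketch: per-task G-HMMs over fixation sequences + LDA on log-likelihood features.
import numpy as np
from hmmlearn.hmm import GMMHMM
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_ghmm(sequences, n_states=3, n_mix=2):
    """Fit one Gaussian-mixture HMM to a list of (T_i, 2) fixation-coordinate sequences."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

def loglik_features(models, sequences):
    """Score every sequence under every task-specific G-HMM (length-normalized)."""
    return np.array([[m.score(s) / len(s) for m in models] for s in sequences])

# Toy example: two synthetic "tasks" with different spatial statistics
rng = np.random.default_rng(0)
free_search = [rng.normal(0.0, 1.0, size=(40, 2)) for _ in range(30)]
prompt_search = [rng.normal(2.0, 0.5, size=(40, 2)) for _ in range(30)]

models = [fit_ghmm(free_search), fit_ghmm(prompt_search)]
X = loglik_features(models, free_search + prompt_search)
y = np.array([0] * 30 + [1] * 30)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```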
Firmware formats vary from vendor to vendor, making it difficult to determine which vendor or device a firmware image belongs to, or to identify the firmware running on an embedded device. Current firmware analysis tools mainly distinguish firmware by static signatures in the binary code. However, extracting a signature typically requires careful analysis by professionals and a significant investment of time and effort. In this paper, we use Doc2Vec to extract and process the character information in firmware, combine it with the file size, file entropy, and arithmetic mean of bytes as firmware features, and build a firmware classifier based on the Extra Trees model. The evaluation is performed on 1,190 firmware files from 5 router vendors. The classifier achieves an accuracy of 97.18%, higher than that of current approaches. The results show that the proposed approach is feasible and effective.
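A minimal sketch of the described feature pipeline follows, under the assumptions that the character information is approximated by printable strings embedded with gensim's Doc2Vec and that the classifier is scikit-learn's ExtraTreesClassifier; the firmware blobs and labels below are dummies, not the paper's dataset.

```python
# Sketch: Doc2Vec string embedding + size/entropy/byte-mean features + Extra Trees.
import re
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.ensemble import ExtraTreesClassifier

def printable_strings(blob, min_len=4):
    """Extract printable ASCII strings from a firmware image."""
    return re.findall(rb"[ -~]{%d,}" % min_len, blob)

def byte_stats(blob):
    """File size, Shannon entropy of the byte histogram, and arithmetic mean of bytes."""
    data = np.frombuffer(blob, dtype=np.uint8)
    counts = np.bincount(data, minlength=256)
    probs = counts[counts > 0] / len(blob)
    return [len(blob), -np.sum(probs * np.log2(probs)), float(data.mean())]

def build_features(blobs, vector_size=64):
    """Concatenate a Doc2Vec embedding of the strings with the three byte-level features."""
    docs = [TaggedDocument([t.decode() for t in printable_strings(b)], [i])
            for i, b in enumerate(blobs)]
    d2v = Doc2Vec(docs, vector_size=vector_size, min_count=1, epochs=20)
    return np.array([np.concatenate([d2v.dv[i], byte_stats(b)])
                     for i, b in enumerate(blobs)])

# Toy usage with dummy "firmware" blobs and dummy vendor labels
blobs = [(b"vendorA_httpd " if i % 2 == 0 else b"vendorB_busybox ") * 200
         + bytes(np.random.randint(0, 256, 2048, dtype=np.uint8)) for i in range(10)]
labels = [i % 2 for i in range(10)]
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(build_features(blobs), labels)
```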