In this contribution, we develop an accurate and effective method for detecting events in a Twitter stream that uses both visual and textual information to improve the mining process. The method monitors a Twitter stream, collects tweets that contain both text and images, and stores them in a database; a mining algorithm is then applied to detect events. The procedure first detects events from text alone, using bag-of-words features weighted by term frequency-inverse document frequency (TF-IDF). It then detects events from images alone, using visual features that include histogram of oriented gradients (HOG) descriptors, grey-level co-occurrence matrix (GLCM) statistics, and a color histogram. A k-nearest neighbours (k-NN) classifier performs the detection in both cases. The final detection decision is made by combining the text-only and image-only detections according to their reliabilities.
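As a rough illustration of this pipeline, the sketch below feeds TF-IDF bag-of-words features into a k-NN text detector and concatenates HOG, GLCM, and color-histogram features for the image side, assuming scikit-learn and scikit-image are available. The fusion weights, k value, and histogram bin counts are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from skimage.feature import hog, graycomatrix, graycoprops

def train_text_detector(tweets, labels):
    """TF-IDF bag-of-words features fed to a k-NN event classifier."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(tweets)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
    return vec, knn

def image_features(rgb, gray):
    """Concatenate HOG, GLCM statistics, and a per-channel color histogram.

    `gray` must be a uint8 2-D array; `rgb` a uint8 H x W x 3 array.
    """
    hog_vec = hog(gray, pixels_per_cell=(16, 16))
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_vec = np.array([graycoprops(glcm, p)[0, 0]
                         for p in ("contrast", "homogeneity", "energy")])
    color_hist = np.concatenate([np.histogram(rgb[..., c], bins=32,
                                              range=(0, 256))[0]
                                 for c in range(3)])
    return np.concatenate([hog_vec, glcm_vec, color_hist])

def fuse(p_text, p_image, w_text=0.6, w_image=0.4):
    """Reliability-weighted fusion of the two detectors (weights illustrative)."""
    return w_text * p_text + w_image * p_image >= 0.5
```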
Segmentation of retinal vessels plays a crucial role in detecting many eye diseases, and reliable computerized implementations are becoming essential for automated retinal disease screening systems. A large number of retinal vessel segmentation algorithms are available; although these methods improve overall accuracy, their sensitivity remains low because low-contrast vessels are poorly segmented, and such vessels require particular attention. In this paper, we propose new preprocessing steps for the precise extraction of retinal blood vessels, and we also test these preprocessing steps on other existing algorithms to observe their impact. Our proposed segmentation module consists of two steps: the first implements and validates the preprocessing module, and the second applies these preprocessing stages to our proposed binarization steps to extract the retinal blood vessels. The preprocessing phase uses traditional image-processing methods to produce a much improved segmented vessel image, and the binarization steps use an image-coherence technique to extract the retinal blood vessels. The proposed method performs well on the publicly available DRIVE and STARE databases. Its novelty lies in being unsupervised while offering an accuracy of around 96% and a sensitivity of 81%, outperforming existing approaches. Owing to the new tactics at each step of the proposed process, this blood vessel segmentation application is suitable for computerized analysis of retinal images, such as automated screening for the early diagnosis of eye disease.
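For orientation, here is a minimal classical (unsupervised) preprocessing and binarization sketch in the spirit of the pipeline above, using OpenCV. It substitutes generic steps (green-channel extraction, CLAHE, morphological top-hat, Otsu thresholding, small-component removal) for the paper's specific preprocessing and image-coherence binarization, which are not reproduced here; the kernel and area-threshold values are assumptions.

```python
import cv2
import numpy as np

def segment_vessels(rgb_fundus):
    """Rough unsupervised vessel segmentation of a uint8 RGB fundus image."""
    green = rgb_fundus[..., 1]                  # vessels contrast best in the green channel
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)               # local contrast enhancement
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    # invert so dark vessels become bright, then top-hat to isolate them
    tophat = cv2.morphologyEx(255 - enhanced, cv2.MORPH_TOPHAT, kernel)
    _, binary = cv2.threshold(tophat, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # drop small speckle components that Otsu thresholding leaves behind
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    clean = np.zeros_like(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= 50:
            clean[labels == i] = 255
    return clean
```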
Natural disasters not only disturb the human ecological system but also destroy property and critical infrastructure, and can even lead to permanent changes in the ecosystem. Disasters can be caused by naturally occurring events such as earthquakes, cyclones, floods, and wildfires. Many deep learning techniques have been applied to detect and classify natural disasters in order to reduce such losses, but detection still faces problems due to the complex and imbalanced structure of the images. To tackle this problem, we propose a multilayered deep convolutional neural network. The proposed model works in two blocks: Block-I convolutional neural network (B-I CNN) detects the occurrence of a disaster, and Block-II convolutional neural network (B-II CNN) classifies the natural disaster's intensity type using different filters and parameters. The model is tested on 4428 natural images, and its performance is reported as sensitivity (SE) 97.54%, specificity (SP) 98.22%, accuracy rate (AR) 99.92%, precision (PRE) 97.79%, and F1-score (F1) 97.97%. The overall accuracy of 99.92% is competitive and comparable with state-of-the-art algorithms.
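The sketch below shows one plausible shape for such a two-block network in PyTorch: a first block with a binary detection head (disaster vs. none) whose features feed a second block with a multi-class intensity head. The layer counts, filter sizes, and the four-class intensity output are assumptions for illustration, not the paper's exact topology.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Conv -> ReLU -> 2x2 max-pool, halving spatial resolution."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.ReLU(), nn.MaxPool2d(2))

class DisasterNet(nn.Module):
    def __init__(self, n_intensity_classes=4):
        super().__init__()
        # Block I: does the image contain a disaster at all? (binary head)
        self.block1 = nn.Sequential(conv_block(3, 16), conv_block(16, 32))
        self.detect_head = nn.Linear(32, 1)
        # Block II: which intensity class, with different filters/parameters
        self.block2 = nn.Sequential(conv_block(32, 64), conv_block(64, 128))
        self.classify_head = nn.Linear(128, n_intensity_classes)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        f1 = self.block1(x)
        detect = self.detect_head(self.pool(f1).flatten(1))      # disaster vs. none
        f2 = self.block2(f1)
        intensity = self.classify_head(self.pool(f2).flatten(1)) # intensity class
        return detect, intensity
```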
The Internet of Things (IoT) is defined as interconnected digital and mechanical devices with intelligent and interactive data-transmission features over a defined network. The IoT's ability to collect, analyze, and mine data into information and knowledge motivates its integration with grid and cloud computing. New job scheduling techniques are crucial for effectively integrating and managing IoT with grid computing, as they provide optimal computational solutions. The computational grid is a modern technology that enables distributed computing to exploit an organization's resources in order to handle complex computational problems. However, the scheduling process is an NP-hard problem due to the heterogeneity of resources and management systems in the IoT grid. This paper proposes a Greedy Firefly Algorithm (GFA) for job scheduling in the grid environment. In the proposed algorithm, a greedy method is utilized as a local search mechanism to improve the convergence rate and the quality of schedules produced by the standard firefly algorithm. Several experiments were conducted using the GridSim toolkit to evaluate the proposed algorithm's performance on real grid-computing workload traces of several sizes: lightweight traces with only 500 jobs, typical traces with 3000 to 7000 jobs, and heavy traces with 8000 to 10,000 jobs. The results revealed that the greedy firefly algorithm significantly reduces the makespan and execution times of the IoT grid scheduling process compared with the other evaluated scheduling methods. Furthermore, the proposed algorithm converges faster on large search spaces, making it suitable for large-scale IoT grid environments.
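A compact sketch of a firefly-style scheduler with a greedy local search, in the spirit of the GFA described above, is given below. The discrete encoding (one machine index per job), the attraction/randomization scheme, and all parameter values are illustrative assumptions, not the paper's formulation; a call such as gfa_schedule([3, 5, 2, 8], [1.0, 2.0]) would return a job-to-machine assignment.

```python
import random

def makespan(schedule, job_lengths, machine_speeds):
    """Completion time of the busiest machine under a job->machine assignment."""
    loads = [0.0] * len(machine_speeds)
    for job, m in enumerate(schedule):
        loads[m] += job_lengths[job] / machine_speeds[m]
    return max(loads)

def greedy_local_search(schedule, job_lengths, machine_speeds):
    """Greedily reassign each job to whichever machine lowers the makespan."""
    best = schedule[:]
    best_cost = makespan(best, job_lengths, machine_speeds)
    for job in range(len(best)):
        for m in range(len(machine_speeds)):
            trial = best[:]
            trial[job] = m
            cost = makespan(trial, job_lengths, machine_speeds)
            if cost < best_cost:
                best, best_cost = trial, cost
    return best

def gfa_schedule(job_lengths, machine_speeds,
                 n_fireflies=20, iters=100, beta=0.5, alpha=0.05):
    """Firefly-style search over discrete schedules with greedy refinement."""
    n_jobs, n_machines = len(job_lengths), len(machine_speeds)
    swarm = [[random.randrange(n_machines) for _ in range(n_jobs)]
             for _ in range(n_fireflies)]
    for _ in range(iters):
        # brightness = inverse makespan; refreshed once per iteration for simplicity
        fitness = [makespan(s, job_lengths, machine_speeds) for s in swarm]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fitness[j] < fitness[i]:        # j is brighter: move i toward j
                    for g in range(n_jobs):
                        if random.random() < beta:
                            swarm[i][g] = swarm[j][g]
                        elif random.random() < alpha:   # small random walk
                            swarm[i][g] = random.randrange(n_machines)
            swarm[i] = greedy_local_search(swarm[i], job_lengths, machine_speeds)
    return min(swarm, key=lambda s: makespan(s, job_lengths, machine_speeds))
```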
The computational cloud aims to move traditional computing from personal computers to cloud providers on the internet, and cloud security is therefore an important research area; confidentiality, integrity, and availability are the main cloud security characteristics addressed. To achieve high service availability, cloud providers apply dynamic load balancing and reactive fault tolerance techniques when building secure cloud services. Dynamic load balancing distributes submitted tasks to virtual machines during task execution, updating each machine's load based on the system's state, while reactive fault tolerance handles a failure only after the fault has actually occurred. Despite the significance of these techniques and mechanisms, few reviews examine them in a systematic, unbiased way, and fewer still focus on integrating dynamic load balancing with reactive fault tolerance. This paper conducts a systematic literature review of the existing literature on reactive fault tolerance, dynamic load balancing, and their integration, covering their basic approaches, types, frameworks, and future directions.
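To make the two mechanisms concrete, here is a toy sketch that combines dynamic load balancing (each new task goes to the currently least-loaded virtual machine) with reactive fault tolerance (tasks are resubmitted only after a VM failure is observed). The VM names, task costs, and the least-loaded policy are illustrative assumptions, not drawn from any reviewed framework.

```python
class Balancer:
    def __init__(self, vm_names):
        self.loads = {vm: 0.0 for vm in vm_names}   # current load per VM
        self.placement = {}                         # task id -> VM

    def submit(self, task_id, cost):
        vm = min(self.loads, key=self.loads.get)    # dynamic: least-loaded VM now
        self.loads[vm] += cost
        self.placement[task_id] = vm
        return vm

    def on_vm_failure(self, failed_vm, task_costs):
        """Reactive fault tolerance: act only after the failure has happened."""
        orphans = [t for t, vm in self.placement.items() if vm == failed_vm]
        del self.loads[failed_vm]
        for t in orphans:
            del self.placement[t]
            self.submit(t, task_costs[t])           # resubmit to surviving VMs

# usage: place three tasks, then react to a VM failure
b = Balancer(["vm1", "vm2", "vm3"])
costs = {1: 4.0, 2: 2.0, 3: 3.0}
for t, c in costs.items():
    b.submit(t, c)
b.on_vm_failure("vm2", costs)
```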