Targeted environmental and ecosystem management remains crucial to dengue control. However, providing detailed environmental information on a large scale to effectively target dengue control efforts remains a challenge. An important piece of such information is the extent of potential dengue vector breeding sites, which consist primarily of open containers such as ceramic jars, buckets, old tires, and flowerpots. In this paper we present the design and implementation of a pipeline that detects outdoor open containers constituting potential dengue vector breeding sites in geotagged images and creates highly detailed container density maps at unprecedented scale. We implement the approach using Google Street View images, which have the advantage of broad coverage and of often being two to three years old, allowing correlation analyses of container counts against historical data from manual surveys. Containers comprising eight of the most common breeding-site types are detected in the images using convolutional neural network transfer learning. Over a test set of images, the object recognition algorithm achieves an F-score of 0.91. Container density counts are generated and displayed on a decision support dashboard. Analyses of the approach are carried out over three provinces in Thailand. The container counts obtained agree well with container counts from available manual surveys. Multivariate linear regression relating the densities of the eight container types to larval survey data predicts larval index values well, with an R-squared of 0.674. To delineate the conditions under which container density counts are indicative of larval counts, we analyze a number of factors affecting the correlation with larval survey data. We conclude that creating container density maps from geotagged images is a promising approach to providing detailed risk maps at large scale.
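The detection step described above can be illustrated with a brief transfer-learning sketch: a detector pretrained on a general-purpose dataset is adapted to the container classes by replacing its classification head and fine-tuning on labeled street-level images. The class list, library choice (PyTorch/torchvision), and hyperparameters below are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal transfer-learning sketch for container detection (assumptions noted below).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical label set standing in for the eight container types studied.
CONTAINER_CLASSES = [
    "ceramic_jar", "bucket", "old_tire", "flowerpot",
    "drum", "bowl", "water_tank", "cup",
]
num_classes = len(CONTAINER_CLASSES) + 1  # +1 for the background class

# Start from a detector pretrained on COCO and swap in a new classification head
# sized for the container classes (requires torchvision >= 0.13 for weights="DEFAULT").
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Fine-tune: pretrained backbone weights are reused, only adapted to the new task.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=5e-4)
```

The downstream regression step could similarly be sketched as an ordinary least-squares fit of larval index values on the eight per-area container densities, though the paper's exact model specification is not reproduced here.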
The explosion of online information driven by recent advances in information processing, storage, and sharing, together with natural language processing and text mining techniques, has enabled stock investors to uncover market movement and volatility from heterogeneous content. For example, a typical stock market investor reads the news, explores market sentiment, and analyzes technical details in order to make a sound decision before buying or selling a particular company's stock. However, capturing dynamic stock market trends is challenging owing to the high fluctuation and non-stationary nature of the stock market. Although existing studies have attempted to enhance stock prediction, few provide a complete decision-support system that lets investors retrieve real-time data from multiple sources and extract insightful information for sound decision-making. To address this challenge, we propose a unified solution for data collection, analysis, and visualization in real-time stock market prediction that retrieves and processes relevant financial data from news articles, social media, and company technical information. We aim to provide not only useful information for stock investors but also meaningful visualizations that enable investors to effectively interpret storyline events affecting stock prices. Specifically, we use an ensemble stacking of diversified machine-learning estimators together with contextual feature engineering to predict the next day's stock prices. Experimental results show that our proposed stock forecasting method outperforms a traditional baseline with an average mean absolute percentage error of 0.93. Our findings confirm that combining an ensemble of machine learning methods with contextual information improves stock prediction performance. Finally, our study could be extended to a wide variety of financial applications that seek to incorporate external insight from contextual information such as large-scale online news articles and social media data.
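As a concrete illustration of the prediction step, the sketch below stacks several diverse regressors over lagged price features and contextual news/social-media features using scikit-learn. The input file, column names, base learners, and hyperparameters are assumptions made for the example and do not reflect the system's actual feature set or model choices.

```python
# Hedged sketch: ensemble stacking with contextual features for next-day price prediction.
import pandas as pd
from sklearn.ensemble import (StackingRegressor, RandomForestRegressor,
                              GradientBoostingRegressor)
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical feature table: lagged prices plus contextual signals such as a
# daily news sentiment score and social-media message volume.
df = pd.read_csv("stock_features.csv")  # assumed input file
feature_cols = ["close_lag1", "close_lag2", "volume_lag1",
                "news_sentiment", "tweet_volume"]
X, y = df[feature_cols].values, df["close_next_day"].values

# Diverse base learners combined under a linear meta-learner.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gbr", GradientBoostingRegressor(random_state=0)),
        ("svr", SVR(C=10.0)),
    ],
    final_estimator=Ridge(alpha=1.0),
    passthrough=True,  # meta-learner also sees the raw features
)

# Chronological split: train on earlier days, evaluate on later days.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)
stack.fit(X_train, y_train)
pred = stack.predict(X_test)
print("MAPE:", mean_absolute_percentage_error(y_test, pred))
```

Passing the raw features through to the meta-learner (`passthrough=True`) is one common way to let the stacked model exploit both the base learners' predictions and the original contextual signals.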