Digital marketing is constantly evolving: new tools regularly appear alongside changing consumer habits and the multiplication of data, often forcing marketers to sift through more data than they can turn into the overview needed for impactful business decisions. Following the machine learning revolution in other real-world applications, machine learning is now reshaping the digital marketing landscape; 84% of marketing organizations were implementing or expanding their use of machine learning in 2018 [1], and it has become easier to predict and analyze consumer behavior with high accuracy. In this work, we first establish a state of the art of the main and most widely used machine learning techniques in digital marketing strategies, then show how machine learning tools can be applied at large scale for marketing purposes by analyzing extremely large data sets. Integrating ML into digital marketing practice helps marketers better understand their target consumers and optimize interactions with them.
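The abstract does not name specific techniques, so as a minimal sketch of the kind of consumer-behavior prediction it refers to, the following trains a logistic-regression churn classifier on synthetic customer data with scikit-learn. The feature names, data, and threshold are all hypothetical illustrations, not anything from the paper.

```python
# Minimal sketch (hypothetical data): predicting consumer behavior
# (here, churn) from behavioral features with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical behavioral features: visits/week, avg. basket value,
# days since last purchase, email click-through rate.
X = np.column_stack([
    rng.poisson(3, n),
    rng.gamma(2.0, 25.0, n),
    rng.exponential(30.0, n),
    rng.beta(2, 8, n),
])
# Synthetic churn label: long-inactive customers churn more often.
logit = 0.04 * X[:, 2] - 0.5 * X[:, 0] - 3.0 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```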
Processing a data stream in real time is crucial for many applications, yet handling large volumes of data from heterogeneous sources such as sensor networks, web traffic, social media, and video streams remains a major challenge. The core problem is that big data systems are built on Hadoop technology, in particular MapReduce, for processing. MapReduce is a highly scalable and fault-tolerant framework: it processes large amounts of data in batches and provides deep insight into historical data, but it can only operate on a bounded data set. MapReduce is therefore not appropriate for real-time stream processing, where data must be processed the moment they arrive to enable fast responses and good decision making. Hence the need for a new architecture that allows real-time data processing with high throughput and low latency. The main aim of this paper is to give a clear survey of the open-source technologies available for real-time data stream processing, including their system architectures. We also propose a new architecture, grounded in our comparisons, for real-time processing powered by machine learning and Storm technology.
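To make the batch-versus-stream distinction concrete, here is a conceptual sketch in plain Python of the spout/bolt style of processing that systems like Storm implement. It is not Storm's actual API: a hypothetical "spout" emits events as they arrive and a "bolt" maintains a sliding time window, emitting an aggregate with low latency instead of waiting for a complete data set as a batch job would.

```python
# Conceptual sketch (not Storm's actual API) of real-time stream
# processing: events are aggregated as they arrive, over a sliding
# time window, rather than in a bounded batch.
import random
import time
from collections import deque

def spout(n_events: int):
    """Hypothetical source: yields (timestamp, value) as events arrive."""
    for _ in range(n_events):
        yield time.time(), random.gauss(100.0, 15.0)

def windowed_average_bolt(events, window_seconds: float = 1.0):
    """Emit the running average over a sliding time window."""
    window = deque()
    for ts, value in events:
        window.append((ts, value))
        # Evict events that have fallen out of the window.
        while window and ts - window[0][0] > window_seconds:
            window.popleft()
        yield ts, sum(v for _, v in window) / len(window)

if __name__ == "__main__":
    for ts, avg in windowed_average_bolt(spout(10)):
        print(f"{ts:.3f}  running avg = {avg:.2f}")
```

In a real deployment the spout would read from a message broker and the bolts would be distributed across workers; the point here is only that each event is handled on arrival.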
Liver segmentation in CT images has multiple clinical applications, and its scope is expanding. Clinicians can employ segmentation for pathological diagnosis of liver disease, surgical planning, visualization, and volumetric assessment to select the appropriate treatment. However, segmentation of the liver remains a challenging task due to the low contrast of medical images, tissue similarity with neighboring abdominal organs, and high variability in scale and shape. Recently, deep learning models have become the state of the art in many natural image processing tasks such as detection, classification, and segmentation, thanks to the availability of annotated data. In the medical field, labeled data is limited by privacy constraints, the need for expert annotators, and a time-consuming labeling process. In this paper, we present an efficient model combining selective pre-processing, augmentation, post-processing, and an improved SegCaps network. The proposed model is end-to-end, fully automatic, and generalizes well from such a limited amount of training data. It has been validated on two 3D liver segmentation datasets and obtained competitive segmentation results.
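As a minimal sketch of the pipeline stages the abstract names (pre-processing, network inference, post-processing), the following chains them together on a fake CT volume. The HU window, the threshold stand-in for the network, and the largest-connected-component cleanup are common choices assumed here; none of it is the paper's actual SegCaps configuration.

```python
# Minimal sketch of a liver-segmentation pipeline; the placeholder
# "network" and all thresholds are assumptions, not the paper's model.
import numpy as np
from scipy import ndimage

def preprocess(ct_volume: np.ndarray) -> np.ndarray:
    """Window CT intensities to a liver-friendly HU range and rescale.
    The [-100, 400] HU window is a common choice, assumed here."""
    clipped = np.clip(ct_volume, -100.0, 400.0)
    return (clipped + 100.0) / 500.0  # scale to [0, 1]

def segment(volume: np.ndarray) -> np.ndarray:
    """Placeholder for the trained network's per-voxel liver mask.
    A real model (e.g. SegCaps) would be applied here."""
    return (volume > 0.5).astype(np.uint8)  # hypothetical stand-in

def postprocess(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected component, a common cleanup step."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return (labels == (np.argmax(sizes) + 1)).astype(np.uint8)

ct = np.random.uniform(-1000, 1000, size=(16, 64, 64))  # fake CT volume
liver_mask = postprocess(segment(preprocess(ct)))
print("liver voxels:", int(liver_mask.sum()))
```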
Semantic segmentation is the process of assigning a label to every pixel in an image such that pixels sharing the same semantic properties receive the same label, and it remains a challenging task in computer vision. In recent years, thanks to the large availability of training data, the performance of semantic segmentation has been greatly improved by deep learning techniques, and a large number of novel methods have been proposed. However, in some crucial fields we cannot gather sufficient data to train a deep model and achieve high accuracy. This paper provides a brief survey of research on deep-learning-based semantic segmentation with limited labeled data, focusing on weakly-supervised methods. The survey is intended to familiarize readers with the progress and challenges of weakly-supervised semantic segmentation research in the deep learning era and to present several promising research directions in this field.
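One common weakly-supervised strategy such surveys cover is deriving pixel-level pseudo-labels from image-level labels via class activation maps (CAM). The sketch below computes a CAM from a classifier's final convolutional features and linear-head weights, then thresholds it into a pseudo-mask; the tensor shapes, random stand-in tensors, and 0.4 threshold are assumptions for illustration only.

```python
# Illustrative CAM-based pseudo-labeling, a typical weakly-supervised
# segmentation building block; all shapes and values are hypothetical.
import torch

def class_activation_map(features: torch.Tensor,
                         fc_weights: torch.Tensor,
                         class_idx: int) -> torch.Tensor:
    """features: (C, H, W) conv features from a classifier trained with
    image-level labels only; fc_weights: (num_classes, C) weights of the
    final linear layer after global average pooling."""
    cam = torch.einsum("c,chw->hw", fc_weights[class_idx], features)
    cam = torch.relu(cam)
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1]

# Toy example with random tensors in place of a trained classifier.
features = torch.rand(512, 14, 14)        # conv feature maps
fc_weights = torch.rand(20, 512)          # 20-class linear head
cam = class_activation_map(features, fc_weights, class_idx=3)
pseudo_mask = (cam > 0.4).long()          # hypothetical threshold
print("foreground pixels:", int(pseudo_mask.sum()))
```

The resulting pseudo-masks would then serve as noisy supervision for training a segmentation network, which is the pattern many weakly-supervised methods refine.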