The Internet of Things (IoT) produces an extraordinary volume of data daily, and this data may become useless by the time it reaches the cloud for analysis, owing to long distances and the resulting delays. Fog/edge computing is a new model for analysing and acting on time-sensitive data (real-time applications) at the network edge, adjacent to where it is produced; only selected data is sent to the cloud for analysis and long-term storage. Furthermore, cloud services provided by large companies such as Google can also be localised to minimise response time and increase service agility. This can be accomplished by deploying small-scale datacenters (referred to as cloudlets) where needed, closer to customers (IoT devices), and connecting them to a centralised cloud through networks, which together form a multi-access edge cloud (MEC). The MEC setup involves three different parties, i.e. service providers (IaaS), application providers (SaaS), and network providers (NaaS), which may have conflicting goals, making resource management a difficult job. In the literature, various resource management techniques have been suggested that address what kinds of services cloudlets should host and how the available resources should be allocated to customers' applications, particularly when mobility is involved. However, the existing literature considers the resource management problem with respect to a single party. In this paper, we consider resource management with respect to all three parties, i.e. IaaS, SaaS, and NaaS, and suggest a game-theoretic resource management technique that minimises infrastructure energy consumption and costs while ensuring application performance. Our empirical evaluation, using real workload traces from Google's cluster, suggests that our approach can reduce energy consumption by up to 11.95% and user costs by approximately 17.86%, with negligible loss in performance. Moreover, IaaS providers can reduce their energy bills by up to 20.27% and NaaS providers can increase their cost savings by up to 18.52% compared to other methods.
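The abstract does not spell out the game formulation, so the following is only a minimal, hypothetical sketch of best-response dynamics among the three MEC parties. Every cost function, coefficient, and the strategy grid below is invented for illustration and is not taken from the paper.

```python
# Hypothetical best-response dynamics for a three-player (IaaS, SaaS, NaaS)
# resource game. All cost functions and coefficients are invented stand-ins;
# the paper's actual model is not shown in the abstract.

GRID = [i / 100 for i in range(101)]  # candidate strategies in [0, 1]

def cost_iaas(p):
    # energy bill grows with provisioned capacity x; quadratic penalty
    # if capacity falls short of the application demand y
    x, y, z = p
    return 2.0 * x + 5.0 * max(0.0, y - x) ** 2

def cost_saas(p):
    # renting capacity costs money, but demanding too little capacity
    # degrades application performance
    x, y, z = p
    return 1.5 * y + 4.0 * max(0.0, 0.8 - y) ** 2

def cost_naas(p):
    # network provisioning cost plus congestion penalty when link
    # capacity z lags the application demand y
    x, y, z = p
    return 1.0 * z + 5.0 * max(0.0, y - z) ** 2

COSTS = [cost_iaas, cost_saas, cost_naas]

def best_response(i, profile):
    # player i picks the grid point minimising its own cost,
    # holding the other players' strategies fixed
    def trial_cost(s):
        trial = list(profile)
        trial[i] = s
        return COSTS[i](trial)
    return min(GRID, key=trial_cost)

def run(rounds=50, tol=1e-9):
    profile = [0.5, 0.5, 0.5]
    for _ in range(rounds):
        new = [best_response(i, profile) for i in range(3)]
        if max(abs(a - b) for a, b in zip(new, profile)) < tol:
            break  # (approximate) Nash equilibrium reached
        profile = new
    return profile

if __name__ == "__main__":
    x, y, z = run()
    print(f"IaaS capacity={x:.2f}, SaaS demand={y:.2f}, NaaS bandwidth={z:.2f}")
```

The fixed point the iteration settles on is an approximate Nash equilibrium: no party can lower its own cost by unilaterally changing its strategy, which is the kind of multi-party trade-off the paper's technique targets.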
Recently, medical imaging and machine learning have gained significant attention for the early detection of brain tumors. The compound structure of the brain and tumor variations, such as changes in size, make brain tumor segmentation and classification a challenging task. In this review, we survey existing work on brain tumors, their stages, the survival rate of patients after each stage, and computerized diagnosis methods. We discuss existing image processing techniques with a special focus on preprocessing techniques and their importance for tumor enhancement, tumor segmentation, feature extraction, and feature reduction techniques. We also provide the corresponding mathematical modeling, classification methods, performance metrics, and, finally, important datasets. Last but not least, a detailed analysis of existing techniques is provided, followed by future directions in this domain.
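As a concrete illustration of the generic pipeline this review covers (preprocessing for tumor enhancement, segmentation, feature extraction), here is a minimal sketch using scikit-image. The specific choices below (Gaussian smoothing, Otsu thresholding, three shape features) are generic textbook steps assumed for illustration, not any particular method from the survey.

```python
# Minimal sketch of a generic enhance -> segment -> extract-features
# pipeline on a 2-D MRI slice. The concrete choices here are illustrative,
# not a specific method from the survey.
import numpy as np
from skimage import exposure, filters, measure

def segment_candidate(mri_slice: np.ndarray) -> np.ndarray:
    """Return a boolean mask of hyperintense (tumor-candidate) tissue."""
    enhanced = exposure.rescale_intensity(mri_slice.astype(float))  # contrast stretch
    smoothed = filters.gaussian(enhanced, sigma=1.0)                # suppress noise
    return smoothed > filters.threshold_otsu(smoothed)              # global threshold

def shape_features(mask: np.ndarray) -> dict:
    """Simple shape descriptors often fed to a classifier."""
    regions = measure.regionprops(measure.label(mask))
    if not regions:
        return {}
    largest = max(regions, key=lambda r: r.area)  # keep the largest blob
    return {
        "area": largest.area,
        "eccentricity": largest.eccentricity,
        "solidity": largest.solidity,
    }
```

In a full system, the features extracted here would pass through a feature reduction step and then a classifier, the stages the review treats in detail.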
Malignant melanoma is acknowledged as being among the deadliest kinds of cancer, and its incidence has increased broadly worldwide over the last decade. In 2018, around 91,270 cases of melanoma were reported and 9,320 people died of it in the US. However, diagnosis at an early stage indicates a high survival rate. The conventional diagnostic methods are expensive, inconvenient, and dependent on the dermatologist's expertise as well as a highly equipped environment. Recent achievements in computerized systems are highly promising, with improved accuracy and efficiency. Several measures, such as irregularity, contrast stretching, change in origin, feature extraction, and feature selection, are considered for accurate melanoma detection and classification. Typically, digital dermoscopy comprises four fundamental image processing steps: preprocessing, segmentation, feature extraction and reduction, and lesion classification. We compare our survey with existing surveys in terms of preprocessing techniques (hair removal, contrast stretching) and their challenges, lesion segmentation methods, feature extraction methods and their challenges, feature selection techniques, datasets for validating digital systems, classification methods, and performance measures. A brief summary of each step is also presented in tables. The challenges for each step are described in detail, which clearly indicates why digital systems are not yet performing well. Future directions are also given in this survey.
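To make two of the lesion measures mentioned above concrete, border irregularity and asymmetry, here is a toy computation from a binary lesion mask using scikit-image. Both formulas are common textbook definitions assumed for illustration; real dermoscopy systems use far richer feature sets than this sketch.

```python
# Toy computation of two lesion descriptors named in the survey:
# border irregularity (compactness) and asymmetry. Both formulas are
# common textbook choices, not the specific definitions of any paper
# surveyed here.
import numpy as np
from skimage import measure

def lesion_descriptors(mask: np.ndarray) -> dict:
    """mask: 2-D boolean array, True inside the segmented lesion."""
    regions = measure.regionprops(measure.label(mask))
    if not regions:
        return {}
    lesion = max(regions, key=lambda r: r.area)  # keep the largest blob
    # compactness = perimeter^2 / (4*pi*area): 1.0 for a perfect circle,
    # larger values indicate a more irregular border
    compactness = lesion.perimeter ** 2 / (4.0 * np.pi * lesion.area)
    # asymmetry: fraction of lesion-bounding-box pixels that fail to
    # overlap when the lesion is mirrored about its vertical axis
    crop = lesion.image
    asymmetry = float(np.logical_xor(crop, crop[:, ::-1]).mean())
    return {"compactness": compactness, "asymmetry": asymmetry}
```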