Purpose – The purpose of this paper is to decrease the traffic created by search engines' crawlers and to solve the deep web problem using an innovative approach.
Design/methodology/approach – A new algorithm was formulated, based on the best existing algorithms, to optimize the traffic caused by web crawlers, which accounts for approximately 40 percent of all network traffic. The crux of this approach is that web servers monitor and log changes and communicate them as an XML file to search engines. The XML file includes the information necessary to regenerate refreshed pages from existing ones and to reference new pages that need to be crawled. Furthermore, the XML file is compressed to reduce its size to the minimum required.
Findings – The results of this study show that the traffic caused by search engines' crawlers might be reduced, on average, by 84 percent for text content. Binary content, however, poses many challenges, and new algorithms have to be developed to overcome them. The proposed approach will certainly mitigate the deep web issue. The per-domain XML files used by search engines might also be used by web browsers to refresh their caches and therefore help reduce the traffic generated by ordinary users, which lowers users' perceived latency and improves response times to HTTP requests.
Research limitations/implications – The study sheds light on the deficiencies and weaknesses of the algorithms that monitor changes and generate binary files. However, a substantial decrease in traffic is achieved for text-based web content.
Practical implications – The findings of this research can be adopted by web server and browser developers and by search engine companies to reduce the internet traffic caused by crawlers and to cut costs.
Originality/value – The exponential growth of web content and of other internet-based services, such as cloud computing and social networks, has been causing contention for the available bandwidth of the internet. This research provides a much-needed approach to keeping that traffic in check.
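The abstract does not give the paper's actual XML schema, so the following is only a minimal sketch of the idea it describes: a web server logs page changes, serves them as a compressed XML manifest, and a crawler fetches that manifest instead of re-crawling every page. Element and attribute names (change-log, page, action, diff) and the use of gzip are illustrative assumptions.

```python
# Hypothetical sketch of the change-manifest idea described in the abstract.
# Names such as "change-log", "page", "action" and "diff" are assumed, not the paper's schema.
import gzip
import xml.etree.ElementTree as ET
from datetime import datetime, timezone


def build_change_manifest(changes):
    """changes: list of dicts like {"url": ..., "action": "modified"|"new", "diff": ...}."""
    root = ET.Element("change-log", generated=datetime.now(timezone.utc).isoformat())
    for change in changes:
        page = ET.SubElement(root, "page", url=change["url"], action=change["action"])
        if change.get("diff"):
            # For modified pages, ship only the delta needed to regenerate the page.
            ET.SubElement(page, "diff").text = change["diff"]
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)


def compress_manifest(xml_bytes):
    # Compress the manifest so the transferred size stays minimal, as the abstract suggests.
    return gzip.compress(xml_bytes)


if __name__ == "__main__":
    manifest = build_change_manifest([
        {"url": "https://example.com/a.html", "action": "modified", "diff": "<p>updated paragraph</p>"},
        {"url": "https://example.com/new.html", "action": "new", "diff": None},
    ])
    packed = compress_manifest(manifest)
    print(f"XML size: {len(manifest)} bytes, compressed: {len(packed)} bytes")
```

In this reading, the same manifest could also be consumed by a browser to refresh cached copies of changed pages, which is how the abstract links the approach to reduced user-generated traffic.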
Purpose – Trust is one of the main pillars of many communication and interaction domains, and computing is no exception. Fog computing (FC) has emerged to mitigate several limitations of cloud computing. However, selecting a trustworthy node from a fog network still presents serious challenges. This paper aims to propose an algorithm intended to mitigate the trust and security issues related to selecting a node of a fog network.
Design/methodology/approach – The proposed model/algorithm is based on two main concepts, namely, machine learning using fuzzy neural networks (FNNs) and the weighted weakest link (WWL) algorithm. The crux of the proposed model is that it is trained, validated and then used to classify fog nodes according to their trust scores. A total of 2,482 certified computing products, in addition to a set of nodes composed of multiple items, are used to train, validate and test the proposed model. A scenario including nodes composed of multiple computing items is designed for applying and evaluating the performance of the proposed model/algorithm.
Findings – The results show a well-performing trust model with an accuracy of 0.9996. Thus, end-users of FC services adopting the proposed approach can be more confident when selecting elected fog nodes. The trained, validated and tested model was able to classify the nodes according to their trust level. The proposed model is a novel approach to fog node selection in a fog network.
Research limitations/implications – Certainly, all data could be collected; however, scores for some features are very difficult to obtain. Available techniques, such as regression analysis and the use of experts, have their own limitations. Experts might be subjective, even though the author used a fuzzy group decision-making model to mitigate the subjectivity effect. Methodical evaluation by specialized bodies, such as the security certification process, is paramount to mitigating these issues. The author recommends repeating the study when data from such bodies are available.
Originality/value – The novel combination of FNNs and the WWL in a trust model mitigates uncertainty and subjectivity and enables the trust classification of complex FC nodes. Furthermore, the combination allows the classification of fog nodes composed of diverse computing items, which is not possible without the WWL. The proposed algorithm provides the required intelligence for end-users (devices) to make sound decisions when requesting fog services.
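The abstract does not state the exact WWL formulation, so the sketch below only illustrates the general idea: a fog node built from several computing items is bounded by its weakest, importance-weighted item. The per-item trust scores are assumed to come from the trained FNN (omitted here), and the weighting scheme, threshold and function names are assumptions for illustration.

```python
# Hedged sketch of a weighted-weakest-link style aggregation for composite fog nodes.
# The exponent-based weighting and the 0.8 selection threshold are illustrative
# assumptions, not the paper's exact formulation.
from dataclasses import dataclass


@dataclass
class Item:
    name: str
    trust: float   # per-item trust score in [0, 1], e.g. produced by the trained FNN
    weight: float  # relative importance of the item within the node, in (0, 1]


def wwl_node_trust(items):
    """Aggregate item scores: the weakest weighted link bounds the node's trust."""
    # Less important items (lower weight) penalize the node less; the minimum dominates.
    return min(item.trust ** item.weight for item in items)


def classify(node_trust, threshold=0.8):
    # Assumed cut-off for deciding whether a node may be selected.
    return "trustworthy" if node_trust >= threshold else "untrusted"


if __name__ == "__main__":
    node = [Item("hypervisor", 0.95, 1.0), Item("storage", 0.80, 0.6), Item("nic", 0.99, 0.3)]
    score = wwl_node_trust(node)
    print(f"node trust = {score:.3f} -> {classify(score)}")
```

The point of the weakest-link aggregation is that a single weak component caps the trust of the whole node, which is what allows nodes composed of diverse items to be classified consistently.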