A smart city is realized by computing over large amounts of data collected through sensors, cameras, and other electronic means to provide services, manage resources, and solve daily-life problems. The transformation of the conventional grid into a smart grid is one step towards realizing a smart city. An electric grid is composed of control stations, generation centres, transformers, communication lines, and distributors, which together transfer power from the power station to domestic and commercial consumers. Present electric grids are not smart enough to estimate the varying power requirements of consumers, nor are they sufficiently robust and scalable. This has become the motivation for shifting from a conventional grid to a smart grid: a power grid that is robust, self-healing, and adapts itself to the varying needs of consumers. In this way, the transformation from a conventional grid to a smart grid will help governments build smart cities. The emergence of machine learning has enabled prediction of grid stability under dynamically changing consumer demand, and a variety of sensors supports the collection of real-time consumption data. Through machine learning algorithms, we can gain insight into the collected data, which makes the smart grid more robust and helps avoid failures. In this work, the authors apply logistic regression, decision tree, support vector machine, linear discriminant analysis, quadratic discriminant analysis, naïve Bayes, random forest, and k-nearest neighbour algorithms to predict the stability of the grid. The authors use the smart grid stability dataset freely available on Kaggle to train and test the models.
It was found that the model built using the support vector machine algorithm gave the most accurate results.
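The classifier comparison described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the data here is synthetic (the Kaggle smart grid stability dataset has twelve numeric features and a stable/unstable label, and this sketch substitutes random data of roughly that shape), so the scores are illustrative only.

```python
# Sketch: training and comparing the eight classifiers named above on a
# grid-stability-style binary classification task. Synthetic data stands in
# for the Kaggle dataset so the sketch is self-contained.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the 12-feature stable/unstable dataset.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "support vector machine": SVC(),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(random_state=0),
    "k-nearest neighbour": KNeighborsClassifier(),
}
# Fit each model and record its held-out accuracy.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

On the real dataset, this is the comparison under which the study reports the support vector machine as most accurate.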
It has long been anecdotally known that web archives and search engines favor Western and English-language sites. In this paper we quantitatively explore how well Arabic language web sites are indexed and archived. We began by sampling 15,092 unique URIs from three different website directories: DMOZ (multi-lingual), Raddadi and Star28 (both primarily Arabic language). Using language identification tools we eliminated pages not in the Arabic language (e.g., English language versions of Al-Jazeera sites) and culled the collection to 7,976 definitely Arabic language web pages. We then used these 7,976 pages and crawled the live web and web archives to produce a collection of 300,646 Arabic language pages. We discovered: 1) 46% are not archived and 31% are not indexed by Google (www.google.com), 2) only 14.84% of the URIs had an Arabic country code top-level domain (e.g., .sa) and only 10.53% had a GeoIP in an Arabic country, 3) having either only an Arabic GeoIP or only an Arabic top-level domain appears to negatively impact archiving, 4) most of the archived pages are near the top level of the site, and deeper links into the site are not well archived, 5) presence in a directory positively impacts indexing, and presence in the DMOZ directory, specifically, positively impacts archiving.
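The language-culling step above can be sketched as follows. The paper uses language identification tools; this stdlib-only sketch substitutes a simple script heuristic (counting Arabic-script code points), so the threshold and helper name are assumptions, not the authors' method.

```python
# Sketch of the culling step: keep only pages whose text is predominantly
# Arabic. A real pipeline would use a language-identification library; this
# heuristic counts letters in the Arabic Unicode block (U+0600-U+06FF).
def is_arabic(text, threshold=0.5):
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return False
    arabic = sum(1 for c in letters if '\u0600' <= c <= '\u06FF')
    return arabic / len(letters) >= threshold

# Hypothetical sampled pages (URI -> extracted text).
pages = {
    "http://example.org/ar": "مرحبا بالعالم",   # Arabic text
    "http://example.org/en": "Hello world",      # English version, culled
}
culled = [uri for uri, text in pages.items() if is_arabic(text)]
print(culled)  # ['http://example.org/ar']
```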
Vampire attacks on sensor nodes are among the most damaging threats to wireless sensor networks (WSNs): they stretch the network connectivity among nodes and drain the network's energy. A vampire attack arises from the malicious behaviour of sensor nodes that widely exploit features of the combined routing protocol. Fuzzy rules and fuzzy sets are highly effective techniques for mitigating vampire attacks on the network, as they can quantify the uncertain behaviour of sensor nodes. This study proposes a novel technique using a probabilistic fuzzy chain set with an authentication-based routing protocol and a hybrid clustering technique for data optimization of the network. The suggested approach employs a fuzzy-based chain rule set with probability formulas to combat growing types of vampire attacks. The authentication routing protocol increases network routing security, and the proposed technique (PFCS-ARP_HC) optimizes the energy consumption of the network. Simulations were carried out using NS2, and experimental results show the performance of the proposed model: throughput of 98%, packet delivery ratio of 89%, energy consumption of 67%, latency of 46%, control overhead of 53%, and attack detection ratio of 87.9%.
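To illustrate how fuzzy sets can quantify uncertain node behaviour, here is a minimal sketch. The membership functions, thresholds, and the `suspicion` rule are entirely hypothetical and are not the paper's PFCS-ARP_HC formulation; they only show the general mechanism of scoring a node between 0 (benign) and 1 (suspicious).

```python
# Illustrative fuzzy-set sketch (hypothetical parameters, not PFCS-ARP_HC).
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suspicion(energy_drain_rate, route_stretch):
    # Fuzzy sets for "high energy drain" and "stretched route".
    high_drain = tri(energy_drain_rate, 0.3, 0.7, 1.0)
    long_route = tri(route_stretch, 1.2, 2.0, 3.0)
    # Rule: IF drain is high AND route is stretched THEN node is suspicious.
    return min(high_drain, long_route)

print(round(suspicion(0.65, 1.9), 3))  # 0.875 -> likely vampire behaviour
print(round(suspicion(0.10, 1.0), 3))  # 0.0   -> benign
```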
Quantifying the captures of a URI over time is useful for researchers to identify the extent to which a Web page has been archived. Memento TimeMaps provide a format to list mementos (URI-Ms) for captures along with brief metadata, like Memento-Datetime, for each URI-M. However, when some URI-Ms are dereferenced, they simply redirect to a different URI-M (instead of providing a unique representation at that datetime), often one also present in the TimeMap. This implies that an accurate count of the non-forwarding captures of a URI-R cannot be confidently obtained from a TimeMap alone, and that the magnitude of a TimeMap is not equivalent to the number of representations it identifies. In this work we discuss this phenomenon in depth. We also break down the dynamics of counting mementos for a particular URI-R (google.com) and quantify the prevalence of the various canonicalization patterns that exacerbate attempts at counting using only a TimeMap. For google.com we found that 84.9% of the URI-Ms result in an HTTP redirect when dereferenced. We expand on and apply this metric to TimeMaps for seven other URI-Rs of large Web sites and thirteen academic institutions. Using a ratio metric DI of the number of URI-Ms without redirects to those requiring a redirect when dereferenced, five of the eight large web sites' and two of the thirteen academic institutions' TimeMaps had a ratio less than one, indicating that more than half of the URI-Ms in these TimeMaps result in redirects when dereferenced.
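The DI ratio described above can be sketched as follows. The HTTP statuses here are stubbed sample values, not real TimeMap data; an actual implementation would dereference each URI-M (e.g., with a HEAD request) to observe whether it returns a representation or a redirect.

```python
# Sketch of the DI ratio: URI-Ms returning a representation directly (200)
# versus URI-Ms that redirect (3xx) when dereferenced.
def di_ratio(statuses):
    direct = sum(1 for s in statuses if s == 200)
    redirect = sum(1 for s in statuses if 300 <= s < 400)
    return direct / redirect if redirect else float("inf")

# Hypothetical statuses observed for the URI-Ms listed in one TimeMap.
statuses = [200, 301, 302, 200, 301, 301, 302, 301]
ratio = di_ratio(statuses)
print(round(ratio, 2))  # ratio < 1: most URI-Ms redirect when dereferenced
```

A DI below one, as for google.com in the study, means the TimeMap's length substantially overstates the number of distinct representations it identifies.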
It has long been suspected that web archives and search engines favor Western and English language webpages. In this article, we quantitatively explore how well indexed and archived Arabic language webpages are as compared to those from other languages. We began by sampling 15,092 unique URIs from three different website directories: DMOZ (multilingual), Raddadi, and Star28 (the last two primarily Arabic language). Using language identification tools, we eliminated pages not in the Arabic language (e.g., English-language versions of Aljazeera pages) and culled the collection to 7,976 Arabic language webpages. We then used these 7,976 pages and crawled the live web and web archives to produce a collection of 300,646 Arabic language pages. We compared the analysis of Arabic language pages with that of English, Danish, and Korean language pages. First, for each language, we sampled unique URIs from DMOZ; then, using language identification tools, we kept only pages in the desired language. Finally, we crawled the archived and live web to collect a larger sample of pages in English, Danish, or Korean. In total for the four languages, we analyzed over 500,000 webpages. We discovered: (1) English has a higher archiving rate than Arabic, with 72.04% archived. However, Arabic has a higher archiving rate than Danish and Korean, with 53.36% of Arabic URIs archived, followed by Danish and Korean with 35.89% and 32.81% archived, respectively. (2) Most Arabic and English language pages are located in the United States; only 14.84% of the Arabic URIs had an Arabic country code top-level domain (e.g., .sa) and only 10.53% had a GeoIP in an Arabic country. Most Danish-language pages were located in Denmark, and most Korean-language pages were located in South Korea. (3) The presence of a webpage in a directory positively impacts indexing, and presence in the DMOZ directory, specifically, positively impacts archiving in all four languages.
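The country-code TLD classification used in finding (2) can be sketched as follows. The list of Arabic-country ccTLDs here is partial and illustrative, and the GeoIP signal is omitted since it requires a geolocation database.

```python
# Sketch: does a URI's top-level domain belong to an Arabic-speaking country?
# The ccTLD set below is an illustrative, partial list.
from urllib.parse import urlparse

ARABIC_CCTLDS = {"sa", "ae", "eg", "jo", "kw", "qa", "om", "bh", "lb",
                 "ma", "dz", "tn", "ly", "sd", "sy", "iq", "ye"}

def has_arabic_cctld(uri):
    host = urlparse(uri).hostname or ""
    return host.rsplit(".", 1)[-1].lower() in ARABIC_CCTLDS

print(has_arabic_cctld("http://www.example.com.sa/news"))  # True
print(has_arabic_cctld("http://www.example.com/news"))     # False
```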
In this work, we show that web archives and search engines favor English pages. However, this is not universally true of all Western-language webpages: Arabic webpages have a higher archival rate than Danish-language webpages.