The COVID-19 pandemic, also known as the coronavirus pandemic, is an ongoing global health crisis. The outbreak first came to light in December 2019 in Wuhan, China, and was declared a pandemic by the World Health Organization on 11 March 2020. The virus has infected millions and killed hundreds of thousands of people in the United States, Brazil, Russia, India, and several other countries. As the pandemic continues to affect millions of lives, a number of countries have resorted to either partial or full lockdown. During these lockdowns, people turned to social media platforms to share their emotions and opinions and to find a way to relax and calm down. In this research work, sentiment analysis has been conducted on the tweets of people from the ten most-infected countries, with the addition of one more country chosen from the Gulf region, i.e.
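The abstract does not state which sentiment-analysis technique the authors used; a minimal lexicon-based scorer, with a tiny hand-picked word list chosen here purely for illustration, might look like this:

```python
# A toy lexicon-based tweet sentiment classifier (sketch only; the paper's
# actual method and lexicon are not specified in the abstract).
POSITIVE = {"calm", "relax", "hope", "safe", "recover", "good"}
NEGATIVE = {"fear", "death", "lockdown", "sick", "bad", "loss"}

def tweet_sentiment(text: str) -> str:
    """Classify a tweet as positive, negative, or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Real studies of this kind typically use an established lexicon or model (e.g. VADER or TextBlob) rather than a hand-built word list.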
SQL (Structured Query Language) injection is one of the main threats to web, mobile, and even desktop applications that are backed by a database. A successful SQL Injection Attack (SQLIA) can have severe implications for the victimized organization, including economic loss, loss of reputation, and regulatory infringement. Systems that do not correctly validate user input are susceptible to SQL injection. An SQLIA happens when an attacker incorporates a sequence of harmful SQL commands into a request, altering the back-end database through user-supplied data. Using this sort of attack, an attacker can readily compromise applications and steal private information. In this article we introduce different kinds of processes to safeguard against current SQLIA methods, and instruments that can be used in ASP.NET applications to detect or stop these attacks.
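The core defense the abstract alludes to, validating input rather than splicing it into SQL text, is usually implemented with parameterized queries. A minimal sketch (using Python's `sqlite3` for portability; the paper targets ASP.NET, where `SqlParameter` plays the same role):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def find_user_unsafe(name: str):
    # VULNERABLE: user input is concatenated directly into the SQL string.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # SAFE: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

# The classic payload ' OR '1'='1 makes the unsafe query's WHERE clause
# always true, dumping every row; the safe query simply matches no user.
```

For example, `find_user_unsafe("' OR '1'='1")` returns all rows, while the parameterized version returns an empty result for the same payload.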
Large amounts of data are generated every moment by the connected objects that make up the Internet of Things (IoT). IoT isn't really about things; it's about the data those things create and collect. Organizations rely on this data to provide better user experiences, make smarter business decisions, and ultimately fuel their growth. However, none of this is possible without a reliable database that can handle the massive amounts of data generated by IoT devices. Relational databases are known for being flexible, easy to work with, and mature, but they are not particularly known for scale, which prompted the creation of NoSQL databases. Another thing to note is that IoT data is time-series in nature. In this paper we discuss and compare five leading time-series databases: InfluxDB, Kdb+, Graphite, Prometheus, and RRDtool.
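What makes IoT data "time-series in nature" is the access pattern: writes are appends ordered by timestamp, and reads are time-range queries. A toy in-memory store illustrating that pattern (an illustration only; real engines like InfluxDB or Prometheus add tags, compression, downsampling, and retention policies):

```python
import bisect

class TinyTSDB:
    """A toy append-only time-series store."""

    def __init__(self):
        self.timestamps = []  # kept sorted; IoT readings arrive in time order
        self.values = []

    def append(self, ts: int, value: float):
        # Time-series writes are almost always appends at the end.
        self.timestamps.append(ts)
        self.values.append(value)

    def range(self, start: int, end: int):
        # Sorted timestamps make time-range queries a cheap binary search.
        lo = bisect.bisect_left(self.timestamps, start)
        hi = bisect.bisect_right(self.timestamps, end)
        return list(zip(self.timestamps[lo:hi], self.values[lo:hi]))
```

This append-then-scan-by-range workload is exactly what general-purpose relational storage is not optimized for, and why the five databases compared here exist.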
Background: Nowadays, the digital world is growing rapidly and becoming very complex in the quantity, diversity, and speed of its data. Recently, there have been two major changes in data management: NoSQL databases and Big Data analytics. While they evolved for different reasons, their independent growths complement each other, and their convergence would greatly benefit organizations in making timely decisions over large volumes of multifaceted data sets that may be structured, semi-structured, or unstructured. Several software solutions have emerged to support Big Data analytics on the one hand, while on the other, several NoSQL database packages are available in the market. Methods: The main goal of this article is to give an understanding of their perspectives and a complete study relating the futures of several important emerging NoSQL data models. Results: Evaluating NoSQL databases for Big Data analytics against traditional SQL performance shows that NoSQL databases are a superior alternative for industry settings that need high-performance analytics, adaptability, simplicity, and distributed scalability over large data. Conclusion: This paper concludes with industry's current adoption status of NoSQL databases.
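The advantage NoSQL document models have for "structured, semi-structured, and unstructured" data is that records in one collection need not share a schema. A minimal sketch of that idea (illustrative only; production stores such as MongoDB add indexing, persistence, and a query language):

```python
class TinyDocStore:
    """A toy document store: unlike rows in a relational table,
    documents in one collection need not share a fixed schema."""

    def __init__(self):
        self.docs = {}
        self.next_id = 1

    def insert(self, doc: dict) -> int:
        doc_id = self.next_id
        self.docs[doc_id] = doc
        self.next_id += 1
        return doc_id

    def find(self, **criteria):
        # Match documents on whatever fields they happen to have;
        # missing fields simply fail to match, with no schema error.
        return [d for d in self.docs.values()
                if all(d.get(k) == v for k, v in criteria.items())]
```

Inserting `{"name": "sensor-1", "type": "temp"}` and `{"name": "log-7", "lines": 120}` into the same store is legal here, which is precisely what a fixed relational schema would reject.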
The World Wide Web is a large repository of text documents, images, multimedia, and much other information, referred to as information resources. A huge amount of fresh information is posted on the Web every day. Web crawlers are programs that traverse the Web and download web documents in an automated manner. Search engines have to keep an up-to-date image of all Web pages and other resources hosted on web servers in their index and data repositories in order to provide improved and exact results to their users. The crawlers of these search engines must retrieve pages continuously to keep the index current. The list of URLs is very large, and it is therefore very difficult to refresh it quickly, as 40% of web pages change daily. As a result, web crawlers consume a great deal of network resources, particularly bandwidth, to keep the repository up to date. This paper deals with a web-crawler system based on mobile agents. The proposed approach uses Java Aglets for crawling web pages. The major advantage of a web crawler based on mobile agents is that the analysis part of the crawling process is done locally, at the residence of the data, rather than inside the remote server. This considerably reduces network load and traffic, which can improve the performance and efficiency of the crawling process.
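The "analysis part" that a mobile agent would run at the data's host is essentially parsing each page and extracting its outgoing links, so that only the link list, not the full page, crosses the network. A sketch of that step in Python (the paper's actual implementation uses Java Aglets, not shown here):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags. In a mobile-agent crawler this
    analysis would run on the server hosting the pages, so only the
    extracted URLs are shipped back instead of whole documents."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Shipping a few dozen bytes of URLs rather than kilobytes of HTML per page is the bandwidth saving the mobile-agent design is after.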