The influence of social media on different aspects of our lives keeps growing, and many scholars from various disciplines regard social media networks as an ongoing revolution. In social media networks, many ties and connections can be established, whether direct or indirect. Social networks are used not only by individuals but also by companies: people typically create profiles and join communities to discuss common issues they are interested in, while companies can establish a virtual presence on social media to better understand their customers and gather richer information about them. Despite all of these benefits and advantages, social media networks should not always be seen as a safe place for communicating, sharing information and ideas, and establishing virtual communities, because the content shared there may carry hate speech that must be detected in order to avoid inciting violence. Web content mining can be used to handle this issue, and it is attracting more attention because of its importance to many businesses and institutions. Sentiment Analysis (SA) is an important sub-area of web content mining; its purpose is to determine the writer's overall sentiment toward a specific entity and to classify such opinions automatically. There are two main approaches to building sentiment analysis systems: the machine learning approach and the lexicon-based approach. This research presents the design and implementation of a violence detection system for social media using the machine learning approach. Our system works on the Jordanian Arabic dialect rather than Modern Standard Arabic (MSA). The data were collected from two popular social media websites (Facebook and Twitter) and annotated by native speakers. Different preprocessing techniques were applied to show their effect on model accuracy, and an Arabic lexicon was used to generate feature vectors and split them into feature sets. Three well-known machine learning algorithms were evaluated: Support Vector Machine (SVM), Naive Bayes (NB), and k-Nearest Neighbors (KNN). Information Science Research Institute (ISRI) stemming and a stop-word list were applied during preprocessing before feature extraction. Several feature sets were extracted; the SVM classifier using unigram features and features extracted from the lexicon achieved the highest accuracy in detecting violence.
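Because the abstract names the main building blocks of the pipeline (ISRI stemming, stop-word removal, unigram features, and an SVM classifier), the sketch below shows one way such a pipeline could be wired together with NLTK and scikit-learn. It is only an illustration under those assumptions: the sample posts, dummy labels, and tiny stop-word list are placeholders, not the authors' data, lexicon, or settings.

```python
# Minimal sketch of an ISRI-stemming + unigram + SVM pipeline (illustrative only).
from nltk.stem.isri import ISRIStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

stemmer = ISRIStemmer()
STOP_WORDS = {"في", "من", "على", "هذا"}  # placeholder Arabic stop-word list


def preprocess(text: str) -> str:
    """ISRI stemming plus stop-word removal, mirroring the preprocessing step."""
    tokens = text.split()
    return " ".join(stemmer.stem(t) for t in tokens if t not in STOP_WORDS)


# Hypothetical annotated posts (1 = violent, 0 = non-violent); the labels are
# dummies whose only purpose is to make the sketch runnable.
posts = ["النص الأول للتجربة", "النص الثاني للتجربة هنا"]
labels = [1, 0]

model = make_pipeline(
    CountVectorizer(preprocessor=preprocess, ngram_range=(1, 1)),  # unigram features
    LinearSVC(),  # SVM classifier, as in the best-performing configuration reported
)
model.fit(posts, labels)
print(model.predict(["نص جديد للتصنيف"]))
```

In a real setting the lexicon-based features mentioned in the abstract would be appended to the unigram vectors before training, for example with a custom transformer in the same scikit-learn pipeline.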
Summary: The IEEE 802.11 standards rely on the distributed coordination function (DCF) as the fundamental medium access control method. DCF uses the binary exponential backoff (BEB) algorithm to regulate channel access. The backoff time determined by BEB depends on a contention window (CW) whose size is doubled when a station suffers a collision and reset to its minimum value after a successful transmission. Doubling the size of the CW delays channel access, which decreases throughput, while resetting it to its minimum value harms fairness, since that station gains a better chance of accessing the channel than stations that suffered a collision. We propose an algorithm that addresses collisions without instantly increasing the CW size. Our algorithm aims to reduce the collision probability without affecting the channel access time and delay. We present extensive simulations for fixed and mobile scenarios. The results show that, on average, our algorithm outperforms BEB in terms of throughput and fairness. Compared to exponential increase exponential decrease (EIED), our algorithm improves, on average, throughput and delay performance. We also propose analytical models for BEB, EIED, and our algorithm. Our models extend Bianchi's popular Markov chain-based model by using a collision probability that depends on the station's transmission history. They provide a better estimate of the probability that a station transmits in a random slot time, which allows a more accurate throughput analysis. Using our models, we show that both the saturation throughput and the maximum throughput of our algorithm are higher than those of BEB and EIED.
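The BEB rule described in this summary (double the contention window after a collision, reset it to the minimum after a success) can be illustrated with a short sketch. The CW bounds below follow typical 802.11 defaults and are assumptions for illustration, not values taken from the paper.

```python
import random

# Sketch of the binary exponential backoff (BEB) rule: the contention window (CW)
# is doubled after a collision, capped at CW_MAX, and reset to CW_MIN after a
# successful transmission.
CW_MIN, CW_MAX = 15, 1023  # assumed bounds (common 802.11 values)


def next_cw(cw: int, collided: bool) -> int:
    """Return the contention window to use for the next transmission attempt."""
    if collided:
        return min(2 * (cw + 1) - 1, CW_MAX)  # doubling step of BEB
    return CW_MIN  # reset after a successful transmission


def backoff_slots(cw: int) -> int:
    """Backoff counter drawn uniformly from [0, cw] slot times."""
    return random.randint(0, cw)


cw = CW_MIN
for collided in (True, True, False):  # toy transmission history
    print(f"CW = {cw}, backoff = {backoff_slots(cw)} slots")
    cw = next_cw(cw, collided)
```

The proposed algorithm and EIED differ from BEB only in how next_cw reacts to collisions and successes; the paper's analytical models capture exactly that dependence on the station's transmission history.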