The web is loaded daily with a huge volume of data, mainly unstructured text, which significantly increases the need for information extraction and NLP systems. Named-entity recognition (NER) is a key step towards understanding text data efficiently and saving time and effort. As a globally dominant language, English accounts for most of the research conducted in this field, especially in the biomedical domain. Arabic, by contrast, suffers from a lack of resources. This work presents a BERT-based model to identify biomedical named entities (specifically disease and treatment entities) in Arabic text, and investigates whether pretraining a monolingual BERT model on a small-scale biomedical dataset enhances the model's understanding of Arabic biomedical text. The model was compared with two state-of-the-art models (AraBERT and multilingual BERT cased) and outperformed both, achieving an F1-score of 85%.
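The F1-score reported above is typically computed at the entity level: predicted BIO tag sequences are first converted to entity spans, then matched exactly against gold spans. The sketch below illustrates this, assuming BIO tags with the abstract's two entity types (DIS = disease, TREAT = treatment); the example sequences and predictions are invented.

```python
# Sketch of entity-level NER evaluation with BIO tags; a minimal stand-in for
# what libraries like seqeval do, not the paper's actual evaluation code.
def bio_to_spans(tags):
    """Convert a BIO tag sequence to a set of (start, end, type) spans."""
    spans, start, etype = set(), None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.add((start, i, etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            start, etype = i, tag[2:]  # tolerate an I- tag without a B-
    return spans

# Invented 5-token example: gold has a disease span and a treatment span;
# the prediction over-extends the treatment span by one token.
gold = ["B-DIS", "I-DIS", "O", "B-TREAT", "O"]
pred = ["B-DIS", "I-DIS", "O", "B-TREAT", "I-TREAT"]

g, p = bio_to_spans(gold), bio_to_spans(pred)
tp = len(g & p)                      # only exact span matches count
precision = tp / len(p)
recall = tp / len(g)
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # → F1 = 0.50
```

Exact-match scoring is strict: the over-extended treatment span counts as both a false positive and a false negative, which is why a single boundary error halves the score here.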
Also known as a parallel-link robot, the delta robot is a type of manipulator robot consisting of three arms mounted in parallel, with a central joint that carries the end-effector, here a gripper. Inverse kinematics (IK) analysis converts the end-effector trajectory (X, Y) into rotations of the stepper motors (ZA, ZB, and ZC). The proposed method uses artificial neural networks (ANNs) to simplify the IK solving process. The IK solver generated the training data: an 11 KB dataset of 200 motion samples of the delta robot. The proposed method was trained for 5,000 iterations in 58.78 seconds with a learning rate (α) of 0.05, producing an average accuracy of 97.48% and an average loss of 0.43%. The method was also tested by transferring motion data over Socket.IO, sending 115.58 B in 6.68 ms.
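The idea of approximating an IK solver with an ANN can be sketched as a small regression network trained on (X, Y) → (ZA, ZB, ZC) pairs. The sketch below is a toy stand-in, not the paper's model: the target function, network size, and iteration count are invented; only the 200-sample dataset size and the learning rate α = 0.05 echo the abstract.

```python
import math
import random

random.seed(0)

# Invented smooth stand-in for the IK mapping (X, Y) -> (ZA, ZB, ZC);
# NOT the delta robot's real kinematics, purely for demonstration.
def toy_ik(x, y):
    return (math.sin(x + y), math.sin(x - y), x * y)

# 200 training samples, echoing the 200-motion dataset in the abstract.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]

H = 8  # hidden units (an arbitrary choice for this sketch)
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [[random.uniform(-0.5, 0.5) for _ in range(H)] for _ in range(3)]
b2 = [0.0] * 3

def forward(x, y):
    h = [math.tanh(w1[i][0] * x + w1[i][1] * y + b1[i]) for i in range(H)]
    out = [sum(w2[j][i] * h[i] for i in range(H)) + b2[j] for j in range(3)]
    return h, out

def mse():
    total = 0.0
    for x, y in data:
        _, out = forward(x, y)
        total += sum((o - t) ** 2 for o, t in zip(out, toy_ik(x, y)))
    return total / len(data)

alpha = 0.05  # learning rate, matching the abstract's α = 0.05
loss_before = mse()
for _ in range(300):  # far fewer iterations than the paper's 5,000
    for x, y in data:
        h, out = forward(x, y)
        t = toy_ik(x, y)
        d_out = [2 * (out[j] - t[j]) for j in range(3)]
        d_h = [sum(d_out[j] * w2[j][i] for j in range(3)) * (1 - h[i] ** 2)
               for i in range(H)]
        # stochastic gradient descent update of both layers
        for j in range(3):
            for i in range(H):
                w2[j][i] -= alpha * d_out[j] * h[i]
            b2[j] -= alpha * d_out[j]
        for i in range(H):
            w1[i][0] -= alpha * d_h[i] * x
            w1[i][1] -= alpha * d_h[i] * y
            b1[i] -= alpha * d_h[i]

loss_after = mse()
print(f"MSE before training: {loss_before:.4f}, after: {loss_after:.4f}")
```

Once trained, such a network replaces the analytical IK solver at runtime: a single forward pass maps a trajectory point to the three motor rotations, which is what makes the approach attractive for fast motion streaming.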
Context. The improvements made over the last couple of decades in requirements engineering (RE) processes and methods have been accompanied by a rapid rise in the use of diverse machine learning (ML) techniques to resolve several multifaceted RE issues. One such challenging issue is the effective identification and classification of software requirements on Stack Overflow (SO) for building quality systems. ML-based techniques applied to this issue have produced quite substantial results, more effective than those of the usual natural language processing (NLP) techniques. Nonetheless, a complete, systematic, and detailed understanding of these ML-based techniques is scarce. Objective. To identify and classify the kinds of ML algorithms used for software requirements identification, primarily on SO. Method. This paper reports a systematic literature review (SLR) collecting empirical evidence published up to May 2020. Results. The SLR found 2,484 published papers related to RE and SO. The data extraction process showed that (1) Latent Dirichlet Allocation (LDA) topic modeling is among the most widely used ML algorithms in the selected studies and (2) precision and recall are among the most commonly used measures for evaluating the performance of these algorithms. Conclusion. Our SLR revealed that while ML algorithms have phenomenal capabilities for identifying software requirements on SO, they still face various open problems that limit their practical application and performance. The study calls for close collaboration between the RE and ML communities to handle the open issues confronted in developing real-world ML-based quality systems.
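Precision and recall, the evaluation measures the SLR found most common, reduce to simple counts over a binary classification. The sketch below uses invented labels for a requirements-identification task (1 = the SO post contains a software requirement, 0 = it does not); it is illustrative only, not data from any reviewed study.

```python
# Hypothetical gold labels vs. classifier predictions for 8 SO posts.
gold = [1, 1, 0, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)  # true positives
fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)  # false positives
fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)  # of posts flagged as requirements, how many were
recall = tp / (tp + fn)     # of actual requirements, how many were found
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# → precision = 0.75, recall = 0.75
```

The two measures trade off against each other, which is why the reviewed studies usually report both rather than a single accuracy figure.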
Politics is one of the hottest and most commonly mentioned and viewed topics on social media networks nowadays. Microblogging platforms like Twitter and Weibo are widely used by politicians who have huge numbers of followers and supporters on those platforms. Studying the supporters' networks of political leaders is essential because it can aid decision making when predicting their political futures. This study focuses on the supporters' networks of three famous political leaders of Pakistan, namely, Imran Khan (IK), Maryam Nawaz Sharif (MNS), and Bilawal Bhutto Zardari (BBZ), using social network analysis and semantic analysis. The proposed method (1) detects and removes fake supporters, (2) mines communities in the politicians' social networks, (3) investigates the supporters' reply network for conversations between supporters about each leader, and (4) analyses the retweet network for information diffusion of each political leader. Furthermore, sentiment analysis of the politicians' supporters is performed using machine learning techniques, which ultimately revealed the strongest supporter network among the three political leaders. Analysis of this data reveals that, as of October 2017, (1) IK was the most renowned of the three politicians and had the strongest supporters' community while using Twitter in a very controlled manner, (2) BBZ had the weakest supporters' network on Twitter, and (3) the supporters of political leaders in Pakistan are flexible on Twitter, communicating with each other, and any group of supporters has a low level of isolation.
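Analysing a retweet network for information diffusion, as in step (4) above, can be sketched by counting how often each leader's tweets are retweeted (the in-degree of the leader in the retweet graph). The edges and user handles below are invented; only the three leaders' initials come from the abstract.

```python
from collections import Counter

# Hypothetical retweet edges: (retweeter, original author).
retweets = [
    ("u1", "IK"), ("u2", "IK"), ("u3", "IK"),
    ("u1", "MNS"), ("u4", "MNS"),
    ("u5", "BBZ"),
]

# In-degree of each leader = how widely their tweets diffuse.
indeg = Counter(author for _, author in retweets)
ranking = indeg.most_common()
print(ranking)  # → [('IK', 3), ('MNS', 2), ('BBZ', 1)]
```

Real analyses would weight edges, track diffusion over time, and run community detection on the full graph, but the in-degree ranking already mirrors the abstract's finding of IK's strongest and BBZ's weakest network.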
Context. Social media platforms such as Facebook and Twitter carry a large volume of people's opinions about politics and political leaders, which makes them a good source of information for researchers working on tasks such as election prediction. Objective. To identify, categorize, and present a comprehensive overview of the approaches, techniques, and tools used for election prediction on Twitter. Method. We conducted a systematic mapping study (SMS) of election prediction on Twitter, providing empirical evidence for the work published between January 2010 and January 2021. Results. The search identified 787 studies related to election prediction on Twitter, of which 98 primary studies were selected after defining and applying several inclusion/exclusion criteria. The results show that most studies implemented sentiment analysis (SA), followed by volume-based and social network analysis (SNA) approaches. The majority employed supervised learning techniques, followed by lexicon-based SA, volume-based approaches, and unsupervised learning. In addition, 18 types of dictionaries were identified. Elections in 28 countries were analyzed, mainly in the USA (28%) and India (25%). Furthermore, 50% of the primary studies used English tweets. The demographic data showed that academic organizations and conference venues are the most active. Conclusion. The evolution of the work published over the past 11 years shows that most studies employed SA, with SNA techniques implemented less often. Appropriately labelled political datasets are not available, especially in languages other than English, and deep learning needs to be employed in this domain to obtain better predictions.
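The lexicon-based SA approach that many of the mapped studies used can be sketched as a dictionary lookup: each tweet is scored by counting its positive and negative words. The tiny lexicon and tweets below are invented; real studies use the dictionaries the SMS catalogues (18 types were identified).

```python
# Toy sentiment lexicon; real studies use curated dictionaries such as
# general-purpose opinion word lists, not a hand-picked set like this.
POS = {"win", "great", "support", "hope"}
NEG = {"lose", "corrupt", "fail", "bad"}

def sentiment(tweet: str) -> int:
    """Positive-minus-negative word count; >0 positive, <0 negative."""
    words = tweet.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

print(sentiment("Great hope for a win"))    # → 3
print(sentiment("corrupt and bad fail"))    # → -3
```

Volume-based prediction then aggregates such scores (or raw mention counts) per candidate, which is why lexicon coverage for non-English tweets is the bottleneck the conclusion points to.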