COVID-19 was first identified in Wuhan, China, in December 2019 and has become one of the worst pandemics in human history. Recent studies report that COVID-19 is transmitted among humans through respiratory droplets or direct contact. The pandemic has spread to more than 210 countries around the world, and as of February 18, 2021, just over a year later, a total of 110,533,973 confirmed cases of COVID-19 had been reported, with a death toll of about 2,443,091. COVID-19 is caused by a new member of the coronavirus family, whose nature, behaviour, transmission, spread, prevention, and treatment are still being investigated. A huge amount of data is accumulating on the pandemic, making it a rich research topic for machine learning, while a worried world population keeps asking when COVID-19 will be over. This study applied machine learning approaches to predict the spread of COVID-19 in many countries. The experimental results of the proposed model showed an overall R² of 0.99 with respect to confirmed cases. A machine learning model was developed to estimate the spread of COVID-19 infection in many countries and the expected period after which the virus could be contained. Globally, our results forecast that COVID-19 infections will decline sharply during the first week of September 2021 and that the outbreak will come to an end shortly afterward.
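As an illustration of the kind of forecasting workflow such a study might use, below is a minimal sketch that fits a logistic growth curve to a synthetic cumulative-case series and extrapolates it forward. The abstract does not name a specific model family, so the logistic form, the synthetic data, and all parameter values here are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative cases modelled as a logistic curve: capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic placeholder series (days since outbreak start vs. cumulative confirmed cases);
# real country-level case counts would be substituted here.
rng = np.random.default_rng(0)
days = np.arange(0, 120, dtype=float)
cases = logistic(days, 1.1e8, 0.08, 60.0) + rng.normal(0, 2e5, days.size)

# Fit the curve and report goodness of fit (R^2), the metric quoted in the abstract.
params, _ = curve_fit(logistic, days, cases, p0=[cases.max(), 0.1, days.mean()], maxfev=10000)
pred = logistic(days, *params)
r_squared = 1 - np.sum((cases - pred) ** 2) / np.sum((cases - cases.mean()) ** 2)
print("R^2 =", round(r_squared, 4))

# Extrapolate 180 days ahead and look for the point where daily new cases flatten out.
future = np.arange(days[-1] + 1, days[-1] + 181)
daily_new = np.diff(logistic(future, *params))
flat_days = future[1:][daily_new < 1000]
print("forecast day when new cases drop below 1000:",
      flat_days[0] if flat_days.size else "beyond horizon")
```

The extrapolation step is what turns a fitted curve into a statement about when the epidemic may wind down; any such date is only as reliable as the assumed growth model and the quality of the reported case data.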
Recognizing avatar faces is an important issue for the security of virtual worlds. In this paper, a novel face recognition technique based on the wavelet transform and a multiscale representation of the adaptive local binary pattern (ALBP) with directional statistical features is proposed to increase the accuracy of recognizing avatars across different virtual worlds. The proposed technique consists of three stages: preprocessing, feature extraction, and recognition. In the preprocessing and feature extraction stages, wavelet decomposition is used to enhance the features shared by images of the same subject, and the multiscale ALBP (MALBP) is used to extract representative features from each facial image. In the recognition stage, the dissimilarity between the wavelet MALBP (WMALBP) histogram, together with its statistical features, of each test image and that of each class model is computed within a nearest-neighbor classifier. Experiments conducted on two virtual-world avatar face image datasets show that our technique outperforms LBP, PCA, the multiscale local binary pattern, ALBP, and ALBP with directional statistical features (ALBPF) in both accuracy and the time required to assign each facial image to its subject.
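A rough sketch of the three-stage pipeline described above is shown below, with plain uniform multiscale LBP (via scikit-image) standing in for the paper's adaptive LBP and its directional statistical features, which the abstract does not fully specify; all function names, the choice of Haar wavelet, and the chi-square dissimilarity are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def wavelet_approximation(image):
    """Preprocessing: keep the low-frequency wavelet approximation sub-band,
    which emphasizes features shared by images of the same subject."""
    cA, _ = pywt.dwt2(np.asarray(image, dtype=float), "haar")
    return cA

def multiscale_lbp_histogram(image, radii=(1, 2, 3)):
    """Feature extraction: concatenate uniform-LBP histograms computed at several radii (scales)."""
    # Rescale to 8-bit so the LBP codes are well defined on the wavelet sub-band.
    image = (255 * (image - image.min()) / (np.ptp(image) + 1e-10)).astype(np.uint8)
    feats = []
    for r in radii:
        p = 8 * r  # common choice: 8 sampling points per unit of radius
        codes = local_binary_pattern(image, P=p, R=r, method="uniform")
        hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def chi_square(h1, h2, eps=1e-10):
    """Histogram dissimilarity used by the nearest-neighbor classifier."""
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify(test_image, class_models):
    """Recognition: assign the test face to the class whose model histogram is closest."""
    query = multiscale_lbp_histogram(wavelet_approximation(test_image))
    return min(class_models, key=lambda label: chi_square(query, class_models[label]))
```

In use, `class_models` would be a dictionary mapping each avatar identity to a reference histogram (for example, the mean WMALBP histogram of its training images), and `classify` returns the identity whose reference is nearest under the chosen dissimilarity.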