Coronavirus disease (COVID-19) had infected more than 28.3 million people and killed over 913,000 worldwide as of 11 September 2020. To combat the spread of COVID-19 during this pandemic, effective testing methodologies and immediate medical treatment are urgently required. Chest X-rays are a widely available modality for immediate diagnosis of COVID-19. Hence, automated detection of COVID-19 from chest X-ray images using machine learning approaches is in great demand. This paper proposes a model for detecting COVID-19 from chest X-ray images, introducing the novel concept of cluster-based one-shot learning. The introduced concept has the advantage of learning from only a few samples, in contrast to the many samples required by deep learning architectures. The proposed model is a multi-class classifier that distinguishes four classes: bacterial pneumonia, viral pneumonia, normal, and COVID-19. It is based on a decision-level ensemble of Generalized Regression Neural Network (GRNN) and Probabilistic Neural Network (PNN) classifiers. The effectiveness of the proposed model has been demonstrated through extensive experimentation on a publicly available dataset of 306 images. The proposed cluster-based one-shot learning was found to be more effective with the GRNN-PNN ensemble in distinguishing COVID-19 images from those of the other three classes, and experiments show that the model outperforms contemporary deep learning architectures. Being the first of its kind in the literature, the concept of cluster-based one-shot learning is expected to open up several new dimensions in machine learning that warrant further research across various applications.
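The decision-level ensemble described in this abstract can be sketched in miniature. The following is an illustrative assumption, not the paper's implementation: a PNN scores each class by a Parzen-window (Gaussian-kernel) density estimate, a GRNN applied to classification produces a kernel-weighted average of one-hot targets, and the two score vectors are combined before taking the argmax. Function names, the `sigma` value, and the toy data are all invented for illustration.

```python
import numpy as np

def _kernels(X_train, x, sigma):
    """Gaussian kernel response of a query point to every training sample."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def pnn_scores(X_train, y_train, x, classes, sigma=1.0):
    """PNN: per-class Parzen-window density estimate (mean kernel response)."""
    k = _kernels(X_train, x, sigma)
    return np.array([k[y_train == c].mean() for c in classes])

def grnn_scores(X_train, y_train, x, classes, sigma=1.0):
    """GRNN used for classification: kernel-weighted average of one-hot targets."""
    k = _kernels(X_train, x, sigma)
    onehot = (y_train[:, None] == classes[None, :]).astype(float)
    return k @ onehot / k.sum()

def ensemble_predict(X_train, y_train, X_test, sigma=1.0):
    """Decision-level ensemble: sum the normalized scores of both models
    and predict the class with the highest combined score."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        p = pnn_scores(X_train, y_train, x, classes, sigma)
        g = grnn_scores(X_train, y_train, x, classes, sigma)  # already sums to 1
        preds.append(classes[int(np.argmax(p / p.sum() + g))])
    return np.array(preds)
```

With one-shot learning, `X_train` would hold only one (or a few) cluster representatives per class, which is exactly the regime where such kernel-based models need no iterative training.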
The recent outbreak of the novel Coronavirus Disease (COVID-19) has given rise to diverse health issues due to its high transmission rate and limited treatment options. Almost the whole world was, at some point, placed in lockdown in an attempt to stop the spread of the virus, with resulting psychological and economic sequelae. As countries ease lockdown measures and reopen industries, ensuring a healthy workplace for employees has become imperative. This paper therefore presents a mobile app-based intelligent portable healthcare (pHealth) tool, called iWorkSafe, to assist industries in detecting employees who may be suspected of COVID-19 infection and in need of primary care. Developed mainly for low-end Android devices, the iWorkSafe app hosts a fuzzy neural network model that integrates employees' health status from the industry's database, proximity and contact-tracing data from mobile devices, and user-reported COVID-19 self-test data. Using built-in Bluetooth Low Energy sensing together with K-Nearest Neighbor and K-means techniques, the app tracks users' proximity and traces contact with other employees. Additionally, it uses a logistic regression model to calculate the COVID-19 self-test score and a Bayesian Decision Tree model to check employees' real-time health condition via an intelligent e-health platform for further clinical attention. Rolled out in an apparel factory with 12 employees as a test case, the pHealth tool generates alerts to maintain social distancing among employees inside the facility. In addition, the app helps employees estimate the risk of possible COVID-19 infection from the collected data, and the resulting score was found effective in estimating the personal health condition of the app user.

INDEX TERMS Industry 4.0, artificial intelligence, machine learning, mobile app, digital health, safe workplace, worker safety, Coronavirus.
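The logistic-regression self-test score mentioned in this abstract can be illustrated with a minimal sketch. The symptom names, weights, and bias below are hypothetical placeholders for illustration only; the paper's fitted coefficients are not given here.

```python
import math

# Hypothetical coefficients -- NOT the paper's fitted model, purely illustrative.
WEIGHTS = {
    "fever": 1.2,
    "dry_cough": 1.0,
    "loss_of_smell": 1.8,
    "contact_with_case": 2.0,
}
BIAS = -3.0

def self_test_score(symptoms):
    """Logistic-regression-style risk score in [0, 1] from binary symptom flags.

    `symptoms` maps symptom names to True/False; unknown keys are ignored.
    """
    z = BIAS + sum(WEIGHTS[k] * int(v) for k, v in symptoms.items() if k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))
```

A score near 1 would flag the employee for further clinical attention, while a score near 0 suggests low risk under this toy model.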
Violent event detection is an interesting research problem and a branch of action recognition and computer vision. Detecting violent events is significant for both the public and private sectors, and automatic surveillance systems are attractive because of their wide range of applications in abnormal event detection. For many years, researchers have worked on violent activity detection and have proposed various feature descriptors based on both vision and acoustic technology. Challenges still exist due to illumination, complex backgrounds, scale changes, sudden variation, and slow motion in videos. Consequently, violent event detection here is based on the texture features of frames in both crowded and uncrowded scenarios. The proposed method uses Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) features as descriptors for detecting violent events. Finally, the prominent features are fed to five different supervised classifiers. The proposed feature extraction technique was evaluated on two standard benchmark datasets, Hockey Fight (HF) and Violent Flows (VF).
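Of the two texture descriptors named in this abstract, LBP is the simpler to sketch. The following is a minimal, pure-NumPy version of the basic 3x3 Local Binary Pattern histogram (the function name and setup are assumptions, not the paper's exact configuration; GLCM features would be computed separately, e.g. via scikit-image's `graycomatrix`):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded by
    thresholding its 8 neighbors against the center pixel, giving an 8-bit
    code; the normalized 256-bin histogram of codes is the texture feature."""
    # Clockwise neighbor offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```

Per-frame histograms like this one (concatenated with GLCM statistics such as contrast and homogeneity) would then form the feature vector passed to the supervised classifiers.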