In this era of technology, smartphones play a vital role in individuals' lives. Nowadays, we tend to use smartphones to store critical information such as banking details and documents because of their portability. Android is the most preferred smartphone operating system according to consumer buying interest. However, Android is also the platform most targeted by malware, largely because of its support for third-party customization, and these vulnerabilities can result in identity theft, Denial of Service (DoS), ransomware attacks, and more. In this work, we present the identification of the Android malware MysteryBot, a banking Trojan, together with static and dynamic analysis results. This paper also explains recommended steps to protect an Android device from such malware infections.
Stroke is a time-sensitive illness that, without rapid care and diagnosis, can have detrimental effects on the patient. Given the increasing synergy between technology and medical diagnosis, caretakers can enhance patient management by procedurally mining and storing patients' medical records. It is therefore essential to explore how risk variables in patient health records interconnect and how each individually affects stroke prediction. Using explainable Artificial Intelligence (XAI) techniques, we exposed the imbalance in the dataset and improved our model's performance: we showed how oversampling improves the model, then used XAI to investigate its decisions and oversample a specific feature for even better performance. We propose explainable AI both as a technique to improve model performance and as a source of trustworthiness for practitioners. We used four evaluation metrics: recall, precision, accuracy, and F1 score. With the original data, the F1 score was 0% because the data were imbalanced, with non-stroke records significantly outnumbering stroke records. After oversampling, the second model achieved an F1 score of 81.78%. We then applied two XAI techniques, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), to further analyse how the model reached its decisions; this led us to investigate and oversample a specific feature, yielding a new F1 score of 83.34%. We suggest the use of explainable AI as a technique to further investigate a model's method for decision-making.
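The oversampling step described above can be sketched minimally. This is not the authors' implementation; it is a toy illustration, assuming random duplication of minority-class records and hypothetical feature and label names (`age`, `stroke`):

```python
import random

def oversample(records, label_key="stroke", seed=0):
    """Randomly duplicate minority-class records until classes are balanced.

    A minimal sketch of the oversampling idea described in the abstract;
    the dataset and field names here are hypothetical.
    """
    rng = random.Random(seed)
    pos = [r for r in records if r[label_key] == 1]
    neg = [r for r in records if r[label_key] == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    balanced = list(records)
    # Append random duplicates of minority records until counts match.
    while len(minority) + (len(balanced) - len(records)) < len(majority):
        balanced.append(rng.choice(minority))
    return balanced

# Hypothetical, heavily imbalanced toy data: 8 non-stroke vs 2 stroke records.
data = [{"age": 40 + i, "stroke": 0} for i in range(8)] + \
       [{"age": 70 + i, "stroke": 1} for i in range(2)]
balanced = oversample(data)
counts = {c: sum(r["stroke"] == c for r in balanced) for c in (0, 1)}
print(counts)  # → {0: 8, 1: 8}
```

On an imbalanced set like this, a classifier can reach high accuracy by always predicting "non-stroke" while its F1 score for the stroke class collapses to 0%, which is why balancing precedes training in the workflow above.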
The ongoing danger of ransomware has led to an arms race between creating and identifying novel approaches. Although detection and mitigation systems have been created and are widely deployed, their reactive nature means they must constantly evolve and be updated, because malicious code and its behavior can frequently be altered to evade detection methods. In this study, we present a classification method that combines static and dynamic data to improve the precision of Locky ransomware detection and classification. We trained supervised machine learning algorithms using cross-validation and used a confusion matrix to observe accuracy, enabling a systematic comparison of the algorithms. The decision tree algorithm achieved an accuracy of 97%, naïve Bayes 95%, random tree 63%, and ZeroR 50%.
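The evaluation protocol above (cross-validation plus a confusion matrix) can be sketched with the simplest of the listed learners, the ZeroR baseline, which always predicts the majority class. This is an illustrative toy, not the study's code; the labels and data are hypothetical:

```python
from collections import Counter

def zero_r_fit(labels):
    """ZeroR baseline: always predict the majority class of the training fold."""
    return Counter(labels).most_common(1)[0][0]

def cross_validate(y, k=5):
    """k-fold cross-validation for ZeroR; returns mean accuracy and a pooled
    confusion matrix stored as {(true_label, predicted_label): count}.

    A minimal sketch of the evaluation protocol described above; the study
    itself compared several supervised learners on static and dynamic features.
    """
    n = len(y)
    fold = n // k
    confusion = Counter()
    accs = []
    for i in range(k):
        test_idx = set(range(i * fold, (i + 1) * fold))
        train_y = [y[j] for j in range(n) if j not in test_idx]
        pred = zero_r_fit(train_y)
        hits = 0
        for j in test_idx:
            confusion[(y[j], pred)] += 1
            hits += (y[j] == pred)
        accs.append(hits / len(test_idx))
    return sum(accs) / k, confusion

# Hypothetical balanced toy labels (1 = ransomware, 0 = benign): ZeroR lands
# at 50% accuracy, matching its role as a chance-level baseline in the study.
y = [0, 1] * 10
acc, cm = cross_validate(y, k=5)
print(round(acc, 2))  # → 0.5
```

Stronger learners such as the decision tree would plug into the same loop in place of `zero_r_fit`, which is what makes the per-algorithm comparison systematic.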