Rationale refers to the human judgments, sets of reasons, or intentions that explain a particular decision. Nowadays, crowd-users argue for and justify their decisions about market-driven software applications on social media platforms, thus generating software rationale. Such rationale information can be of pivotal importance for software and requirements engineers, as it reveals end-users' tacit knowledge that can improve software design and development decision-making and enhance the performance of existing software applications. For this purpose, we propose an automated approach to capture and analyze end-user reviews containing rationale information, focusing on low-rating applications in the Amazon app store and using Natural Language Processing (NLP) and supervised machine learning (ML) classification methods. The existing literature emphasizes high-rating applications while ignoring low-rating ones, which introduces potential bias. We therefore examined 59 comparatively low-ranked, market-based software applications from the Amazon app store, covering various software categories, to capture and identify crowd-users' justifications. Next, using a grounded theory and content analysis approach, we studied and recorded how crowd-users analyze and explain their rationale in terms of the issues they encounter, the attacking or supporting arguments they register, and their decisions to update or uninstall applications. To achieve the best results, we conducted an experimental study comparing various ML algorithms, i.e., Multinomial Naive Bayes (MNB), Logistic Regression (LR), Random Forest (RF), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), AdaBoost, and a Voting classifier, on the end-user rationale data set: we preprocessed the input data, applied feature engineering, balanced the data set, and then trained and tested the ML algorithms with a standard cross-validation approach. The MLP, Voting, and RF classifiers achieved the best results, with average accuracies of 93%, 93%, and 90%, respectively. We also plot ROC curves for the high-performing classifiers to identify which classifiers yield the best performance under an under-sampling or oversampling balancing approach. Additionally, we obtained average Precision, Recall, and F-measure values of 98%, 94%, and 96% for identifying supporting rationale elements and 97%, 95%, and 96% for identifying decision rationale elements in the user comments. The proposed approach outperforms existing rationale approaches, with better Precision, Recall, and F-measure values.
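To make the experimental pipeline concrete, the following is a minimal sketch, not the paper's implementation, of the classifier-comparison step described above, assuming scikit-learn and imbalanced-learn. The function name compare_classifiers, the TF-IDF feature choice, the oversampling strategy, and the 10-fold cross-validation default are illustrative assumptions; the paper's actual preprocessing and feature engineering may differ.

from imblearn.over_sampling import RandomOverSampler
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier


def compare_classifiers(reviews, labels, cv=10):
    """Compare the abstract's seven classifiers on a rationale data set.

    reviews: list of raw review strings; labels: per-review rationale labels.
    Both are placeholders for the paper's (unavailable) Amazon review data.
    """
    # Feature engineering: TF-IDF vectors over the review text.
    X = TfidfVectorizer(stop_words="english").fit_transform(reviews)

    # Balance the classes; oversampling shown, under-sampling is analogous.
    X_bal, y_bal = RandomOverSampler(random_state=42).fit_resample(X, labels)

    models = {
        "MNB": MultinomialNB(),
        "LR": LogisticRegression(max_iter=1000),
        "RF": RandomForestClassifier(),
        "MLP": MLPClassifier(max_iter=500),
        "KNN": KNeighborsClassifier(),
        "AdaBoost": AdaBoostClassifier(),
    }
    # The Voting classifier combines the six base models by majority vote.
    models["Voting"] = VotingClassifier(list(models.items()), voting="hard")

    # Cross-validated accuracy per classifier (fold count is an assumption).
    for name, clf in models.items():
        scores = cross_val_score(clf, X_bal, y_bal, cv=cv, scoring="accuracy")
        print(f"{name}: mean accuracy = {scores.mean():.2%}")

A call such as compare_classifiers(review_texts, rationale_labels) would print the mean cross-validated accuracy of each classifier; swapping RandomOverSampler for imblearn.under_sampling.RandomUnderSampler gives the under-sampling variant whose ROC behavior the abstract compares.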