The capacity to relate high-level information and reason over it is a defining characteristic of human intelligence. Despite remarkable progress in artificial intelligence, recent machine reading comprehension (MRC) models still rely heavily on high-dimensional, word-based distributed representations. Because these models answer questions over complex textual corpora by statistical means and are evaluated with accuracy-based metrics, there is no guarantee that they actually learn the skills the tasks are intended to require. Explainability has therefore emerged as a requirement for ensuring that MRC models learn the desired skills. In this paper, we propose an end-to-end natural language reasoning model based on sets of high-level aggregated representations that promote operational explainability. To this end, we introduce sequential multi-head attention and a loss regularization function. We analyze the proposed approach on two reasoning-oriented question answering datasets, bAbI and NewsQA.
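As a rough illustration of the two proposed components, sequential multi-head attention over aggregated representations and a regularization term added to the loss, the following PyTorch sketch chains attention hops over a memory of sentence-level vectors and returns the per-hop attention maps for inspection. The dimensions, hop count, the entropy-based penalty, and all names are illustrative assumptions, not the paper's specification.

```python
# A minimal sketch (not the paper's implementation) of sequential multi-head
# attention with an illustrative loss regularizer. All module names, sizes,
# and the entropy penalty below are assumptions made for demonstration only.
import torch
import torch.nn as nn


class SequentialMultiHeadAttention(nn.Module):
    """Applies multi-head attention in sequential hops: the query refined by
    one hop attends over the memory again in the next hop."""

    def __init__(self, embed_dim: int = 64, num_heads: int = 4, num_hops: int = 3):
        super().__init__()
        self.hops = nn.ModuleList(
            nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            for _ in range(num_hops)
        )

    def forward(self, query, memory):
        # query: (batch, 1, embed_dim); memory: (batch, seq_len, embed_dim)
        attn_maps = []
        for hop in self.hops:
            query, weights = hop(query, memory, memory)  # attend, refine query
            attn_maps.append(weights)  # keep maps so reasoning can be inspected
        return query, attn_maps


def entropy_regularizer(attn_maps, coeff: float = 0.01):
    """Illustrative regularization term: penalizes diffuse (high-entropy)
    attention so each hop focuses on a few supporting facts. The paper's
    actual regularizer may differ."""
    loss = 0.0
    for w in attn_maps:
        w = w.clamp_min(1e-9)  # avoid log(0)
        loss = loss + (-(w * w.log()).sum(dim=-1)).mean()
    return coeff * loss


if __name__ == "__main__":
    model = SequentialMultiHeadAttention()
    question = torch.randn(2, 1, 64)  # one aggregated question vector per example
    facts = torch.randn(2, 10, 64)    # ten aggregated sentence vectors per example
    answer_repr, maps = model(question, facts)
    reg = entropy_regularizer(maps)   # added to the task loss during training
    print(answer_repr.shape, len(maps), reg.item())
```

In this sketch, the stored attention maps are what make the reasoning operationally inspectable: each hop's distribution over the aggregated sentence representations can be read as which facts the model consulted at that step.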