Purpose
The purpose of this study is twofold. First, to overcome the limited availability of annotated data and to identify the cognitive level of learning objectives efficiently, this study adopts transfer learning by using word2vec together with bidirectional gated recurrent units (GRU), which fully take the context into account and improve the classification performance of the model. The study adds a layer based on an attention mechanism (AM), which captures the context vector and gives keywords a higher weight for text classification. Second, this study explains the model's results with local interpretable model-agnostic explanations (LIME).
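As an illustration, the following minimal sketch shows one way such a BiGRU-attention classifier could be assembled in Keras; the hyperparameters (VOCAB_SIZE, MAX_LEN, EMB_DIM, GRU units) and the pretrained word2vec weight matrix w2v_matrix are hypothetical, since the abstract does not specify the exact configuration used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


class AttentionPooling(layers.Layer):
    """Additive attention: scores each time step, then pools the hidden
    states into one context vector weighted toward the most informative words."""

    def build(self, input_shape):
        dim = input_shape[-1]
        self.w = self.add_weight(name="w", shape=(dim, dim), initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(dim,), initializer="zeros")
        self.u = self.add_weight(name="u", shape=(dim, 1), initializer="glorot_uniform")

    def call(self, inputs):
        # Score each hidden state, normalize over time steps, then pool.
        scores = tf.tanh(tf.tensordot(inputs, self.w, axes=1) + self.b)
        weights = tf.nn.softmax(tf.tensordot(scores, self.u, axes=1), axis=1)
        return tf.reduce_sum(weights * inputs, axis=1)


VOCAB_SIZE, MAX_LEN, EMB_DIM = 10_000, 30, 300  # hypothetical values

inputs = layers.Input(shape=(MAX_LEN,))
# Embedding initialized from pretrained word2vec vectors (transfer learning);
# pass weights=[w2v_matrix] once the pretrained matrix is available.
x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inputs)
x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(x)
x = AttentionPooling()(x)
outputs = layers.Dense(6, activation="softmax")(x)  # six Bloom's taxonomy levels

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```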
Design/methodology/approach
Bloom's taxonomy levels of cognition are commonly used as a reference standard for classifying e-learning content. Many action verbs in Bloom's taxonomy, however, overlap across different levels of the hierarchy, causing uncertainty about the expected cognitive level. Some studies have examined the cognitive classification of e-learning content, but none has examined learning objectives, and most of these studies adopt only classical machine learning algorithms. The main constraint of this study is the limited availability of annotated learning-objective data sets. This study managed to build a data set of 2,400 learning objectives, but this size remains limited.
Findings
This study’s experiments show that the proposed model achieves the best accuracy (90.62%), F1-score and loss. The proposed model succeeds in classifying learning objectives that contain ambiguous verbs from Bloom’s taxonomy action verb lists, whereas the same model without the attention layer fails. The LIME explainer helps visualize the most essential features of the text, which contributes to justifying the final classification.
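For illustration, a single prediction can be explained with LIME's text explainer roughly as follows; the class names, the vectorize helper and the example objective are hypothetical (the abstract does not give them), and model refers to the BiGRU-attention sketch above.

```python
from lime.lime_text import LimeTextExplainer

# Hypothetical label set: the six cognitive levels of Bloom's taxonomy.
CLASS_NAMES = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def predict_proba(texts):
    # vectorize() is an assumed helper mapping raw strings to padded token ids.
    return model.predict(vectorize(texts))

explainer = LimeTextExplainer(class_names=CLASS_NAMES)
explanation = explainer.explain_instance(
    "Design an experiment to test the hypothesis",  # a made-up learning objective
    predict_proba,
    num_features=5,  # show the five most influential words
)
print(explanation.as_list())  # (word, weight) pairs justifying the prediction
```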
Originality/value
The main objective of this study is to propose a model that outperforms the baseline models for classifying learning objectives according to the six cognitive levels of Bloom's taxonomy. To this end, the study builds a bidirectional GRU (BiGRU)-attention model by combining the BiGRU algorithm with the AM and feeds the architecture with word2vec embeddings. To prove the effectiveness of the proposed model, the study compares it with a plain GRU and with four classical machine learning algorithms widely used for the cognitive classification of text: naive Bayes, logistic regression, support vector machine and K-nearest neighbors. The main constraint of this study is the absence of annotated data: no learning-objective data set annotated with the cognitive levels of Bloom's taxonomy exists. To overcome this problem, the authors built the data set themselves.
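For illustration, the classical baselines could be evaluated with scikit-learn roughly as follows; the TF-IDF features, the cross-validation setup and the texts/labels variables are assumptions, as the abstract does not describe how the baselines were implemented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# The four classical baselines named in the abstract.
baselines = {
    "naive Bayes": MultinomialNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": LinearSVC(),
    "KNN": KNeighborsClassifier(),
}

# texts and labels stand for the annotated learning-objective data set.
for name, clf in baselines.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, texts, labels, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.4f}")
```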