Generating one's own examples for previously encountered new concepts is a common and highly effective learning activity, at least when the examples are of high quality. Unfortunately, however, students are not able to accurately evaluate the quality of their own examples, and instructional support measures such as idea unit standards, which have been found to enhance the accuracy of self-evaluations in other learning activities, have turned out to be ineffective in example generation. Hence, at least when learners generate examples in self-regulated learning settings in which they scarcely receive instructor feedback, they cannot make beneficial regulation decisions concerning when to continue and when to stop investing effort in example generation. The present study aimed at investigating the benefits of a relatively parsimonious means to enhance judgment accuracy in example generation tasks, namely the provision of expert examples as external standards. For this purpose, in a 2 × 2 factorial experiment we varied whether N = 131 university students were supported by expert example standards (with vs. without) and idea unit standards (with vs. without) in evaluating the quality of self-generated examples that illustrated new declarative concepts. We found that the provision of expert example standards reduced bias and enhanced absolute judgment accuracy, whereas idea unit standards had no beneficial effects. We conclude that expert example standards are a promising means to enhance judgment accuracy in evaluating the quality of self-generated examples.
Although e-learning has become an important means of promoting the learning experience, still little is known about the readiness of adult learners for e-learning in continuing vocational education. By exploring perceived challenges and benefits, our aim was to identify dimensions that define e-learning readiness. To this end, we employed a study design with qualitative and quantitative components, consisting of both semi-structured interviews and an online survey regarding biography, personality, learning behavior, and general attitudes toward e-learning. The continuing vocational education course that we investigated came from the field of project management. The learner group was heterogeneous regarding their biographical and occupational backgrounds. Our results suggest several dimensions of e-learning readiness, namely motivation, learning strategies/regulation, attitudes toward learning, personality-associated aspects, and digital literacy. These findings are in line with previous research only to some extent, but reveal the necessity of redefining single dimensions of e-learning readiness in order to develop an inventory that is generalizable across different adult learner groups. Based on these assumptions, a new measure of e-learning readiness needs to be proposed in future research as a next step.
In acquiring new conceptual knowledge, learners often engage in the generation of examples that illustrate the to-be-learned principles and concepts. Learners are, however, poor at judging the quality of self-generated examples, which can result in suboptimal regulation decisions. A promising means of fostering judgment accuracy in this context is providing external standards in the form of expert examples after learners have generated their own examples. Empirical evidence on this support measure, however, is scarce. Furthermore, it is unclear whether providing learners with poor examples, which include typical wrong illustrations, as negative example standards after they have generated their own examples would increase judgment accuracy as well. When they have generated poor examples themselves, learners might recognize similarities between their examples and the negative ones, which could result in more cautious and hence likely more accurate judgments concerning their own examples. Against this background, in a 2 × 2 factorial experiment we prompted N = 128 university students to generate examples that illustrate previously encountered concepts and to self-evaluate these examples afterwards. During self-evaluation, we varied whether learners were provided with expert example standards (with vs. without) and negative example standards (with vs. without). In line with previous findings, expert example standards enhanced learners’ judgment accuracy. The newly developed negative example standards showed inconsistent and partly even detrimental effects on judgment accuracy. The results substantiate the notion that expert example standards can serve as a promising means of fostering accurate self-evaluations in example generation tasks, whereas negative example standards should be treated with caution.