“…It is worth noting this because numerous metacognition studies have not examined test–retest reliability (Bacow et al., 2009; Larøi et al., 2009; Hsu, 2010; Cook et al., 2014; Martin et al., 2014; Bailey and Wells, 2015; Fernie et al., 2015; Kollmann et al., 2016; Kolubinski et al., 2017; Alma et al., 2018; Caselli et al., 2018; Lloyd et al., 2018). In addition, the present findings are consistent with some previous studies (Cartwright-Hatton et al., 2004; Wilson et al., 2011; Lachat Shakeshaft et al., 2020) regarding the stability of metacognition over time, although the correlations reported there were weak (r = 0.24–0.34; Cartwright-Hatton et al., 2004) or unstable across some domains of the metacognitive questionnaire (r = 0.24–0.90; Wilson et al., 2011) when compared with our test–retest correlations (r = 0.70–0.81). Regarding the Cronbach’s alpha coefficient, the results revealed acceptable internal reliability both for the full 10-item scale (α = 0.64) and for the five CMS metacognition items (α = 0.63); the exception was the CMS self-judgment accuracy subscale, for which Cronbach’s α was 0.59.…”
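For reference, and not drawn from the study itself, the internal-consistency index reported above is the standard Cronbach’s alpha, which for a scale of k items (here 10 or 5) with item variances σ² and total-score variance is conventionally defined as
\[
\alpha \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{X}^{2}}\right),
\]
where \(\sigma_{i}^{2}\) is the variance of item \(i\) and \(\sigma_{X}^{2}\) is the variance of the total score across respondents.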