What, how and who: Three dimensions to analyse educational practice

Abstract: The paper proposes a system for analysing educational practice based on three dimensions (how, what and who) and different units of analysis (class sessions, typical classroom activities, episodes, cycles). The three dimensions also allow us to review the different proposals put forth according to their focus: the contents developed in interaction (the what dimension), the way interaction develops (the how dimension), and the degree of responsibility reached by students (the who dimension). The proposed analysis system is described in some detail through various examples, and the types of results that may be obtained are illustrated with the studies we have been carrying out. Finally, three issues that still require consensus among researchers are reviewed: the unit-of-analysis problem, the role of cognitive analysis of academic tasks, and the distinction between discourse analysis and educational practice analysis.

Keywords: discourse analysis, educational practice analysis, mediations, cold and warm cognition.
Effective instructional explanations help students construct coherent mental representations. One condition for doing so is that they must be tailored to students' needs. It is hypothesized that explanations are more helpful if they also explicitly help students detect problems in their mental representations, as this provokes an impasse that motivates students to process the explanation deeply. Participants were provided with computer-based material on plate tectonics and then with explanatory support in the form of either a tailored explanation preceded by an impasse-trigger (the I + E group) or an identical explanation without the impasse-trigger (the noI + E group). After reading the materials, they solved retention and transfer tests; their flawed ideas were also counted. Participants in the I + E group recalled more correct information, generated more transfer solutions, and showed fewer flawed ideas than those in the noI + E group. This indicates that tailored explanations combined with impasse-triggers that make explicit the conflicts between the text model and the students' models can indeed foster deep learning.

Keywords: instructional explanations · self-explanations · mental model repair · students' mental models · impasses · impasse-triggers

One key question in the promotion of deep learning is under what conditions instructional explanations work effectively. Identifying the factors that make instructional explanations successful is relevant to the extent that these explanations are prevalent (in normal tutoring, classroom lectures, and instructional materials) and offer potential advantages (they are complete and coherent, and they can help learners when they are stuck). However, evidence from tutoring sessions suggests that tutorial explanations are not associated with learning, whereas explanations generated by the students (i.e., self-explanations) have consistently
Abstract: Computer-based learning environments include verbal aids that help learners gain a deep understanding. These aids can be presented in either the visual or the auditory modality. The problem is that it is not clear-cut how to present them, for two reasons: the modality principle [Mayer, R.E., 2001. Multimedia Learning. Cambridge University Press, New York] is not applicable, because verbal aids do not usually come with related pictures, and the little empirical research on the question provides diverging results. Our aim was twofold: to present a research framework that makes it possible to reinterpret prior findings, and to test it empirically, as it provides guidelines about how to present verbal aids. The framework distinguishes between two types of verbal aids: regulatory aids, which guide the learners' decision-making process during learning, and explanatory aids, which help learners revise their understanding of the to-be-learned contents. It suggests that explanatory aids should be presented visually and regulatory aids auditorily. In two experiments, participants learned from a computer-based learning environment on plate tectonics and solved retention and inference questions afterwards. They received verbal aids presented in different modalities depending on the condition. Participants receiving visual explanatory aids outperformed those receiving auditory explanatory aids on both retention and inference questions. Participants receiving auditory regulatory aids showed no advantage; the same pattern was obtained in the second experiment, in which the auditory aids were delivered by a pedagogical agent. The results have practical implications for the design of computer-based materials.
Interactive multimedia learning environments incorporate interactive features, such as questioning, through which questions are posed to students and feedback is delivered on their answers. An experiment was conducted comparing two forms of questioning. The participants learned about geology with a multimedia environment that included questioning episodes. In the interactive questioning condition, the participants were presented with the question, chose an answer from a set of three options, and received the corresponding feedback. In the noninteractive questioning condition, the participants were presented with the same question and options, but they were not required to make a choice; instead, they were exposed to the feedback for each option. In a control condition, the participants received statements equivalent to those in the question and feedback. After learning with the environment, the participants took retention and transfer tests. The retention and transfer results showed that the participants in the interactive condition outperformed those in the control and noninteractive conditions, which did not differ from each other. This finding means that participation is critical for questioning to work effectively, which has implications for the design of learning environments.