Abstract of Processing and effectiveness of formative feedback to increase comprehension and learning of conceptual knowledge in digital environments

Ignacio Máñez Sáez

Students all over the world are expected to develop reading skills and acquire knowledge as part of compulsory education. In school settings, one of the most frequent task-oriented reading activities teachers assign to students is answering questions about an available text (e.g., Ness, 2011; Sánchez & García, 2015; Sánchez, García, & Rosales, 2010), which may support students' comprehension and learning processes (e.g., Anmarkrud, McCrudden, Bråten, & Strømsø, 2013; Cerdán, Gilabert, & Vidal-Abarca, 2011; Lewis & Mensink, 2012; Roelle & Berthold, 2017). Such reading situations can be envisaged as hybrid activities, since students need not only to comprehend text information but also to make decisions while performing the tasks (Rouet, Britt, & Durik, 2017). Thus, students may use the document(s) strategically to provide correct answers, which involves both cognitive processes (e.g., making inferences) and metacognitive processes (e.g., assessing one's own understanding, deciding to search the text, or assessing textual relevance) (Cataldo & Oakhill, 2000; McCrudden & Schraw, 2007; Rouet, 2006; Rouet & Britt, 2011; Rouet et al., 2017; Vidal-Abarca, Mañá, & Gil, 2010). Teachers assign these tasks to assess and/or improve students' comprehension and learning. However, students sometimes struggle to comprehend or use written documents, so effective instructional procedures need to be designed and applied to overcome those difficulties. To that end, teachers (or computers in digital environments) may deliver formative feedback aimed at improving students' abilities or knowledge (Hattie & Gan, 2011; Hattie & Timperley, 2007; Narciss, 2008; Shute, 2008). In fact, according to the National Reading Panel (2000) report, using questions to assess students' comprehension and providing immediate feedback is an effective teaching strategy for improving understanding. This thesis therefore builds on previous research in psychology and education on feedback effectiveness and feedback processing. Here, we focus on question-answering tasks in which formative feedback can be delivered in a timely manner in digital learning environments.

Digital learning environments are growing in popularity in school settings. Technological devices open a wide range of possibilities for delivering question-answering tasks along with item-based formative feedback tailored to the student's current performance (e.g., Azevedo & Bernard, 1995; Mason & Bruning, 2001; Mory, 2004). Whereas teachers usually know only the students' responses to the questions, computer-based systems can trace students' interactions with the materials while they solve the assigned tasks, grade their responses automatically, and provide them with elaborative feedback messages. However, students do not necessarily use computer-based feedback as expected. For feedback to be effective, students need to be willing and able to process the feedback information actively (Carless & Boud, 2018; Timmers & Veldkamp, 2011). External feedback allows students to evaluate whether they need to update their response model (i.e., to close the gap between the current and the desired level of performance). If so, students may update their previously-built representation of the text content, either by adding new information or by restructuring previously-learned knowledge (Shute, 2008). Thus, it is important to understand how students use and process computer-based formative feedback when answering questions about a text in a digital environment.
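To make the tracing capability concrete, here is a minimal sketch of how such an environment can time-stamp a student's interactions so that processing-time and feedback-access measures can be derived later. All names (StudentTrace, the event labels) are our own illustrative assumptions, not identifiers from the thesis materials.

```python
from dataclasses import dataclass, field
from time import monotonic

@dataclass
class StudentTrace:
    """Time-stamped log of a student's interactions with the task."""
    events: list = field(default_factory=list)

    def log(self, event: str, detail: str = "") -> None:
        # Store (timestamp, event, detail); differences between timestamps
        # yield processing-time measures such as text-search time.
        self.events.append((monotonic(), event, detail))

trace = StudentTrace()
trace.log("question_opened", "Q3")
trace.log("text_search_started")
trace.log("answer_submitted", "option B")
trace.log("ef_opened")  # on-demand feedback access is itself a traceable event
```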

Theoretical studies and classical meta-analyses suggest that formative feedback can boost students' learning, although its effectiveness is often inconsistent and variable (e.g., Azevedo & Bernard, 1995; Jaehnig & Miller, 2007; Kluger & DeNisi, 1996; Shute, 2008). Recent studies show that feedback effects on learning outcomes vary as a function of the feedback content. Previous contributions have examined the effects of different types of feedback, including verification information about the student's response (Knowledge of Response or KR: Correct or Incorrect), information pointing out the correct answer (Knowledge of Correct Response or KCR: The correct answer is X), or more detailed information (Elaborated Feedback or EF: explanations, prompts, or examples). While both KR and KCR feedback include simple corrective information, EF encompasses more detailed information, so students have to process its content actively to analyze their errors in light of the information received. Recently, Van der Kleij, Feskens, and Eggen (2015) conducted a meta-analysis on feedback effectiveness within computer-based formative assessments in which EF was found to be more effective than KCR and KR (mean effect sizes were .49, .32, and .05, respectively). These feedback messages have been tested in different learning scenarios varying in the nature of the task, ranging from simple memorization or associative tasks (e.g., learning vocabulary or definitions) to higher-order learning tasks (e.g., text comprehension involving inference processes or knowledge acquisition) (e.g., Golke, Dörfler, & Artelt, 2015; Lee, Lim, & Grabowski, 2009; Lipko-Speed, Dunlosky, & Rawson, 2014; Llorens, Vidal-Abarca, & Cerdán, 2016; Maier, Wolf, & Randler, 2016; Moreno, 2004; Murphy, 2007). Even though the study of feedback has a long history in the field of learning and instruction, only a few studies have investigated the effects of formative feedback on students' text comprehension and learning from texts, which generally entail a high demand for information processing (Dörfler, Golke, & Artelt, 2017). Likewise, research has seldom questioned students' ability and willingness to engage in processing feedback, or how feedback affects learning as a function of individual characteristics such as reading skill or prior knowledge. These are the main goals of the present work.
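The three feedback types can be encoded compactly. The sketch below is illustrative only (the message wording and function name are our own assumptions, not the thesis materials): KR returns bare verification, KCR adds the correct answer, and EF further appends an explanation that the student must actively process.

```python
from enum import Enum

class FeedbackType(Enum):
    KR = "knowledge of response"            # Correct / Incorrect
    KCR = "knowledge of correct response"   # ... the correct answer is X
    EF = "elaborated feedback"              # ... plus an explanation

def build_feedback(ftype: FeedbackType, correct: bool,
                   correct_answer: str, explanation: str) -> str:
    verification = "Correct" if correct else "Incorrect"
    if ftype is FeedbackType.KR:
        return verification
    if ftype is FeedbackType.KCR:
        return f"{verification}. The correct answer is: {correct_answer}."
    # EF: verification, correct answer, and an explanation to be processed
    return (f"{verification}. The correct answer is: {correct_answer}. "
            f"{explanation}")

# Example: an EF message for an incorrect answer (hypothetical item)
print(build_feedback(FeedbackType.EF, False, "B",
                     "Air pressure decreases with altitude."))
```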

Despite the potential effectiveness of formative feedback in improving performance, researchers and professionals in the field of learning seem to assume that students process feedback messages automatically (Corbalan, Kester, & Van Merriënboer, 2009; Gordijn & Nijhof, 2002; Van der Kleij, Timmers, & Eggen, 2011). However, providing feedback messages in digital environments does not mean that students process their content carefully (Aleven, Stahl, Schworm, Fischer, & Wallace, 2003). Once students receive the feedback messages, they make decisions about how to use that information (e.g., Fox, Klein Entink, & Timmers, 2014; Pridemore & Klein, 1991; Timmers & Veldkamp, 2011), evaluating the accuracy of their responses in order to adjust their knowledge to the standard received (e.g., Bangert-Drowns, Kulik, Kulik, & Morgan, 1991). Therefore, students need to understand the feedback information and decide how to process its content, deploying both cognitive processes related to meaning making (e.g., paraphrasing or making inferences) and metacognitive processes that involve monitoring (e.g., comparing their answers with the standard) and self-regulation (e.g., deciding how to solve specific problems).

We propose a framework for feedback processing in question-answering settings based on previous theoretical approaches and empirical findings. Our approach emphasizes the importance of students actively engaging with the feedback information, since they have to make conscious decisions about what information to read and how much effort to invest in order to benefit from feedback. After a question is answered in a digital environment, the computer-based system may provide students with EF messages along with corrective feedback. At that moment, the student has to decide whether the response model needs to be revised in light of the feedback received. In the introduction we present a two-phase model of feedback processing: Phase 1, Response Verification, and Phase 2, Knowledge Revision. During the question-answering process, the student formulates an initial response model held with some degree of certainty. Once the student submits an answer, the corrective feedback allows the student to verify or refute that response model according to whether it is correct (i.e., Response Verification). When EF is provided, the student then has to decide whether to process any additional information after having verified or refuted the response model (i.e., Knowledge Revision). This second phase mainly begins when the student has given an incorrect answer, or a correct answer held with low certainty. After verifying or refuting the response model, the student may skip the additional feedback or, on the contrary, initiate a process of knowledge revision that involves both cognitive and metacognitive operations and that may modify previously-acquired inaccurate or erroneous ideas. At that point, the student compares the response model active in working memory with the feedback information that competes with previously-acquired incomplete or incorrect knowledge. During feedback processing, factors related to the feedback itself (e.g., the presence of corrective feedback), individual characteristics of the students (e.g., reading skills or prior knowledge), or contextual characteristics (e.g., availability of materials) may influence how students use computer-based EF.
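As a rough illustration of the two-phase model, the decision flow might be sketched as follows. This is our own simplification: the model in the thesis is descriptive rather than computational, and the function name and certainty threshold here are hypothetical.

```python
def process_feedback(answer_correct: bool, certainty: float,
                     student_opens_ef: bool, ef_message: str) -> str:
    # Phase 1: Response Verification — corrective feedback confirms or
    # refutes the response model built while answering the question.
    if answer_correct and certainty >= 0.8:
        # High-certainty correct answers rarely trigger further processing.
        return "response model verified; EF likely skipped"

    # Phase 2: Knowledge Revision — mainly initiated after errors or
    # low-certainty correct answers, and only if the student chooses
    # to process the additional elaborated information.
    if student_opens_ef:
        return f"knowledge revision using EF: {ef_message}"
    return "response model refuted or uncertain; EF skipped"
```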

This thesis includes four studies aimed at examining how students interact with computer-based EF delivered along with KR and KCR feedback in question-answering settings, and its effects on text comprehension and learning outcomes, while taking into account students' reading skill and prior knowledge. The first study examined the influence of detailed EF on Secondary school students' question-answering performance and their accuracy in assessing textual relevance. The study also explored to what extent EF was processed relative to a control condition in which non-formative feedback was delivered. Likewise, we explored whether students' reading skill influenced how they engaged in processing EF. Seventy-five 7th and 8th grade students answered a set of 20 questions about two expository texts (10 questions per text). During the question-answering process, students had the text available and were required to highlight the text information they considered relevant to answer each question. Half the students received item-based EF that included information on answer correctness and on their accuracy in selecting question-relevant text information, along with monitoring hints on task-specific strategies; the other half received non-formative feedback (i.e., the control group). Main findings suggested that EF positively influenced students' text comprehension performance and their assessment of textual relevance (i.e., EF reduced the amount of non-relevant text information students assessed as question-relevant). However, EF affected neither question-answering nor text-search processing times, suggesting that EF improved students' efficiency in searching the text and assessing textual relevance. Regarding feedback processing, findings showed that EF increased processing times and the accessing of optionally-delivered feedback information relative to the control condition. Additionally, we found that skilled comprehenders outperformed their less-skilled counterparts. This study sheds light on how complex EF addressing students' question-answering performance and their accuracy in assessing textual relevance may improve the question-answering process in a digital environment. However, further research is necessary to explore the cognitive and metacognitive processes involved in feedback processing, as well as how skilled and less-skilled comprehenders engage in processing such EF.

In the second study, we explicitly sought to explore which components of the EF messages students pay attention to, and which cognitive and metacognitive processes they deploy, when EF is delivered in a digital environment. Building on the results of the first study, we conducted a second study with the same materials. In this case, however, skilled and less-skilled comprehenders in grade 8 were asked to report any thought that came to mind while answering the questions and receiving EF. Participants thought aloud for one text and performed the task in silence for the other. Main findings showed that students paid more attention to finding out whether their answers were correct (i.e., the KR and KCR components) than to the elaborative components of the feedback. Relatedly, students focused their attention on the feedback after providing incorrect responses but paid little or no attention to feedback on questions they had answered correctly. Whereas students actively monitored the accuracy of their responses by comparing their answers with the standard provided, they rarely constructed meaning or self-regulated their use of feedback. Interestingly, feedback triggered affective reactions and attributions as part of the monitoring processes. Individual differences in feedback processing suggested that skilled and less-skilled comprehenders processed the EF quite similarly. These findings suggested that other individual characteristics, such as prior knowledge, may be more relevant to understanding how students engage with and process EF in the context of learning complex knowledge from academic texts.

Based on these findings, we changed our approach to feedback processing and learning from texts, given that students were mainly interested in knowing whether their responses were correct, or what the correct response was, rather than engaging in deep processing. Students may have processed EF narrowly in the previous studies because the EF messages included the correct response (i.e., KCR feedback), which may have discouraged them from processing additional information. Likewise, we had employed a question-answering task designed to assess text comprehension rather than learning from texts. Hence, we designed an experiment to study how the presence of corrective feedback (KCR, KR, or control feedback) may influence students' decisions to use additional EF consisting of explanations when learning conceptual knowledge in physics. Given the topic of the science text, 'Atmospheric pressure and the wind phenomenon', we took into account students' prior knowledge, arguably one of the main individual factors involved in text comprehension and learning from texts. Secondary school students in grade 9 answered a set of questions about an available science text. For each question, they received corrective feedback according to their assigned condition: KCR, KR, or Control (i.e., non-corrective feedback). After receiving this feedback, all students were allowed to access item-based EF that included an explanation of the knowledge assessed. Twenty-four hours later, students completed a final test with new open-ended questions. Main findings showed that students did not use EF very often, especially when the corrective feedback included the correct response (i.e., KCR feedback). Further, KR and KCR feedback made students focus on EF for incorrectly-answered questions. Finally, no differences between conditions were found in learning-task or final-test performance.

Study 4 examined how two external factors (corrective feedback and text availability) may influence grade 9 Secondary school students' decisions to use additional EF, and the consequent impact on learning science knowledge about atmospheric pressure and the wind phenomenon. Based on the previous findings, we ran the experiment without the KCR feedback condition, because students had accessed few EF messages when that type of corrective feedback was delivered. Thus, this experiment included the KR and control feedback groups. Additionally, we manipulated text availability while students answered the questions, so that half the sample performed the task with the text available and the other half without it. The EF messages were administered in the same way as in the previous study: after receiving KR or control feedback, all students were allowed to access item-based EF that included an explanation of the knowledge assessed. Main findings showed that corrective feedback made students focus on EF for incorrectly-answered questions, and that keeping the text unavailable made students use EF more often. However, neither corrective feedback nor text availability influenced students' learning outcomes. Moreover, the relation between students' prior knowledge and learning-task performance was moderated by students' decisions to access EF: frequent EF access reduced the dependence of performance on prior knowledge among students with lower levels of prior knowledge. Nevertheless, both prior knowledge and EF accesses played independent positive roles in the final task administered several hours later.
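For readers unfamiliar with moderation analyses, the sketch below illustrates how such an effect is typically specified: the interaction term between prior knowledge and EF accesses carries the moderation. The variable names and the toy values are entirely hypothetical, not the thesis data or its analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy, hypothetical values for illustration only — not the thesis data.
data = pd.DataFrame({
    "prior_knowledge": [4, 7, 2, 9, 5, 6, 3, 8],  # pretest scores
    "ef_accesses":     [6, 1, 8, 0, 5, 2, 7, 1],  # EF messages opened
    "learning_score":  [7, 9, 6, 9, 7, 8, 6, 9],  # learning-task outcome
})

# The interaction term tests moderation: a negative coefficient would mean
# that frequent EF access weakens the link between prior knowledge and
# performance, consistent with the pattern reported above.
model = smf.ols("learning_score ~ prior_knowledge * ef_accesses",
                data=data).fit()
print(model.params)
```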

These studies contribute to our understanding of the extent to which Secondary school students are willing to invest effort in processing computer-based feedback that includes both corrective and elaborated information in question-answering scenarios. We thus point out the importance of understanding students' willingness to use formative feedback and, consequently, the need to develop digital environments in which students have the opportunity to use feedback on demand, since automatic feedback delivery does not guarantee that students attend to and process its content in the expected manner, especially when the correct answer is provided. The studies presented in this thesis have theoretical and practical implications for researchers and practitioners in psychology and education interested in enhancing students' comprehension and learning by delivering formative feedback in digital environments such as e-textbooks, MOOCs (Massive Open Online Courses), or ITSs (Intelligent Tutoring Systems).

