Heidi Diefes-Dux, Judith S. Zawojewski, Margret A. Hjalmarson
Open-ended problems are an important part of the engineering curriculum because, when well designed, they closely resemble problem-solving situations students will encounter as professional engineers. However, valid and reliable evaluation of student performance on open-ended problems is a challenge given that numerous reasonable responses are likely to exist for a given problem and multiple instructors may be evaluating student work. The purpose of this paper is to present a concrete example of how educational design research, a models-and-modeling perspective from mathematics education, and multi-tiered teaching experiments are brought to bear in the design of valid and reliable evaluation tools for scoring team responses to complex problem-solving activities used in a large first-year engineering course in which teaching assistants evaluate student work. This ongoing design study demonstrates how designing a package of evaluation tools (including rubrics, task-specific supports, and scorer training) based on the aforementioned educational research methods supports (1) sustained fidelity to engineering expert-identified characteristics of high performance across iterations of change to improve reliability, and (2) the implementation of planned iterations of the evaluation tools based on systematically collected data.