Abstract of Towards automatic recognition of irregular, short-open answers in Fill-in-the-blank tests

Sergio Alejandro Rojas Barbosa

  • Assessment of student knowledge in Learning Management Systems such as Moodle is mostly conducted using close-ended questions (e.g. multiple-choice) whose answers are straightforward to grade without human intervention. FILL-IN-THE-BLANK tests are usually more challenging, since they require test-takers to recall concepts and associations not available in the statement of the question itself (no choices or hints are given). Automatic assessment of the latter currently requires the test-taker to give a verbatim answer, that is, one free of spelling or typographical mistakes. In this paper, we consider an adapted version of a classical text-matching algorithm that may prevent wrong grading in automatic assessment of FILL-IN-THE-BLANK questions whenever irregular (similar but not exact) answers occur due to such errors. The technique was tested in two scenarios. In the first, misspelled single-word answers to an Internet security questionnaire were correctly recognized within a two-letter editing tolerance (achieving 99% accuracy). The second involved short-open answers to computer programming quizzes (i.e. small blocks of code) requiring a structure that conforms to the syntactic rules of the programming language. Twenty-one real-world answers written by students taking a computer programming course were assessed by the method. This assessment addressed the lack of precision caused by programmer-style artifacts (such as unfamiliar variable or function nomenclature) and used an admissible tolerance of up to 20% letter-level typos. These scores were satisfactorily corroborated by a human expert. Additional findings and potential enhancements to the technique are also discussed.
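The abstract does not name the "classical text-matching algorithm" it adapts, but the tolerances it reports (up to two letter edits, or up to 20% letter-level typos) are naturally expressed as Levenshtein edit distance. The sketch below is an illustration of that general idea, not the paper's actual method; the function names and the exact acceptance rule (the larger of an absolute and a relative tolerance) are assumptions for illustration.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


def is_acceptable(answer: str, key: str,
                  abs_tol: int = 2, rel_tol: float = 0.20) -> bool:
    """Hypothetical grading rule: accept an irregular answer if it is within
    abs_tol edits of the answer key, or within rel_tol of the key's length."""
    d = levenshtein(answer.strip().lower(), key.strip().lower())
    return d <= max(abs_tol, int(rel_tol * len(key)))
```

For example, a misspelled single-word answer such as "frewall" would still match the key "firewall" (one edit), while an unrelated answer would be rejected. The second scenario in the paper additionally normalizes programmer-style artifacts such as variable names before comparison, which this sketch does not attempt.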

