Marking essays on screen: An investigation into the reliability of marking extended subjective texts.

  • Authors: Martin Johnson, Rita Nádas, John F. Bell
  • Source: British Journal of Educational Technology, ISSN 0007-1013, Vol. 41, No. 5, 2010, pp. 814-826
  • Language: English
  • Full text not available
  • Abstract
    • There is a growing body of research literature considering how the mode of assessment, whether computer-based or paper-based, might affect candidates' performances. Despite this, only a fairly narrow literature shifts the focus of attention to those making assessment judgements and considers issues of assessor consistency when dealing with extended textual answers in different modes. This research project explored whether the mode in which a set of extended essay texts was accessed and read systematically influenced the assessment judgements made about them. During the project, 12 experienced English literature assessors marked two matched samples of 90 essay exam scripts on screen and on paper. A variety of statistical methods were used to compare the reliability of the essay marks given by the assessors across modes. It was found that mode did not systematically influence marking reliability. The analyses also compared examiners' marks with a gold-standard mark for each essay and found no shifts in the location of the standard of recognised attainment across modes.
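As a purely illustrative sketch of the kind of cross-mode reliability comparison the abstract describes (the record does not specify which statistical methods the authors used), the following Python snippet simulates 12 examiners marking 90 essays in each mode and tests whether their agreement with a gold-standard mark differs between paper and screen. The simulated data, the 0-30 mark scale, and the choice of mean absolute deviation with a paired t-test are all assumptions made for illustration, not the study's actual analysis.

    # Hypothetical sketch only: all data below is simulated, and the
    # reliability measure (mean absolute deviation from a gold-standard
    # mark) and paired t-test are assumed, not taken from the study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_examiners, n_essays = 12, 90           # figures from the abstract
    gold = rng.integers(0, 31, n_essays)     # gold-standard mark per essay (assumed 0-30 scale)

    # Simulated marks: gold standard plus examiner noise in each mode.
    paper  = gold + rng.normal(0, 2.0, (n_examiners, n_essays))
    screen = gold + rng.normal(0, 2.0, (n_examiners, n_essays))

    # One reliability summary per examiner per mode: mean absolute
    # deviation of that examiner's marks from the gold standard.
    mad_paper  = np.abs(paper - gold).mean(axis=1)
    mad_screen = np.abs(screen - gold).mean(axis=1)

    # Paired t-test across the 12 examiners: does mode systematically
    # shift agreement with the gold standard?
    t, p = stats.ttest_rel(mad_paper, mad_screen)
    print(f"paper MAD={mad_paper.mean():.2f}, "
          f"screen MAD={mad_screen.mean():.2f}, p={p:.3f}")

Under this setup, a large p-value would mirror the study's finding of no systematic mode effect on marking reliability.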


Fundación Dialnet

Dialnet Plus

  • Más información sobre Dialnet Plus

Opciones de compartir

Opciones de entorno