Victoria Elliott (Oxford University Centre for Educational Assessment and Warwick Institute of Education)
Dr Talia Isaacs (University of Bristol)
06 Sep 2012
Free to attend
Examiner judgements are an essential part of the assessment process. Speaking at a recent seminar hosted by the Cambridge Assessment Network, guest speakers Victoria Elliott (Oxford University Centre for Educational Assessment and Warwick Institute of Education) and Dr Talia Isaacs (University of Bristol) explained how examiners in different contexts arrive at their judgements, and what variables have an influence on the judgements that are made.
Examining A Level essays
Victoria Elliott explained that the A Level system assumes examiners make their decisions in a rule-based, logical way. However, given the time constraints under which examiners work and the amount of information they must consider, she argued that the cognitive process is unlikely to be as rational as it is conceived to be.
In an in-depth study of examiners' training and decision-making, Victoria explored how examiners reach judgements given the volume of information and the limited time available. The study showed that a wide range of cognitive processes are used to varying degrees by different examiners, and at different times within and between scripts, according to whichever method is most useful, and potentially most economical, at any one moment.
Influences on rater judgements of L2 (second language) speech
Dr Talia Isaacs explored the effects of individual differences in rater cognitive variables (musical ability, phonological memory, and attention control) on raters' judgements of L2 comprehensibility (ease of understanding), accentedness (degree of foreign accent), and fluency (speed of delivery and smoothness). If these cognitive variables are found to exert a measurable influence on the scores that raters assign, she said, then this could pose a threat to the validity of the assessments.
Dr Isaacs explained how a greater understanding of the linguistic dimensions underlying listeners' L2 comprehensibility ratings can clarify the construct at different levels of ability, and she discussed the implications for rating scale validation and for rater training in high-stakes assessment settings.