Research Matters 26

Contents

  • Research Matters 26 Foreword

    Oates, T. (2018). Foreword. Research Matters: A Cambridge Assessment publication, 26, 1.

    Observing the ebb and flow of contemporary international discussion about curriculum, it is sometimes hard to see scientific discussion rather than political discourse. Our research group is not naïve: political issues are seldom far away from scientific enquiry. We should continue to strive to lay down the principles and practices of the science of measurement, particularly as technologically driven change creeps into day-to-day practices.

  • Research Matters 26 Editorial

    Bramley, T. (2018). Editorial. Research Matters: A Cambridge Assessment publication, 26, 1.

    The first article in this issue of Research Matters, by Rushda Khan and Stuart Shaw, gives a detailed discussion, illustrated by real examples, of some of the issues that need to be considered when preparing on-screen versions of exam questions. I was particularly interested in their observations about the metaphors of working on-screen (files, folders, desktops) and the implications for instructions to examinees such as ‘write’ or ‘type’.

  • To "Click" or to "Choose"? Investigating the language used in on-screen assessment

    Khan, R. and Shaw, S. (2018). To "Click" or to "Choose"? Investigating the language used in on-screen assessment. Research Matters: A Cambridge Assessment publication, 26, 2-9.

    In this article we consider the extent to which the language used in on-screen examination questions ought to differ from that of paper-based exam questions. We argue that the assessment language in screen-based questions should be independent of the mode of delivery and should focus on the relevant and expected test-taker cognitive processing required by the task rather than on the format of the response. We contend that “medium-independent” language improves how well a question measures the knowledge, understanding and/or skills of interest by allowing learners to focus on its content rather than on extraneous, potentially contaminating factors such as technological literacy and mode familiarity. The latter factors are potential sources of construct-irrelevant variance and therefore pose a threat to how scores awarded to a performance on a question are interpreted and used. To illustrate the arguments, examples from the Cambridge online Progression Tests are used.

  • Articulation Work: How do senior examiners construct feedback to encourage both examiner alignment and examiner development?

    Johnson, M. (2018). Articulation Work: How do senior examiners construct feedback to encourage both examiner alignment and examiner development? Research Matters: A Cambridge Assessment publication, 26, 9-14.

    This is a study of the marking feedback given to a group of examiners by their Team Leaders (more senior examiners who oversee and monitor the quality of examiner marking in their team). This feedback has an important quality assurance function but also has a developmental dimension, allowing less senior examiners to gain insights into the thinking of more senior examiners. When looked at from this perspective, marking feedback supports a form of examiner professional learning.

    This study set out to look at this area of examiner practice in detail. To do this, I captured and analysed a set of feedback interactions involving 30 examiners across three Advanced level General Certificate of Education subjects. For my analysis, I used a mixture of learning theory and sociological theory to explore how the feedback was being used and how it attained its dual goals of examiner monitoring and examiner development.

  • Characteristics, uses and rationales of mark-based and grade-based assessment

    Williamson, J. (2018). Characteristics, uses and rationales of mark-based and grade-based assessment. Research Matters: A Cambridge Assessment publication, 26, 15-21.

    Mark-based assessment requires assessors to assign numerical marks to candidates’ work, assisted by a mark scheme. In grade-based approaches, assessors evaluate candidates’ work against grading criteria to decide upon a grade, avoiding marks altogether. This article outlines the characteristics, uses and rationales of the two approaches, focusing particularly on their suitability for assessment in vocational and technical qualifications.

  • Is comparative judgement just a quick form of multiple marking?

    Benton, T. and Gallacher, T. (2018). Is comparative judgement just a quick form of multiple marking? Research Matters: A Cambridge Assessment publication, 26, 22-28.

    This article describes an analysis of GCSE English essays that have been both scored using comparative judgement and marked multiple times. The different methods of scoring are compared in terms of the accuracy with which the resulting scores can predict achievement on a separate set of assessments. The results show that the predictive value of marking increases if multiple marking is used and (perhaps more interestingly) if statistical scaling is applied to the marks. More importantly, the evidence in this article suggests that any advantage of comparative judgement over traditional marking can be explained by the number of judgements made for each essay and by the use of a complex statistical model to combine them. In other words, it is the quantity of data collected about each essay, and how these data are analysed, that matters. The physical act of placing two essays next to each other and deciding which is better does not appear to produce judgements that are in themselves any more valid than those obtained by simply getting the same individuals to mark a set of essays.
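
    The "complex statistical model" mentioned above is not named in the abstract; in comparative judgement work it is typically the Bradley-Terry model, which converts pairwise "which essay is better?" decisions into a scale score per essay. As a hedged illustration only, the sketch below estimates Bradley-Terry strengths from invented judge decisions using the classic Zermelo/Ford iteration.

    ```python
    # Illustrative sketch only: score essays from pairwise comparative
    # judgements with a Bradley-Terry model, fitted by the Zermelo/Ford
    # (minorise-maximise) iteration. The decisions below are invented.
    import numpy as np

    def bradley_terry(n_items, judgements, iters=200):
        """judgements: list of (winner, loser) essay-index pairs.
        Assumes every essay wins and loses at least once."""
        wins = np.zeros(n_items)              # W_i: total wins per essay
        met = np.zeros((n_items, n_items))    # n_ij: times i faced j
        for winner, loser in judgements:
            wins[winner] += 1
            met[winner, loser] += 1
            met[loser, winner] += 1
        p = np.ones(n_items)                  # initial strengths
        for _ in range(iters):
            # MM update: p_i = W_i / sum_j [ n_ij / (p_i + p_j) ]
            denom = (met / (p[:, None] + p[None, :])).sum(axis=1)
            p = wins / denom
            p /= p.sum()                      # pin down the scale
        return np.log(p)                      # log-strengths as scores

    # Seven judge decisions over four essays, each pair = (winner, loser)
    decisions = [(0, 1), (2, 0), (1, 2), (3, 0), (3, 1), (2, 3), (0, 1)]
    print(bradley_terry(4, decisions))
    ```

    Seen this way, the article's conclusion becomes concrete: the benefit comes from collecting several judgements per essay and pooling them through a model, not from the act of pairwise comparison itself.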

  • How have students and schools performed on the Progress 8 performance measure?

    Gill, T. (2018). How have students and schools performed on the Progress 8 performance measure? Research Matters: A Cambridge Assessment publication, 26, 28-36.

    The new league table measures (Attainment 8 and Progress 8) are based on performance in a student’s best eight subjects at GCSE (or equivalent). One criticism of the previous measures was that they penalised schools with a low-attaining intake. As Progress 8 is a value-added measure, it already accounts for students’ prior attainment and should, in theory, no longer penalise these schools. The purpose of this research was to delve deeper into the relationship between Progress 8 scores and various student- and school-level factors. In particular, multilevel regression modelling was undertaken to infer which factors were most important in determining scores at student level. The results showed that various groups of students were predicted to achieve higher Progress 8 scores, including girls, less deprived students, students without SEN, and students in schools with a higher-performing intake. At the school level, higher Progress 8 scores were found amongst schools with higher-performing intakes. This suggests that one of the main aims of the new measures (levelling the playing field) has not been completely achieved.
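
    As a rough, hedged illustration of the kind of multilevel model described (students at level 1 nested in schools at level 2), the sketch below fits a random-intercept model to simulated data with statsmodels. The variable names (girl, deprived, sen) and effect sizes are invented stand-ins, not the study's actual data or specification.

    ```python
    # Hedged sketch: two-level model of Progress 8 scores with a random
    # intercept per school. All data below are simulated stand-ins.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_schools, per_school = 40, 50
    n = n_schools * per_school
    school = np.repeat(np.arange(n_schools), per_school)
    df = pd.DataFrame({
        "school": school,
        "girl": rng.integers(0, 2, n),      # invented binary predictors
        "deprived": rng.integers(0, 2, n),
        "sen": rng.integers(0, 2, n),
    })
    # Simulate scores with student-level effects plus a school effect
    school_effect = rng.normal(0.0, 0.3, n_schools)[school]
    df["p8"] = (0.2 * df["girl"] - 0.3 * df["deprived"] - 0.4 * df["sen"]
                + school_effect + rng.normal(0.0, 1.0, n))

    # Fixed effects for student factors; random intercept for school
    model = smf.mixedlm("p8 ~ girl + deprived + sen", df, groups=df["school"])
    print(model.fit().summary())
    ```

    The fixed-effect coefficients play the role of the student-level factors reported in the article, while the estimated between-school variance captures differences between schools that those factors do not explain.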

  • Research News

    Barden, K. (2018). Research News. Research Matters: A Cambridge Assessment publication, 26, 38-39.

    A summary of recent conferences and seminars, statistics reports, Data Bytes and research articles published since the last issue of Research Matters.
