Research Matters Special Issue 2

Contents

  • Foreword

    Oates, T. (2011). Foreword. Research Matters: A Cambridge Assessment publication, Special Issue 2, 1.

    Comparability is an area beset by assumptions, trammelled by methodological dispute, and regarded, by some, as a bankrupt pursuit. This edition emphasises the need to be clear about definitions of comparability, to specify precise objectives, to be discriminating in respect of choice of method, and to understand the utility and limitations of findings. Our conclusion is that comparability is NOT a bankrupt activity. It is complex, demanding (both theoretically and practically), and frequently produces indicative rather than definitive findings. But it remains a vital part of both management of, and research on, qualifications and assessments.

  • Editorial

    Bramley, T. (2011). Editorial. Research Matters: A Cambridge Assessment publication, Special Issue 2, 2.

    In this Special Issue of Research Matters we present some of Cambridge Assessment’s recent thinking about comparability. The opening article gives an historical overview of comparability concerns, showing how they have been expressed in different political and educational contexts in England over the last 100 years. The second article identifies and defines some widely used terms and shows how different methods of investigating comparability can be related to different definitions. The third article tries to find evidence to support the popular (mis)conception that A levels used to be norm-referenced but became criterion-referenced, and that this change was responsible for the rising pass rate. Another topic of recurring interest is whether, within a qualification type (e.g. GCSE or A level), subjects differ in difficulty. It always seems to have been easier to calculate indices of relative subject difficulty than to explain exactly what they mean. A recent approach has been to use the techniques of Item Response Theory, treating different exam subjects like different questions (items) on a test. The fourth article discusses whether this analogy works. It is an unavoidable fact of comparability research that often there is a need to compare things that are in many ways very different, such as vocational and academic qualifications. A sensible basis for comparison needs to be found, and the fifth article discusses one such basis – ‘returns to qualifications’ – that has so far been relatively rarely used by researchers in awarding bodies. The sixth article discusses some of the conceptual issues involved in linking tests to the Common European Framework of Reference for Languages (CEFR). The seventh article describes some of the issues that arose, and the research undertaken by OCR, in developing guidelines for grading procedures in 2011 that would be capable of achieving comparability. The final article takes an interesting step away from the academic literature on comparability to discuss how comparability issues are presented in the media, and to evaluate the contribution that programmes like “That’ll Teach ’em” can make to our understanding of comparability and standards.

  • 100 years of controversy over standards: an enduring problem

    Elliott, G. (2011). 100 years of controversy over standards: an enduring problem. Research Matters: A Cambridge Assessment publication, Special Issue 2, 3-8.

    This article looks back at the history of comparability in the English assessment system by examining, in detail, the findings of some of the key reports held in Cambridge Assessment’s Group Archive. Of especial interest were the 1911 Consultative Committee report upon Examinations in Secondary Schools and the 1943 Norwood report Curriculum and Examinations in Secondary Schools. When considered alongside other, more recent literature, the insights from these papers provided a window through which to explore the ways in which theories of comparability have developed and different viewpoints have emerged. Key themes explored within the article include the changing, and confusing, use of terminology; the role that the purpose of the qualifications plays in determining comparability issues; and the way that qualifications evolve and subsequently produce new comparability challenges. Some brief, but fascinating, facts and figures about very early comparability studies are also included.

  • A guide to comparability terminology and methods

    Elliott, G. (2011).  A guide to comparability terminology and methods. Research Matters: A Cambridge Assessment publication, Special Issue 2, 9-19.

    Comparability is a complex and challenging area for educational researchers, particularly those who have little experience of it. This article seeks to provide a short and accessible introduction to the area. As such, it includes discussion of the holism of the topic, guidance on how to distinguish between definitions and methods, and a glossary of key terms. Core to the article is a list of different methods which have been used when investigating comparability issues in the educational assessment literature. Each method is briefly described, with examples of contexts and definitions which have been applied. The article also includes a short summary of some of the key themes in the literature and discussion of how these themes relate to one another. The key aim of this paper is to help researchers come to a better shared understanding of the issues which form the interwoven web of concepts that characterises comparability.

  • A level pass rates and the enduring myth of norm-referencing

    Newton, P. (2011). A level pass rates and the enduring myth of norm-referencing. Research Matters: A Cambridge Assessment publication, Special Issue 2, 20-26.

    This article defines norm-referencing (the level of attainment of a particular student in relation to the level of attainment of all other students who sat the same examination); criterion-referencing (identifying exactly what students can and cannot do in each sub-domain of the subject being examined); and attainment-referencing (judging students on the basis of their overall level of attainment in the curriculum area being examined). It argues that A levels have never been norm-referenced or criterion-referenced but have always been attainment-referenced. This is counter to the mythology of A level examining, in which standards were norm-referenced from the 1960s to the middle of the 1980s, after which they became criterion-referenced.

  • Subject difficulty - the analogy with question difficulty

    Bramley, T. (2011). Subject difficulty - the analogy with question difficulty. Research Matters: A Cambridge Assessment publication, Special Issue 2, 27-33. 

    This article explores in depth one particular way of defining and measuring subject difficulty - the 'IRT approach'. First the IRT approach is briefly described. Then the analogy of using the IRT approach when the ‘items’ are examination subjects is explored. Next the task of defining difficulty from first principles is considered, starting from the simplest case of comparing two dichotomous items within a test. Finally, an alternative to the IRT approach, based on producing visual representations of differences in difficulty among just a few (three or four) examinations, is offered as an idea for future exploration.

  • Comparing different types of qualifications: an alternative comparator

    Greatorex, J. (2011). Comparing different types of qualifications: an alternative comparator. Research Matters: A Cambridge Assessment publication, Special Issue 2, 34-41. 

    Returns to qualifications is a statistical measure of how much more is earned on average by people with a particular qualification compared to people with similar demographic characteristics who do not have the qualification. Awarding bodies and the national regulator do not generally use this research method in comparability studies, although it features prominently in government reviews of qualifications.

    This article considers what returns to qualifications comparability research can offer awarding bodies. This comparator enables researchers to make comparisons which cannot be achieved by other methods, for instance, comparisons between different types of qualifications, occupations, sectors and progression routes. It has the advantage that it is more independent than customary comparators used in many comparability studies.

    As with all research approaches, returns to qualifications has strengths and weaknesses, but provides some robust comparability evidence. The strongest comparability evidence is when there is a clear pattern in the results of several studies using different established research methods and independent data sets. Therefore results from returns to qualifications research combined with results from the customary comparators would provide a strong research evidence base.

  • Linking assessments to international frameworks of language proficiency: the Common European Framework of Reference

    Jones, N. (2011). Linking assessments to international frameworks of language proficiency: the Common European Framework of Reference. Research Matters: A Cambridge Assessment publication, Special Issue 2, 42-47.

    Cambridge ESOL, the exam board within Cambridge Assessment which provides English language proficiency tests to 3.5 million candidates a year worldwide, uses the Common European Framework of Reference for Languages (CEFR) as an essential element of how we define and interpret exam levels. Many in the UK who are familiar with UK language qualifications may still be unfamiliar with the CEFR, because most of these qualifications pay little attention to proficiency – how well a GCSE grade C candidate can actually communicate in French, for example, or whether this is comparable with the same grade in German. The issues of comparability which the CEFR addresses are thus effectively different in kind from those that occupy schools exams in the UK, even if the comparisons made – over time, or across subjects – sound on the face of it similar. This article offers a brief introduction to the CEFR for those unfamiliar with it.

  • The challenges for ensuring year-on-year comparability when moving from linear to unitised schemes at GCSE

    Forster, M. (2011). The challenges for ensuring year-on-year comparability when moving from linear to unitised schemes at GCSE. Research Matters: A Cambridge Assessment publication, Special Issue 2, 48-51. 

    In September 2009 new unitised specifications were introduced in England. These specifications were to be assessed in a modular way, throughout the course of study, rather than in a linear way (at the end of the course). At that point in time, OCR had a number of specifications that had been run in a unitised way for a number of years, and so was able to use this information to investigate the impact of unitisation. This meant we were able to look at the impact of resits; the terminal requirement (where 40% of the course had to be assessed at the end of the course); the trade-off between maturity and the bite-size (and hence smaller, more spread-out) nature of the assessments; the variation of unit and subject grades; and the impact of introducing a uniform mark scale, so that marks from different assessment series could be combined fairly. Furthermore, we ‘unitised’ a number of existing linear specifications to look at the impact unitisation might have on the stability of outcomes. This paper summarises the outcome of these investigations.

  • The pitfalls and positives of pop comparability

    Rushton, N., Haigh, M., and Elliott, G. (2011). The pitfalls and positives of pop comparability. Research Matters: A Cambridge Assessment publication, Special Issue 2, 52-56. 

    The media debate about standards in public examinations has become an August ritual. The debate tends to be polarised, with reports of ‘slipping standards’ at odds with those claiming that educational prowess has increased. Some organisations have taken matters into their own hands and have carried out their own studies investigating this. Some of these are similar to academic papers; others are closer in nature to a media campaign. In the same way as ‘pop psychology’ is a term used to describe psychological concepts which attain popularity amongst the wider public, so ‘pop comparability’ can be used to describe the evolution of a lay-person’s view of comparability. Studies, articles or programmes which influence this wider view fall into this category and are often accessed by a much larger audience than academic papers. In this article, five of these studies are considered: Series 1 of the televised social experiment “That’ll Teach ’em”; the Royal Society of Chemistry’s Five-Decade Challenge; the Guardian’s and the Times’ journalists (re)sitting examinations to experience their difficulty; a feature by the BBC Radio 4 programme ‘Today’ (2009), in which students discussed exam papers from 1936; and a book of O level past papers and an associated newspaper article which described students’ experiences of sitting the O level exams.
