Research Matters 05

  • Research Matters 5 Foreword

    Oates, T. (2008). Foreword. Research Matters: A Cambridge Assessment publication, 5, 1.

This issue of Research Matters is testimony to the diversity of the education system in England – not ‘social diversity’, but diversity in assessment and qualifications. Andy Green, in his seminal book Education and State Formation (1990), compared England, Germany, France and the USA as a means of understanding why English education is so diverse in its form and content.

  • Research Matters 5 Editorial

    Green, S. (2008). Editorial. Research Matters: A Cambridge Assessment publication, 5, 1.

    A main theme of this issue is the psychology of assessment and the way that judgements are made. Suto, Crisp and Greatorex have worked on a number of linked research studies considering the marking process from a range of perspectives related to human judgement and decision making. In their article, they provide an overview of their research in the context of GCSEs and A levels.

  • Independent examination boards and the start of a national system

    Watts, A. (2008). Independent examination boards and the start of a national system. Research Matters: A Cambridge Assessment publication, 5, 2-6.

A presentation given as part of the programme for the 2007 Annual Alumni Weekend of the University of Cambridge, describing how public examinations were first introduced in England for secondary-age students and charting the development of the system.

  • Investigating the judgemental marking process: an overview of our recent research

    Suto, I., Crisp, V. and Greatorex, J. (2008). Investigating the judgemental marking process: an overview of our recent research. Research Matters: A Cambridge Assessment publication, 5, 6-9.

    This article gives an overview of a number of linked studies which explored the process of marking GCSE and A-level examination questions from a number of different angles. Key aims of these studies were to provide insights into how examiner training and marking accuracy could be improved, as well as reasoned justifications for how item types could be assigned to different groups of examiners in the future. The research studies combined several approaches, exploring both the information that people attend to when marking items and the sequences of mental operations involved. Examples include studies that used the think-aloud method to identify the cognitive marking strategies entailed in marking student responses, or to explore the broader socio-cognitive influences on the marking process. Other examples explored the relationship between cognitive marking strategy complexity and marking accuracy.

    This article brings together the findings from these various related studies to summarise the influences and processes that have been identified as important to the marking process from the research conducted so far.

  • An exploration of self-confidence and insight into marking accuracy among GCSE maths and physics markers

    Nadas, R. and Suto, I. (2008). An exploration of self-confidence and insight into marking accuracy among GCSE maths and physics markers. Research Matters: A Cambridge Assessment publication, 5, 9-15.

    A considerable volume of literature in education and occupational research investigates issues in self-confidence and insight. However, GCSE markers’ perceptions of their marking performance have not, to our knowledge, been examined. Exploring markers’ perceptions is important for several reasons. First, if markers’ estimates of their own performance prove to be accurate, then this information could be used by Awarding Bodies in standardisation procedures to identify and discuss examination questions that markers have difficulties with. If, however, markers’ insight proves to be unreliable and unrelated to their actual marking accuracy, then their feedback on ‘problem areas’ could be misleading: for example, when conducting standardisation procedures, Principal Examiners might find themselves focussing on the ‘wrong’ questions. Secondly, investigating whether self-confidence and insight change or become more accurate with more marking practice or more feedback could inform marker training practices.

    In this article, we present research which explored GCSE markers’ perceptions of their own marking accuracy. Markers’ levels of self-confidence and insight, and possible changes in these measures over the course of the marking process, were investigated. The term ‘self-confidence’ here denotes markers’ post-marking estimates of how accurately they thought they had marked a sample of questions; ‘insight’ refers to the relationship between markers’ actual marking accuracy and estimated accuracy, indicating how precise their estimates were. Overall, we found very different patterns of self-confidence and insight for maths and physics markers.
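One plausible way to quantify ‘insight’ as defined above is the correlation between markers’ estimated and actual accuracy. The sketch below is illustrative only, using hypothetical data and Pearson’s r as one possible measure; it is not necessarily the measure used in the study.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: each marker's estimated vs. actual accuracy (%).
# A coefficient near 1 would indicate good insight; near 0, estimates
# unrelated to actual performance.
estimated = [90, 85, 70, 95, 80]
actual = [88, 80, 75, 90, 78]
print(round(pearson(estimated, actual), 2))  # → 0.94
```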

  • The influence of performance data on awarders' estimates in Angoff awarding meetings

    Novakovic, N. (2008). The influence of performance data on awarders' estimates in Angoff awarding meetings. Research Matters: A Cambridge Assessment publication, 5, 15-19.

The Angoff method is one of the most widely used procedures for computing cut scores in both vocational and general education settings. In the Angoff standard setting procedure, a panel of judges with subject expertise is asked to individually estimate, for each test item, the percentage of minimally competent or borderline candidates (MCCs) who would be able to answer that item correctly.

    The aim of this study was to investigate the relative effect of discussion and performance data on: (1) the awarders’ expectations on how MCCs might perform on a test, (2) the magnitude of change in the awarders’ estimates between sessions and (3) the awarders’ rank-ordering of items in terms of their relative difficulty.
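The Angoff computation described above can be sketched as follows. This is a minimal illustration with hypothetical judges and ratings, not an awarding body's operational procedure; the cut score is taken here, as is common, as the sum over items of the mean judge estimate.

```python
def angoff_cut_score(estimates):
    """Compute an Angoff cut score.

    estimates: one list per judge, each giving the estimated probability
    that a minimally competent candidate answers each item correctly.
    Returns the sum over items of the mean estimate across judges.
    """
    n_judges = len(estimates)
    n_items = len(estimates[0])
    return sum(
        sum(judge[i] for judge in estimates) / n_judges
        for i in range(n_items)
    )

# Three hypothetical judges rating four items:
ratings = [
    [0.8, 0.6, 0.5, 0.9],
    [0.7, 0.5, 0.4, 0.8],
    [0.9, 0.6, 0.6, 0.9],
]
print(round(angoff_cut_score(ratings), 2))  # → 2.73 (out of 4 marks)
```

In an awarding meeting the interest would then be in how these per-item estimates shift between sessions, after discussion and after performance data are shown.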

  • A review of literature regarding the validity of coursework and the rationale for its inclusion in the GCSE

    Crisp, V. (2008). A review of literature regarding the validity of coursework and the rationale for its inclusion in the GCSE. Research Matters: A Cambridge Assessment publication, 5, 20-24.

Coursework was included in many GCSEs from their introduction in 1988 to increase the validity of assessment by providing wider evidence of student work and to enhance pupil learning by valuing skills such as critical thinking and independent learning (SEC, 1985). As the Secondary Examinations Council put it, ‘above all, the assessment of coursework can correspond much more closely to the scale of values in this wider world, where the individual is judged as much by his or her style of working and ability to cooperate with colleagues as by the eventual product’ (SEC, 1985, p. 6).

The validity and reliability of the assessment of GCSE coursework have come under much discussion since its introduction, with the focus of concerns changing over time. At the inception of the GCSE, the main threats anticipated were possible unreliability of teacher marking, possible cheating and concern that girls were favoured (see QCA, 2006a). Now, concerns about consistency across similar subjects, fairness and authenticity (including the issues of internet plagiarism and excessive assistance from others), tasks becoming overly structured (and hence reducing learning benefits), along with the overall burden on students across subjects, have led to a review of coursework by the Qualifications and Curriculum Authority (QCA).

    This article reviews relevant literature using the stages of assessment described by Crooks, Kane and Cohen (1996) to structure discussion of possible improvements to the validity of assessment as a result of including a coursework element within GCSE specifications and possible threats to validity associated with coursework.

  • School-based assessment in international practice

    Johnson, M. and Burdett, N. (2008). School-based assessment in international practice. Research Matters: A Cambridge Assessment publication, 5, 24-29.

    The term ‘school-based assessment’ (SBA) can conjure up diverse and not necessarily synonymous meanings which often include forms of ongoing and continual classroom assessment of a formative nature. This article attempts to clarify how, and why, SBA has been successfully introduced in various contexts and the importance of the context in its success or otherwise. It reviews SBA research literature and lists some of the advantages of SBA, some of the reservations about using SBA, and some principles about when and why SBA should be used.

  • Using simulated data to model the effect of inter-marker correlation on classification consistency

    Gill, T. and Bramley, T. (2008). Using simulated data to model the effect of inter-marker correlation on classification consistency. Research Matters: A Cambridge Assessment publication, 5, 29-36.

The marking of exam papers is never going to be 100% reliable unless all exams consist entirely of multiple-choice or other completely objective questions. Different opinions on the quality of the work or different interpretations of the mark schemes create the potential for candidates to receive a different mark depending on which examiner marks their paper. Of more concern is the potential for candidates to receive a different grade from a different examiner. The purpose of this study was to use simulated data to estimate the extent to which examinees might get a different grade under: (i) different levels of correlation between markers and (ii) different grade bandwidths.
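The kind of simulation described can be sketched as below, under an illustrative model (not necessarily the authors’): each awarded mark is a shared true score plus independent marker error, so the inter-marker correlation equals the true-score share of the total variance. The mark scale, mean and standard deviation are assumptions chosen for the example.

```python
import random

def classification_consistency(r, bandwidth, n=20000, seed=1):
    """Estimate the probability that two markers assign the same grade,
    given inter-marker correlation r and a grade bandwidth in marks.

    Model (illustrative assumption): mark = shared true score + independent
    marker error, so corr(m1, m2) = var_true / var_total = r.
    """
    rng = random.Random(seed)
    sd_total = 10.0                           # assumed SD of awarded marks
    sd_true = sd_total * r ** 0.5
    sd_err = (sd_total ** 2 - sd_true ** 2) ** 0.5
    agree = 0
    for _ in range(n):
        true = rng.gauss(50, sd_true)         # assumed mean mark of 50
        m1 = true + rng.gauss(0, sd_err)
        m2 = true + rng.gauss(0, sd_err)
        # Grades are fixed-width mark bands of the given bandwidth.
        if int(m1 // bandwidth) == int(m2 // bandwidth):
            agree += 1
    return agree / n

# Higher inter-marker correlation and wider grade bands both raise
# the chance that two markers place a candidate in the same grade:
print(classification_consistency(r=0.95, bandwidth=10))
print(classification_consistency(r=0.80, bandwidth=5))
```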

  • Statistical reports: Patterns of GCSE and A-level uptake

    Emery, J. and Vidal Rodeiro, C. L. (2008). Statistical reports: Patterns of GCSE and A-level uptake. Research Matters: A Cambridge Assessment publication, 5, 36-38.

    Two new statistical reports have been added to the ‘Statistics Reports’ series on the Cambridge Assessment website (http://www.cambridgeassessment.org.uk/ca/Our_Services/Research/Statistical_Reports):

-    Statistics Report Series No. 4: Uptake of GCSE subjects 2000–2006
-    Statistics Report Series No. 5: Uptake of GCE A-Level subjects in England 2006

    Data for these reports were extracted from the 16+/18+ databases. These databases are compiled for the Department for Children, Schools and Families from data supplied by all the awarding bodies in England. They contain background details and national examination data for all candidates who have their 16th, 17th and 18th birthdays in a particular school year. Candidates are allocated a unique number that remains the same throughout their Key Stage tests, allowing matching of examination data for longitudinal investigations. Records are present only if the candidate has sat an examination in a particular subject, not just attended classes. This brief article outlines some of the results from both reports.

  • The OCR Operational Research Team

    Gray, E. (2008). The OCR Operational Research Team. Research Matters: A Cambridge Assessment publication, 5, 38-39.

To those within OCR (Oxford, Cambridge and RSA Examinations), the Operational Research Team (ORT) provides a constant source of advice, data and statistical support on all technical matters; to those outside OCR, its work is largely unknown. This short sketch is an introduction to the main areas of interest of the team and its involvement in the life of OCR.

  • Research News

    The Research Division (2008). Research News. Research Matters: A Cambridge Assessment publication, 5, 39-40.

    A summary of recent conferences and seminars, and research articles published since the last issue of Research Matters.
