Research Matters 10

Contents

  • Research Matters 10 - Foreword

    Oates, T. (2010). Foreword. Research Matters: A Cambridge Assessment publication, 10, 1.

    While the articles in this edition of Research Matters again engage with highly analytic approaches to understanding the behaviour of specific assessments, a key issue shines through in two pieces (see Beth Black’s article and the one by Irenka Suto and Stuart Shaw).

  • Research Matters 10 - Editorial

    Green, S. (2010). Editorial. Research Matters: A Cambridge Assessment publication, 10, 1.

    This issue covers a wide range of themes, including e-assessment, Critical Thinking, quality assurance and methods for studying comparability in vocational contexts. The variety illustrates the depth and breadth of the research currently under way in relation to processes, technological developments and the assessment of new qualifications.

  • "It's not like teaching other subjects" - the challenges of introducing Critical Thinking AS level in England

    Black, B. (2010). "It's not like teaching other subjects" - the challenges of introducing Critical Thinking AS level in England. Research Matters: A Cambridge Assessment publication, 10, 2-8.

    This article focuses on the introduction of the Critical Thinking AS level into schools in England. In 2001, 130 schools entered just over 2,000 candidates in total for the whole AS level. By 2009, this had grown to over 1,000 schools entering more than 22,000 candidates. However, candidate ‘success’ in Critical Thinking (in terms of the proportion of grade As and passes) remained relatively low. This article explores three potential explanations for this.

  • Response to Cambridge Assessment's seminar on Critical Thinking, February 2010

    Chislett, J. (2010). Response to Cambridge Assessment's seminar on Critical Thinking, February 2010. Research Matters: A Cambridge Assessment publication, 10, 9-10.

    In this article an experienced teacher of Critical Thinking discusses whether Critical Thinking could, or should, be ‘embedded’ into other subjects rather than taught and assessed as a subject in its own right.

  • A tricky task for teachers: assessing pre-university students' research reports

    Suto, I. and Shaw, S. (2010). A tricky task for teachers: assessing pre-university students' research reports. Research Matters: A Cambridge Assessment publication, 10, 10-16.

    In the UK and internationally, many students preparing for university are given the challenge of conducting independent research and writing up a report of around 4,000 to 5,000 words. Such research activities provide students with opportunities to investigate a specialist area of study in greater depth, to cross boundaries with an inter-disciplinary enquiry, or to explore a novel non-school subject such as archaeology, cosmology or anthropology. In this study, we explored the feasibility of applying a single mark scheme to research reports covering diverse topics in order to reward generic research skills. Our aim was to investigate the reliability with which teachers can mark diverse research reports using four generic assessment objectives. We also investigated teachers’ views on applying generic mark schemes, particularly when marking reports on unfamiliar topics. Our analyses indicated that marking reliability was good, though, as for almost all qualifications, imperfect. Possible explanations for marking difficulty related to subject knowledge, the clarity of student thought, and the overall level of student performance.

  • Towards an understanding of the impact of annotations on returned exam scripts

    Johnson, M. and Shaw, S. (2010). Towards an understanding of the impact of annotations on returned exam scripts. Research Matters: A Cambridge Assessment publication, 10, 16-21.

    There has been little empirical study of practices surrounding scripts returned to centres. Returned scripts often include information from examiners about the performance being assessed: as well as the total score given for the performance, additional information is carried in the annotations left on the script by the marking examiner.

    Examiners’ annotations have been the subject of a number of research studies (Crisp and Johnson, 2007; Johnson and Shaw, 2008; Johnson and Nádas, 2009), but as far as we know there has been no research into how this information is used by centres or candidates, or whether it has any influence on future teaching and learning. This study set out to look at how teachers and students interact with examiners’ annotations on scripts.

    This study used survey and interview methods to explore:
     
    1. How do teachers and centres use annotations?
    2. What is the scale of such use?
    3. What importance is attached to the annotations?
    4. What factors might influence the interpretation of the annotations?

  • Must examiners meet in order to standardise their marking? An experiment with new and experienced examiners of GCE AS Psychology

    Raikes, N., Fidler, J. and Gill, T. (2010). Must examiners meet in order to standardise their marking? An experiment with new and experienced examiners of GCE AS Psychology. Research Matters: A Cambridge Assessment publication, 10, 21-27.

    When high-stakes examinations are marked by a panel of examiners, the examiners must be standardised so that candidates are not advantaged or disadvantaged according to which examiner marks their work.

    It is common practice for Awarding Bodies’ standardisation processes to include a “Standardisation” or “Co-ordination” meeting, at which all examiners are briefed by the Principal Examiner and discuss the application of the mark scheme in relation to specific examples of candidates’ work. However, research into the effectiveness of standardisation meetings has cast doubt on their usefulness, at least for experienced examiners.

    In the present study we addressed the following research questions:

    1. What is the effect on marking accuracy of including a face-to-face meeting as part of an examiner standardisation process?
    2. How does the effect on marking accuracy of a face-to-face meeting vary with the type of question being marked (short-answer or essay) and the level of experience of the examiners?
    3. To what extent do examiners carry forward standardisation on one set of questions to a different but very similar set of questions?

  • A review of literature on item-level marker agreement: implications for on-screen marking monitoring research and practice

    Curcin, M. (2010). A review of literature on item-level marker agreement: implications for on-screen marking monitoring research and practice. Research Matters: A Cambridge Assessment publication, 10, 27-33.

    This review article focuses mainly on the literature relevant to the inter-marker agreement aspect of marking reliability in the context of on-screen marking. The increasing use of on-screen marking in place of paper-based marking presents new possibilities for monitoring marking and ensuring higher levels of agreement. It also raises questions about the most efficient and beneficial use of the marker agreement information routinely collected in this process, both in monitoring practice and in research.

  • Why use computer-based assessment in education? A literature review

    Haigh, M. (2010). Why use computer-based assessment in education? A literature review. Research Matters: A Cambridge Assessment publication, 10, 33-40.

    The aim of this literature review is to examine the evidence behind the claims made for the shift towards computer-based assessment (CBA) in educational settings. The examination uncovers a number of areas in which these claims lack supporting evidence, and the resulting discussion provides the basis for suggested further research alongside practical considerations for the application of CBA.

  • Is CRAS a suitable tool for comparing specification demands from vocational qualifications?

    Greatorex, J. and Rushton, N. (2010). Is CRAS a suitable tool for comparing specification demands from vocational qualifications? Research Matters: A Cambridge Assessment publication, 10, 40-44.

    The aim of the research was to ascertain whether a framework of cognitive demands, known as CRAS, is a suitable tool for comparing the demands of vocational qualifications. CRAS was developed for use with academic examinations and may not tap into the variety of demands which vocational qualifications place on candidates. Data were taken from a series of comparability studies by awarding bodies and the national regulator. The data were the frameworks (often questionnaires) used to compare qualifications in these studies. All frameworks were mapped to CRAS. It was found that most aspects of the various frameworks mapped to an aspect of CRAS. However, there were demands which did not map to CRAS; these were mostly affective and interpersonal demands, such as working in a team. Affective and interpersonal domains are significant in vocational qualifications; therefore, using only CRAS to compare vocational qualifications is likely to omit key demands from the comparison.

  • Developing and piloting a framework for the validation of A levels

    Shaw, S. and Crisp, V. (2010). Developing and piloting a framework for the validation of A levels. Research Matters: A Cambridge Assessment publication, 10, 44-47.

    Validity is a key principle of assessment, a central aspect of which relates to whether the interpretations and uses of test scores are appropriate and meaningful (Kane, 2006). For this to be the case, various criteria must be met, such as good representation of the intended constructs and avoidance of construct-irrelevant variance. Additionally, some conceptualisations of validity include consideration of the consequences that may result from the assessment, such as effects on classroom practice. The kinds of evidence needed may vary depending on the intended uses of assessment outcomes. For example, if assessment results are designed to inform decisions about future study or employment, it is important to ascertain that the qualification acts as suitable preparation for that study or employment, and to some extent predicts likely success.

    This article reports briefly on the development, piloting and revision of a framework and methodology for validating general academic qualifications such as A levels. The development drew on previously proposed frameworks for validation from the literature, and the resulting framework and set of methods were piloted with International A level Geography. This led to revisions to the framework, which was then used with A level Physics.

  • Statistical Reports

    The Statistics Team (2010). Statistical Reports. Research Matters: A Cambridge Assessment publication, 10, 47.

    The ongoing Statistics Reports Series provides statistical summaries of various aspects of the English examination system, such as trends in pupil uptake and attainment, qualifications choice, subject combinations and subject provision in schools. This article summarises the most recent additions to the series.

  • Research News

    The Research Division (2010). Research News. Research Matters: A Cambridge Assessment publication, 10, 48.

    A summary of recent conferences and seminars, and research articles published since the last issue of Research Matters.
