Research Matters 14

Contents

  • Research Matters 14 - Foreword

    Oates, T. (2012). Foreword. Research Matters: A Cambridge Assessment publication, 14, 1.

    One of the major problems in educational research is the time lag involved in evaluation. An innovation is conceived, design work is completed, implementation is undertaken, and evaluation starts. But evaluation takes time. Cambridge Assessment believes that we should continue to have an ethically based commitment to timely and incisive evaluation, and that we should work to reduce the frequency of unnecessary fundamental change to the form and content of qualifications.

  • Research Matters 14 - Editorial

    Green, S. (2012). Editorial. Research Matters: A Cambridge Assessment publication, 14, 1.

    Most of the articles in this issue address matters relating to comparability, covering a range of contexts from methodology to modularisation and highlighting the challenges posed in this contentious field.

  • An intra-board comparison of the effects of using pseudo candidates' scripts and real candidates' scripts in a rank-ordering exercise at syllabus level

    Yim, L. (2012). An intra-board comparison of the effects of using pseudo candidates' scripts and real candidates' scripts in a rank-ordering exercise at syllabus level. Research Matters: A Cambridge Assessment publication, 14, 2-9.

    This study compared the use of scripts from 'pseudo-candidates' (a set of scripts from components of an assessment taken by different candidates) with scripts from real candidates (components all taken by the same candidate) to see whether this affected the outcome of a comparability exercise using the rank-ordering method. The scripts from the real and pseudo-candidates were selected or created such that they had very similar profiles of marks (scores) across the components of the assessment.

  • The effect of scripts’ profiles upon comparability judgements

    Rushton, N. (2012). The effect of scripts’ profiles upon comparability judgements. Research Matters: A Cambridge Assessment publication, 14, 10-17.

    Comparability studies often involve experts judging students’ scripts to decide which is better. Sometimes this involves experts judging more than one paper/component from a student. Students frequently achieve a higher level in one paper/component than in another. The scripts from these students are described as having an uneven profile.

    Scripts with uneven profiles have been identified as a cause of difficulty for those making judgements in comparability studies. This study investigated whether uneven-profile scripts were harder to compare and whether they were judged more harshly.

    It found that the profile of a script affected the difficulty of making comparisons only in English Literature, where the effect varied by judge. Uneven-profile scripts were slightly more likely to win their comparisons, but this also depended on the judge. These findings suggest that the outcome of comparability studies could be affected by the presence of uneven-profile scripts.

  • Monitoring the difficulty of tiered GCSE components using threshold marks for grade C

    Dhawan, D. (2012). Monitoring the difficulty of tiered GCSE components using threshold marks for grade C. Research Matters: A Cambridge Assessment publication, 14, 18-21.

    The study reported in this article aimed to explore simple ways of monitoring the relative difficulty of tiered components by considering the difference between the grade C boundaries on each tier. If the difficulty of the question papers is as intended, and the grade boundaries have been set correctly, the C boundary mark on the Foundation paper will be higher, as a proportion of the paper total, than the C boundary mark on the Higher paper. The difference between the C boundaries is easy to calculate (a brief hypothetical illustration is given below) and, when combined with other indicators, might prove useful for routine monitoring of the technical qualities of assessments. In the present study these differences were calculated for two examination sessions, and the findings are presented here.

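    The following is a minimal sketch of the kind of calculation described in this abstract, using entirely hypothetical boundary marks and paper totals rather than figures from the study. It simply expresses each tier's grade C boundary as a proportion of that paper's maximum mark and takes the difference.

        # Hypothetical illustration only: the boundary marks and paper totals
        # below are assumed values, not data from the study.
        foundation_c_boundary, foundation_max = 52, 80
        higher_c_boundary, higher_max = 24, 80

        # Express each grade C boundary as a proportion of its paper total.
        foundation_prop = foundation_c_boundary / foundation_max   # 0.65
        higher_prop = higher_c_boundary / higher_max               # 0.30

        # If the papers are pitched as intended, this difference is positive:
        # the Foundation C boundary sits proportionally higher than the Higher one.
        difference = foundation_prop - higher_prop
        print(f"C-boundary difference (Foundation - Higher): {difference:.2f}")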

  • An investigation on the impact of GCSE modularisation on A level uptake and performance

    Vidal Rodeiro, C. L. (2012). An investigation on the impact of GCSE modularisation on A level uptake and performance. Research Matters: A Cambridge Assessment publication, 14, 21-28.

    The modularisation of GCSEs has caused considerable controversy since its introduction. Firstly, there are those who believe that modular assessment could lead to a lack of coherence and fragmentation of learning, as students have little time for reflection, skill development and knowledge consolidation; secondly, the increased assessment load means students could spend more time revising for the modular exams rather than simply benefiting from the learning experience; and thirdly, there is the view that re-sitting modules may be lowering examination standards, with more time spent ‘teaching to the test’ at the expense of deeper learning. In light of these issues, teachers expressed concerns about modular students being less well equipped for the transition from GCSE to further study (e.g., A levels) than their linear counterparts. This study set out to investigate whether modular courses are good preparation for further study. The focus was on the impact of the GCSE assessment route on uptake and performance in three A levels: English, mathematics and ICT.

  • Piloting a method for comparing the demand of vocational qualifications with general qualifications

    Greatorex, J. and Shiell, H. (2012). Piloting a method for comparing the demand of vocational qualifications with general qualifications. Research Matters: A Cambridge Assessment publication, 14, 29-38.

    Frequently, researchers are tasked with comparing the demand of vocational and general qualifications, and methods of comparison often rely on human judgement. This research therefore aimed to develop an instrument for comparing vocational and general qualifications, to pilot the instrument, and to explore how experts judge demand. Reading a range of OCR (Oxford Cambridge and Royal Society of Arts Examinations) level 2 specifications showed that they included knowledge, skills and understanding from five domains: the affective, cognitive, interpersonal, metacognitive and psychomotor domains. These domains were therefore included in the instrument. Four cognate units were included in the study. Four experts participated, each familiar with at least one unit. Each expert read pairs of unit specifications and judged which was more demanding in each domain (affective, cognitive, interpersonal, metacognitive and psychomotor). Subsequently, they completed a questionnaire about their experience. The results are presented, and it was found that the demands instrument was suitable for comparing the demand of cognate units from vocational and general qualifications.

  • The validity of teacher assessed Independent Research Reports contributing to Cambridge Pre-U Global Perspectives and Research

    Greatorex, J. and Shaw, S. (2012). The validity of teacher assessed Independent Research Reports contributing to Cambridge Pre-U Global Perspectives and Research. Research Matters: A Cambridge Assessment publication, 14, 38-41.

    This research considered the validity of tutor-assessed, pre-university independent research reports. It investigated evidence of construct relevance in tutors’ interpretations of the levels awarded for candidates’ research process, which comprised designing, planning, managing and conducting their own research project using techniques and methods appropriate to the subject discipline. The research was conducted in the context of the Cambridge International Pre-U Global Perspectives and Independent Research qualification (the GPR), a pre-university qualification for 16-19 year olds designed to equip students with the skills required to make a success of their university studies. Tutors’ justifications for the levels they gave candidates were considered. In the first of two studies (Study 1), tutor justifications were qualitatively analysed for specific tutor behaviours that might indicate tutors interpreting levels in a construct-irrelevant way. In the second study (Study 2), external moderators (EMs) rated the justifications according to the extent to which they reflected the intended constructs. Study 1 showed little evidence of construct irrelevance and Study 2 provided strong evidence of construct relevance in tutors’ interpretation of the levels they awarded candidates for the research process.

  • The Cambridge International Examinations bilingual research agenda

    Imam, H. and Shaw, S. (2012). The Cambridge International Examinations bilingual research agenda. Research Matters: A Cambridge Assessment publication, 14, 42-45.

    The contexts within which students prepare for Cambridge Assessment International Education (Cambridge International) assessments are often linguistically and educationally diverse. Whatever the country, the common denominator of Cambridge International schools is that students are being tutored and assessed through the medium of English. Some schools use bilingual instruction, delivering certain subjects through English as an additional language and other subjects through the first language, often trying to meet standards in both an international curriculum and a national curriculum. The opportunity to learn an additional language through a content subject has led to the practice of content and language integrated learning (CLIL) programmes. Other schools use monolingual instruction and deliver all subjects through English, either as a first or as an additional language. This article describes a Cambridge International research agenda designed to address four key questions: What is the impact of different teaching environments? What impact does bilingual education have on the teaching and learning process? What is the impact of bilingual education on learner outcomes? What are the key assessment issues?

  • Cambridge Assessment Statistics Reports: Recent highlights

    Emery, J., Gill, T., Grayson, R. and Vidal Rodeiro, C. L. (2012). Cambridge Assessment Statistics Reports: Recent highlights. Research Matters: A Cambridge Assessment publication, 14, 45-50.

    The Research Division publishes a number of Statistics Reports each year based on the latest national examinations data. These are statistical summaries of various aspects of the English examination system, covering topics such as subject provision and uptake, popular subject combinations, trends over time in the uptake of particular subjects, and the examination attainment of different groups of candidates. The National Pupil Database (NPD) is the source of most of these reports. This is a very large longitudinal database, owned by the Department for Education, which tracks the examination attainment of all pupils within schools in England from their early years up to Key Stage 5 (A level or equivalent). Another database, the Pupil Level Annual School Census (PLASC), can be requested already matched to the NPD; it contains background information on candidates such as deprivation indicators, language, ethnicity and special educational needs. Other sources of data used to produce the Statistics Reports include the Inter-Awarding Body Statistics produced by the Joint Council for Qualifications (JCQ). This article highlights some of the most recent Statistics Reports, published in 2010 and 2011.

  • Research News

    The Research Division (2012). Research News. Research Matters: A Cambridge Assessment publication, 14, 51.

    A summary of recent conferences and seminars, and research articles published since the last issue of Research Matters.
