Research Matters 12

Contents

  • Research Matters 12 - Foreword

    Oates, T. (2011). Foreword. Research Matters: A Cambridge Assessment publication, 12, 1.

    This Research Matters is published at a time of considerable change. In periods of change, it is vital not to lose sight of those things to which we should constantly attend.


  • Research Matters 12 - Editorial

    Green, S. (2011). Editorial. Research Matters: A Cambridge Assessment publication, 12, 1.

    This issue takes us from matters related to examinations that were topical in 1911 through to today’s processes and techniques and the development of new research methods.


  • Lessons from the past: An overview of the issues raised in the 1911 'Report of the Consultative Committee on Examinations in Secondary Schools'

    Elliott, G. (2011). Lessons from the past: An overview of the issues raised in the 1911 'Report of the Consultative Committee on Examinations in Secondary Schools'. Research Matters: A Cambridge Assessment publication, 12, 2-7.

    This article celebrates the 100th anniversary of the publication 'Examinations in Secondary Schools' by reviewing its contents in the light of issues faced in the present day. Ten key issues were identified from the 1911 report, nearly all of which are still subject to current debate. The wealth of detail in the 1911 report makes for extremely interesting reading alongside records from recent times. Tracing, with the benefit of hindsight, how past decisions played out in policy and practice can provide an illuminating source of evidence for current educational debate.


  • Evaluating Senior Examiners' use of Item Level Data

    Shiell, H. and Raikes, N. (2011). Evaluating Senior Examiners' use of Item Level Data. Research Matters: A Cambridge Assessment publication, 12, 7-10.

    Many of Cambridge Assessment's written examination scripts are now scanned and marked on screen by examiners working on computers. One benefit arising from on-screen marking is that the marks are captured at item or question-part level and are available for analysis in Cambridge within hours of being submitted by examiners. Cambridge Assessment now routinely analyses these item marks and provides subject staff and senior examiners with reports containing Item Level Data (ILD) for nearly all examinations marked on screen. In this article, we present findings from an evaluation of senior CIE and OCR examiners’ use of these Item Level Data reports.
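    As a rough illustration of what such item-level analysis can involve, the sketch below computes two statistics commonly reported for examination items, facility and item-total discrimination; the statistics chosen and the data are illustrative assumptions, not a description of the Cambridge reports themselves.

    ```python
    # An illustrative sketch of simple item-level statistics. The choice of
    # statistics and the marks below are assumptions, not Cambridge's reports.
    from statistics import correlation, mean  # correlation requires Python 3.10+

    # Hypothetical item marks: one row per candidate, one column per item/part.
    max_marks = [4, 6, 10]
    marks = [
        [3, 5, 7],
        [2, 4, 6],
        [4, 6, 9],
        [1, 2, 3],
    ]

    totals = [sum(row) for row in marks]
    for i, mx in enumerate(max_marks):
        item = [row[i] for row in marks]
        facility = mean(item) / mx                  # mean mark as a share of maximum
        discrimination = correlation(item, totals)  # item-total correlation
        print(f"item {i + 1}: facility {facility:.2f}, "
              f"discrimination {discrimination:.2f}")
    ```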


  • Practical issues in early implementation of the Diploma Principal Learning

    Crisp, V. and Green, S. (2011). Practical issues in early implementation of the Diploma Principal Learning. Research Matters: A Cambridge Assessment publication, 12, 10-13.

    This short article reports on some of the findings from an interview study conducted in the first year of implementation of the 14–19 Diplomas. The Diplomas were introduced by the Labour government as part of wider educational reforms (DfES, 2005a, 2005b). They were designed to prepare young people for the world of work or for independent study, and are intended to combine theoretical and applied learning, to provide different ways of learning, to encourage students to develop skills valued by employers and universities, and to provide opportunities for students to apply skills to work situations in realistic contexts. They are also intended to help ensure that a wide range of appropriate learning pathways is available to young people, thus facilitating increased participation and attainment. The Diplomas are available at Levels 1, 2 and 3 and, rather than being taught by an individual school or college, are delivered through consortia consisting of small groups of schools and/or colleges working collaboratively. The Diploma is a composite qualification made up of the following elements: principal learning; generic learning; additional and specialist learning.

    The current research focused on the Principal Learning (PL). The PL components are specific to a domain or ‘line of learning’. Emphasis is placed on learning through experience of simulated or real work contexts and through applying and practically developing skills, as well as on theoretical learning. The PL components are assessed predominantly via assignments which are internally marked and externally moderated. Teaching of Diplomas in the first five ‘lines of learning’ began in September 2008, with a further five beginning in September 2009 and four in September 2010.

    Six consortia running Phase 1 Diplomas in the first year of implementation took part in this research. At each consortium, one or more teachers and (in all but one case) a number of learners were interviewed about the learning that was occurring and various practicalities around implementation of the Diploma. This article reports on the latter.


  • The effect of changing component grade boundaries on the assessment outcome in GCSEs and A levels

    Bramley, T. and Dhawan, V. (2011). The effect of changing component grade boundaries on the assessment outcome in GCSEs and A levels. Research Matters: A Cambridge Assessment publication, 12, 13-18.

    GCSE and A level assessments are graded examinations, where grade boundaries are set on the raw mark scale of each of the units/components comprising the assessment. These boundaries are then aggregated, in a way that depends on the type of assessment, to produce the grade boundaries for the assessment as a whole. This article reports a simple 'sensitivity analysis' determining the effect on assessment grade boundaries of varying the (judgementally set) key grade boundaries on the units/components by ±1 mark. Two assessments with different structures were used: a tiered ‘linear’ GCSE and a six-unit ‘modular’ A level.
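    The aggregation step lends itself to simple simulation. Below is a minimal sketch of this kind of sensitivity analysis in Python, assuming a 'linear' assessment whose overall boundary is simply the sum of its component boundaries; the candidate marks and boundary values are invented for illustration and are not taken from the article.

    ```python
    # A minimal sketch of a boundary sensitivity analysis, assuming the overall
    # boundary is the sum of component boundaries. All values are hypothetical.
    import random

    random.seed(1)

    # Hypothetical marks for 1,000 candidates on two components (max 80 and 60).
    candidates = [(random.randint(0, 80), random.randint(0, 60))
                  for _ in range(1000)]

    component_boundaries = [48, 36]  # judgementally set boundaries for one grade

    def proportion_at_grade(boundaries):
        """Proportion of candidates at or above the aggregated boundary."""
        cut = sum(boundaries)
        return sum(m1 + m2 >= cut for m1, m2 in candidates) / len(candidates)

    baseline = proportion_at_grade(component_boundaries)
    for i, b in enumerate(component_boundaries):
        for shift in (-1, +1):
            varied = list(component_boundaries)
            varied[i] = b + shift
            delta = proportion_at_grade(varied) - baseline
            print(f"component {i + 1} boundary shifted {shift:+d}: "
                  f"proportion at grade changes by {delta:+.3f}")
    ```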


  • An American University Case Study Approach to Predictive Validity: Exploring the issues

    Shaw, S. and Bailey, C. (2011). An American University Case Study Approach to Predictive Validity: Exploring the issues. Research Matters: A Cambridge Assessment publication, 12, 18-26.

    Predictive validity entails comparing test scores with some other measure, obtained for the same candidates some time after the test has been taken. For tests that are used for university selection purposes, it is vital to demonstrate predictive validity. The research reported here uses data collected from three cohorts of students enrolled at Florida State University. The data include information about each student’s performance at high school, ethnicity, gender and first-year GPA. Multilevel modelling was applied to the data using the statistical software package MLwiN to investigate the relationships between the variables and, in particular, to determine which are the best indicators of academic success at university, whilst taking into account the effects of individual high schools. Issues relating to the choice of predictor and university success measures, intervening variables, controlling for selection bias, data and measurement, and choice of research model are discussed in the context of one American university.
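    The general shape of such a model can be sketched as below, using Python's statsmodels package rather than MLwiN; the file name and column names (gpa, sat, hs_gpa, school and so on) are hypothetical stand-ins for the variables described.

    ```python
    # A minimal sketch of the kind of multilevel model described, fitted with
    # statsmodels rather than MLwiN. All file and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("students.csv")  # one row per student (hypothetical data)

    # First-year GPA predicted from admissions measures and demographics, with
    # a random intercept per high school to capture school-level clustering.
    model = smf.mixedlm(
        "gpa ~ sat + hs_gpa + C(gender) + C(ethnicity)",
        data=df,
        groups=df["school"],
    )
    result = model.fit()
    print(result.summary())
    ```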


  • Evaluating the CRAS framework: Development and recommendations

    Johnson, M. and Mehta, S. (2011). Evaluating the CRAS framework: Development and recommendations. Research Matters: A Cambridge Assessment publication, 12, 27-33.

    This article reviews conceptual issues surrounding comparisons of demand through a critical evaluation of the CRAS (Complexity-Resources-Abstractness-Strategy) framework (Pollitt, Hughes, Ahmed, Fisher-Hoch and Bramley, 1998).

    The article traces the origins of the CRAS framework to the scale of cognitive demand (Edwards and Dall’Alba, 1981). The framework’s characteristics are then described, with attention drawn to the assumptions that underlie them. The article culminates in a set of recommendations and guidance for potential users of the CRAS framework.


  • Developing a research tool for comparing qualifications

    Greatorex, J., Mehta, S., Rushton, N., Hopkin, R. and Shiell, H. (2011). Developing a research tool for comparing qualifications. Research Matters: A Cambridge Assessment publication, 12, 33-42.

    Comparability studies about qualification standards generally use demand or candidates’ performance as comparators. However, these can be unrepresentative for vocational and new qualifications. Consequently, other comparators need to be used. This article details the process of devising and piloting a research instrument to compare the features of cognate units from diverse qualifications and subjects.

    First, knowledge was elicited from twelve experts through Kelly’s repertory grid interviews, in which they were asked to compare different types of qualifications. The data was analysed thematically, and four features and several sub-features were identified. These features were used to categorise the interview data and to develop the research instrument. A pilot of the instrument indicated that salient features varied between units; the instrument is therefore suitable for use in future comparability studies of qualification features. However, conventions still need to be agreed on how to analyse the data collected using the instrument.


  • Statistical Reports

    The Statistics Team (2011). Statistical Reports. Research Matters: A Cambridge Assessment publication, 12, 43.

    The ongoing Statistics Reports Series provides statistical summaries of various aspects of the English examination system, such as trends in pupil uptake and attainment, qualifications choice, subject combinations and subject provision in schools. This article contains a summary of the most recent additions to the series.


  • Research News

    The Research Division (2011). Research News. Research Matters: A Cambridge Assessment publication, 12, 43-44.

    A summary of recent conferences and seminars, and research articles published since the last issue of Research Matters.
