Research Matters 03

Contents

  • Research Matters 3 Foreword

    Oates, T. (2007). Foreword. Research Matters: A Cambridge Assessment publication, 3, 1.

    I am pleased to introduce the third issue of Research Matters, which again seeks to stimulate debate and information exchange on matters central to assessment.

    Download

  • Research Matters 3 Editorial

    Green, S. (2007). Editorial. Research Matters: A Cambridge Assessment publication, 3, 1.

    In this issue we report on topics ranging from the construct of Critical Thinking to the factors affecting examination success at A-level.

    Download

  • Critical Thinking - a tangible construct?

    Black, B. (2007). Critical Thinking - a tangible construct? Research Matters: A Cambridge University Press & Assessment publication, A selection of articles (2011), 11-13. First published in Research Matters, Issue 3, January 2007.

    This article introduces some of the debates about defining the construct of Critical Thinking and some of the implications for assessment of Critical Thinking.

    Download

  • Difficulties in evaluating the predictive validity of selection tests

    Bell, J. F. (2007). Difficulties in evaluating the predictive validity of selection tests. Research Matters: A Cambridge Assessment publication, 3, 5-10.

    One of the most important problems associated with evaluating the predictive validity of a selection test is that the outcome variable is only known for the selected applicants. This article uses simulated data to compare six different selection methods. It shows that uncorrected correlation coefficients are difficult to interpret and, depending on the circumstances, can seriously underestimate the effectiveness of a selection test. An illustrative simulation of this restriction-of-range effect is sketched after this entry.

    Download
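    The restriction-of-range problem described above can be made concrete with a small simulation. The sketch below uses hypothetical, simulated scores (not the article's data, and not its six selection methods): a test and a later outcome correlate 0.5 in the full applicant pool, but the outcome is observed only for the top 20% of applicants on the test, so the uncorrected correlation computed on the selected group alone is much smaller.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a selection-test score and a later outcome
# (e.g. a first-year examination mark) correlate 0.5 in the full pool.
n_applicants = 100_000
true_r = 0.5
test = rng.standard_normal(n_applicants)
outcome = true_r * test + np.sqrt(1 - true_r**2) * rng.standard_normal(n_applicants)

# Only the top 20% of applicants on the test are admitted, so the outcome
# is known for them alone (restriction of range).
selected = test > np.quantile(test, 0.80)

r_full = np.corrcoef(test, outcome)[0, 1]
r_selected = np.corrcoef(test[selected], outcome[selected])[0, 1]
print(f"correlation in full applicant pool : {r_full:.2f}")
print(f"correlation among selected only    : {r_selected:.2f}")
```

    In this sketch the selected-only correlation comes out well below the full-pool value, even though the test is equally predictive for every applicant, which is the sense in which uncorrected coefficients can seriously underestimate a test's effectiveness.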

  • Using Thinking Skills Assessment in University admissions

    Emery, J. and Bell, J. F. (2007). Using Thinking Skills Assessment in University admissions. Research Matters: A Cambridge Assessment publication, 3, 10-13.

    At the time of writing, most University of Cambridge colleges use the Thinking Skills Assessment (TSA), developed by Cambridge Assessment, during the admissions process. The range of subjects for which it is used varies from college to college. The test provides "supplementary information" to help inform admissions decisions. Obviously, to be meaningful, any such selection tool must be able to predict future performance. This issue of predictive validity is the focus of this article, which reports on the 2003 TSA scores of Computer Science students and their subsequent first-year (Part 1A) examination results, taken in Summer 2005.

    Download

  • Factors affecting examination success at A-level

    Vidal Rodeiro, C. L. and Bell, J. F. (2007). Factors affecting examination success at A-level. Research Matters: A Cambridge Assessment publication, 3, 14-19.

    Previous research has shown that background information about students (such as gender or ethnicity) is an important predictor of attainment. It has also provided evidence of links between students’ socio-economic characteristics and their educational attainment: for example, measures of socio-economic status, parents’ educational background, family structure and income have been shown to be important predictors of attainment at secondary level. Such factors have also been found to be strongly related to measures of prior attainment at entry to school. In this research, we use information from different databases to investigate the contribution of students’ attainment at GCSE, family background, schooling and neighbourhood to their success in GCE A-levels. In particular, we focus on students’ performance in GCE A-level Chemistry. An illustrative regression sketch, using simulated data, follows this entry.

    Download
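    As a purely illustrative sketch of this kind of analysis (the authors' databases, variables and models are not reproduced here), the snippet below fits an ordinary least squares regression to simulated data in which A-level points depend on mean GCSE score and two made-up background indicators; all variable names and coefficients are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000

# Simulated, hypothetical data: mean GCSE score, free-school-meal eligibility
# and school type as predictors of an A-level points score.
df = pd.DataFrame({
    "gcse_mean": rng.normal(6.0, 1.0, n),
    "fsm": rng.binomial(1, 0.15, n),
    "independent_school": rng.binomial(1, 0.10, n),
})
df["alevel_points"] = (
    20 * df["gcse_mean"] - 5 * df["fsm"] + 3 * df["independent_school"]
    + rng.normal(0, 15, n)
)

# How much does each factor contribute once the others are held constant?
model = smf.ols("alevel_points ~ gcse_mean + fsm + independent_school", data=df).fit()
print(model.summary().tables[1])
```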

  • A-level uptake: 'Crunchier subjects' and the 'Cracker effect'

    Bell, J. F., Malacova, E., Vidal Rodeiro, C. L. and Shannon, M. (2007). A-level uptake: 'Crunchier subjects' and the 'Cracker effect'. Research Matters: A Cambridge Assessment publication, 3, 19-25.

    One of the recent claims made about A-levels is that students are opting for allegedly easier subjects. Furthermore, the University of Cambridge has produced a list of A-level subjects that it regards as less effective preparation for its courses, for example Business Studies, Media Studies and Sports Studies.

    In this article we investigate the uptake of A-levels in England from 2001 to 2005, a period that covers the transition to Curriculum 2000. Under this reform, students were expected to study four or five subjects at AS-level in the first year of the sixth form and then choose three of them to continue to A-level, the objective being to broaden the curriculum and provide more balance.

    For most subjects and groups of subjects there has been very little change in uptake during the period under study. For some, there were changes associated with Curriculum 2000, but uptake has subsequently stabilised. Of greater concern are the subjects whose uptake has declined throughout the whole period, for example Geography, Physics and Modern Languages. For Science and Mathematics, there is a need to consider how participation in these subjects can be extended beyond a very able elite.

    Download

  • Discussion piece: The psychometric principles of assessment

    Rust, J. (2007). Discussion piece: The psychometric principles of assessment. Research Matters: A Cambridge Assessment publication, 3, 25-27.

    Psychometrics, the science of psychological assessment, is a foundation of assessment and measurement. Within psychometrics there are four fundamental principles whereby the quality of an assessment is judged. These are (1) reliability, (2) validity, (3) standardisation and (4) freedom from bias. Reliability is the extent to which an assessment is free from error; validity is the extent to which a test or examination assesses what it purports to assess; standardisation gives us information on how the result of an assessment is to be judged; and freedom from bias examines the extent and causes of differences between groups. These four principles inform not only test use but also the entire process of test development, from the original curriculum or job specification, via the choice and appraisal of examination questions and test items, through to the eventual evaluation of the success or otherwise of the assessment itself. A small worked illustration of one standard reliability index is sketched after this entry.

    Download
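    The article discusses these principles in general terms. As a concrete illustration of the reliability principle only, the snippet below computes Cronbach's alpha, one standard internal-consistency index, for a small made-up matrix of item marks; the data and the choice of index are illustrative assumptions, not taken from the article.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_candidates x n_items) matrix of item marks."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of candidates' total marks
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical marks: five candidates on four items of the same test.
marks = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(marks):.2f}")
```

    Values close to 1 indicate that the items rank candidates consistently, i.e. that the total mark contains relatively little random error, which is what the reliability principle asks of an assessment.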

  • Is passing just enough? Some issues to consider in grading competence-based assessments

    Johnson, M. (2007). Is passing just enough? Some issues to consider in grading competence-based assessments. Research Matters: A Cambridge Assessment publication, 3, 27-30.

    Competence-based assessment involves judgements about whether candidates are competent or not. For a variety of historical reasons, competence-based assessment has had an ambivalent relationship with grading (i.e., identifying different levels of competence), although it is accepted by some that ‘grading is a reality’ (Thomson, Saunders and Foyster, 2001, p.4). The question of grading in competence-based qualifications is particularly important in the light of recent national and international moves towards developing unified frameworks for linking qualifications. This article uses validity as a basis for discussing some of the issues that surround the grading of competence-based assessments and is structured around ten key points.

    Download

  • Research News

    The Research Division (2007). Research News. Research Matters: A Cambridge Assessment publication, 3, 32.

    A summary of recent conferences and seminars, and research articles published since the last issue of Research Matters.

    Download