Register of Change Part 2: 2010-2021
Rushton, N. & Ireland, J. (2022). Register of Change Part 2: 2010-2021. Cambridge University Press & Assessment Research Report. Cambridge, UK: Cambridge University Press & Assessment.
Register of Change Part 1: 2000-2010
Rushton, N. (2022). Register of Change Part 1: 2000-2010. Cambridge University Press & Assessment Research Report. Cambridge, UK: Cambridge University Press & Assessment.
COVID-19 related consultations 2020-2021
Rushton, N. (2022). COVID-19 related consultations 2020-2021: A Register of Change supplementary document. Cambridge University Press & Assessment Research Report. Cambridge, UK: Cambridge University Press & Assessment.
Spelling errors in 16-year-olds’ writing
Rushton, N. (2017). Spelling errors in 16-year-olds’ writing. Presented at the annual conference of the British Educational Research Association, University of Sussex, Brighton, UK, 5-7 September 2017.
Teachers’ and employers’ views on the transition from GCSE Mathematics to A level Mathematics or employment
Rushton, N. & Wilson, F. (2015). Teachers’ and employers’ views on the transition from GCSE Mathematics to A level Mathematics or employment. Research Matters: A Cambridge Assessment publication, 20, 21-27.
Mathematics is one of the core GCSE subjects, and students are required to study it until the end of Key Stage 4 (KS4), when they are aged approximately 16. Although there is no requirement for students to take a qualification in Mathematics, almost all do. GCSE Mathematics is important because it marks the end of students' compulsory Mathematics learning. The current study aimed to identify the areas of Mathematics that were problematic for students who had just completed GCSE Mathematics. It also aimed to discover whether there was any overlap between the skills considered to be problematic as preparation for A level and those considered to be problematic as preparation for employment. It uses responses from a larger survey of teachers and employers to consider three research questions:
1. What areas of Mathematics are GCSE students well/poorly prepared in?
2. What teaching is needed to bring students up to the standard for starting A level Mathematics?
3. What Mathematics training do employers run for school leavers?
Course struggle, exam stress, or a fear of the unknown? A study of A level students’ assessment preferences and the reasons behind them
Suto, I., Elliott, G., Rushton, N. and Mehta, S. (2014). Course struggle, exam stress, or a fear of the unknown? A study of A level students’ assessment preferences and the reasons behind them. Educational Futures (ejournal of the British Educational Studies Association), 6(2).
Common errors in Mathematics
Rushton, N. (2014). Common errors in Mathematics. Research Matters: A Cambridge Assessment publication, 17, 8-17.
When answering Mathematics questions, students often make errors that lead to incorrect answers or the loss of accuracy marks. Many of these errors are random, arising from calculation slips or misreading of the question, and do not affect many candidates. However, some errors appear across a number of students’ scripts; these are sometimes referred to as common errors.
The aim of this study was to identify common errors made in Mathematics exams. Three Mathematics specifications were used: IGCSE Mathematics (0580), GCSE Mathematics A (J512) and GCSE Mathematics B (J567). Copies of the examiners’ reports and exam papers were obtained for all three qualifications for June 2009, 2010, 2011 and 2012. Within each examiner’s report, any common errors that candidates made were coded against a theme and sub-theme. The results were intended to inform the redevelopment of the Mathematics qualifications, and to provide useful information for teachers and examiners.
The pitfalls and positives of pop comparability
Rushton, N., Haigh, M., and Elliott, G. (2011). The pitfalls and positives of pop comparability. Research Matters: A Cambridge Assessment publication, Special Issue 2, 52-56.
The media debate about standards in public examinations has become an August ritual. The debate tends to be polarised, with reports of ‘slipping standards’ at odds with claims that educational prowess has increased. Some organisations have taken matters into their own hands and carried out their own studies investigating this. Some of these are similar to academic papers; others are closer in nature to a media campaign. In the same way that ‘pop psychology’ describes psychological concepts which attain popularity amongst the wider public, so ‘pop comparability’ can describe the evolution of a lay-person’s view of comparability. Studies, articles or programmes which influence this wider view fall into this category and are often accessed by a much larger audience than academic papers. In this article, five such studies are considered: Series 1 of the televised social experiment ‘That’ll Teach ‘em’; the Royal Society of Chemistry’s Five-Decade Challenge; journalists from The Guardian and The Times (re)sitting examinations to experience their difficulty; a feature on the BBC Radio 4 programme ‘Today’ (2009), in which students discussed exam papers from 1936; and a book of O level past papers and an associated newspaper article describing students’ experiences of sitting the O level exams.
Developing a research tool for comparing qualifications
Greatorex, J., Mehta, S., Rushton, N., Hopkin, R. and Shiell, H. (2011). Developing a research tool for comparing qualifications. Research Matters: A Cambridge Assessment publication, 12, 33-42.
Comparability studies about qualification standards generally use demand or candidates’ performance as comparators. However, these can be unrepresentative for vocational and new qualifications. Consequently, other comparators need to be used. This article details the process of devising and piloting a research instrument to compare the features of cognate units from diverse qualifications and subjects.
First, knowledge was elicited from twelve experts through Kelly’s repertory grid interviews, in which they were asked to compare different types of qualifications. These data were analysed thematically, and four features and several sub-features were identified. These features were used to categorise the interview data and develop the research instrument. A pilot of the instrument indicated that salient features varied between units; the instrument is therefore suitable for use in future comparability studies about features. However, conventions still need to be agreed for how to analyse the data collected using the instrument.