Lucy Chambers

I joined Cambridge University Press and Assessment in 2004, first working in the Cambridge Assessment English group and then moving to the Research Division in 2012. I have worked on a number of projects, including developing methods and metrics to monitor the quality of marking, examination comparability, and research data management. My current interests include the moderation of school-based assessment and investigating the validity of comparative judgement. I am a University of Cambridge Data Champion and an EMCC-accredited coach.

Prior to working at Cambridge University Press and Assessment, I taught English in Japan, the Czech Republic and the UK, in both the private school and business sectors. I hold an MA in Applied Linguistics from Anglia Ruskin University, a PGDip in Health Psychology from City University and a BSc in Psychology from the University of Stirling.

Outside of work, I enjoy gardening, doing house renovations, and a little bit of dancing. I volunteer for a medical charity and am a lay member of their research awards panel.

Publications

2024

A structured discussion of the fairness of GCSE and A level grades in England in summer 2020 and 2021.
Crisp, V., Elliott, G., Walland, E., & Chambers, L. (2024). A structured discussion of the fairness of GCSE and A level grades in England in summer 2020 and 2021. Research Papers in Education.
Moderation of non-exam assessments: a novel approach using comparative judgement.

Chambers, L., Vitello, S., & Vidal Rodeiro, C. (2024). Moderation of non-exam assessments: a novel approach using comparative judgement. Assessment in Education: Principles, Policy & Practice.

2022

How do judges in Comparative Judgement exercises make their judgements?
Leech, T., & Chambers, L. (2022, November 11). How do judges in Comparative Judgement exercises make their judgements? [Open paper presentation]. 23rd Annual Meeting of the Association for Educational Assessment – Europe, Dublin, Ireland.
Online moderation of non-exam assessments: is Comparative Judgement a practical alternative?
Vidal Rodeiro, C. L., & Chambers, L. (2022, November 10-12). Online moderation of non-exam assessments: is Comparative Judgement a practical alternative? [Paper presentation]. AEA-Europe Annual Conference, Dublin, Ireland.
Exploring the Validity of Comparative Judgement: Do Judges Attend to Construct-Irrelevant Features?
Chambers, L., & Cunningham, E. (2022). Exploring the Validity of Comparative Judgement: Do Judges Attend to Construct-Irrelevant Features? Frontiers in Education, 7, 802392.
Research Matters 33: Spring 2022
  • Foreword Tim Oates
  • Editorial Tom Bramley
  • A summary of OCR’s pilots of the use of Comparative Judgement in setting grade boundaries Tom Benton, Tim Gill, Sarah Hughes, Tony Leech
  • How do judges in Comparative Judgement exercises make their judgements? Tony Leech, Lucy Chambers
  • Judges' views on pairwise Comparative Judgement and Rank Ordering as alternatives to analytical essay marking Emma Walland
  • The concurrent validity of Comparative Judgement outcomes compared with marks Tim Gill
  • How are standard-maintaining activities based on Comparative Judgement affected by mismarking in the script evidence? Joanna Williamson
  • Moderation of non-exam assessments: is Comparative Judgement a practical alternative? Carmen Vidal Rodeiro, Lucy Chambers
  • Research News Lisa Bowett
Moderation of non-exam assessments: is Comparative Judgement a practical alternative?

Vidal Rodeiro, C. L., & Chambers, L. (2022). Moderation of non-exam assessments: is Comparative Judgement a practical alternative? Research Matters: A Cambridge University Press & Assessment publication, 33, 100-119.

Many high-stakes qualifications include non-exam assessments that are marked by teachers. Awarding bodies then apply a moderation process to bring the marking of these assessments to an agreed standard. Comparative Judgement (CJ) is a technique where two (or more) pieces of work are compared at a time, allowing an overall rank order of work to be generated.

This study explored the practical feasibility of using CJ for moderation via an experimental moderation task requiring judgements of pairs of authentic portfolios of work. It examined whether moderators could view and navigate the portfolios well enough to make the comparative judgements, on what basis they made their decisions, whether they could be confident making CJ judgements on large pieces of candidate work (e.g., portfolios), and the time taken to moderate.

How do judges in Comparative Judgement exercises make their judgements?

Leech, T. & Chambers, L. (2022). How do judges in Comparative Judgement exercises make their judgements? Research Matters: A Cambridge University Press & Assessment publication, 33, 31–47.

Two of the central issues in comparative judgement (CJ), which are perhaps underexplored compared with questions of the method’s reliability and technical quality, are “what processes do judges use to make their decisions?” and “what features do they focus on when making their decisions?” This article discusses both, in the context of CJ for standard maintaining, by reporting the results of a study into the processes used by judges when making CJ judgements and the outcomes of surveys of judges who have used CJ. First, drawing on observations of judges and on their think-aloud reports while judging, we highlight the variety of processes used when making decisions, including comparative reference, re-marking and question-by-question evaluation. We then develop a four-dimensional model of what influences what judges attend to, and explore through survey responses the distinctive ways in which the structure of the question paper, different elements of candidate responses, judges’ own preferences and the CJ task itself affect decision-making. We conclude by discussing, in the light of these factors, whether the judgements made in CJ (or in the judgemental element of current standard maintaining procedures) are meaningfully holistic, and whether judges can properly take into account differences in difficulty between different papers.

2020

Non-standard English in UK students' writing over time.
Constantinou, F., & Chambers, L. (2020). Non-standard English in UK students' writing over time. Language and Education, 34(1), 22-35.

2019

Moderation of non-exam assessments: a novel approach using comparative judgement
Chambers, L., Vitello, S., & Vidal Rodeiro, C. L. (2019). Moderation of non-exam assessments: a novel approach using comparative judgement. Presented at the 20th annual AEA-Europe conference, Lisbon, Portugal, 13-16 November 2019.
Moderating artwork: Investigating judgements and cognitive processes

Chambers, L., Williamson, J., & Child, S. (2019). Moderating artwork: Investigating judgements and cognitive processes. Research Matters: A Cambridge Assessment publication, 27, 19-25.

In this article, we explore the cognitive processes and resources drawn upon when moderating artwork. The cognitive processes involved in the external moderation of non-exam assessments have received little attention; the few research studies that exist investigated moderation where the candidates’ submissions were mostly in written form. No studies were found that explicitly looked at a non-written submission, such as artwork. In this small-scale study, participating moderators were asked to “think aloud” whilst moderating candidates’ Art and Design submissions. An analysis of the resulting verbal protocol and observational data enabled timelines of moderator activity to be produced. From these, a process map containing moderation stages, activities, cognitive processes and resource use was developed.

Moderating artwork - investigating judgements and cognitive processes
Chambers, L., Williamson, J., & Child, S. (2019). Moderating artwork - investigating judgements and cognitive processes. Presented at the annual MAXQDA conference, Berlin, Germany, 27 February - 1 March 2019.
A diachronic perspective on formality in students' writing: empirical findings from the UK
Constantinou, F., Chambers, L., Zanini, N., & Klir, N. (2019). A diachronic perspective on formality in students' writing: empirical findings from the UK. Language, Culture and Curriculum, 33(1), 66-83.

2018

'That path won't lead nowhere': non-standard English in UK students' writing over time
Constantinou, F., & Chambers, L. (2018). 'That path won't lead nowhere': non-standard English in UK students' writing over time. Presented at the annual conference of the British Educational Research Association (BERA), Newcastle, UK, September 2018.

2017

Alternative uses of examination data: the case of English Language writing
Chambers, L., Constantinou, F., Zanini, N., & Klir, N. (2017). Alternative uses of examination data: the case of English Language writing. Presented at the 18th annual AEA Europe conference, Prague, 9-11 November 2017.
Formality in students’ writing over time: empirical findings from the UK
Constantinou, F., Chambers, L., Zanini, N., & Klir, N. (2017). Formality in students’ writing over time: empirical findings from the UK. Presented at the annual European Conference of Educational Research, Copenhagen, Denmark, 22-25 August 2017.
Evaluating blended learning: Bringing the elements together

Bowyer, J., & Chambers, L. (2017). Evaluating blended learning: Bringing the elements together. Research Matters: A Cambridge Assessment publication, 23, 17-26.

This article provides a brief introduction to blended learning, its benefits, and factors to consider when implementing a blended learning programme. It then concentrates on how to evaluate a blended learning programme and describes a number of published evaluation frameworks. There are numerous frameworks and instruments for evaluating blended learning, although no particular one seems to be favoured in the literature. This is partly due to the diversity of reasons for evaluating blended learning systems, as well as the many intended audiences and perspectives for these evaluations. The article concludes by introducing a new framework which brings together many of the constructs from existing frameworks whilst adding new elements. Its aim is to encompass all aspects of the blended learning situation, permitting researchers and evaluators to easily identify the relationships between the different elements whilst still enabling focussed and situated evaluation.

2016

Research Matters Special Issue 4: Aspects of Writing 1980-2014
  • Variations in aspects of writing in 16+ English examinations between 1980 and 2014 Gill Elliott, Sylvia Green, Filio Constantinou, Sylvia Vitello, Lucy Chambers, Nicky Rushton, Jo Ireland, Jessica Bowyer, David Beauchamp

2015

Piloting a method for comparing examination question paper demands
Chambers, L., Greatorex, J., Constantinou, F., & Ireland, J. (2015). Piloting a method for comparing examination question paper demands. Paper presented at the AEA-Europe annual conference, Glasgow, Scotland, 4-7 November 2015.
Piloting a method for comparing examination question paper demands
Greatorex, J., Chambers, L., Constantinou, F., & Ireland, J. (2015). Piloting a method for comparing examination question paper demands. Paper presented at the British Educational Research Association (BERA) conference, Belfast, UK, 14-17 September 2015.

2012

The Hebei Impact Project: A study into the impact of Cambridge English exams in the state sector in Hebei province, China

Chambers, L., Elliott, M., & Jianguo, H. (2012). The Hebei Impact Project: A study into the impact of Cambridge English exams in the state sector in Hebei province, China. Research Notes, 50, 20-23.

An exploration of how independent research and project management skills can be developed and assessed among 16 to 19 year olds
Suto, I., Nadas, R., & Chambers, L. (2012). An exploration of how independent research and project management skills can be developed and assessed among 16 to 19 year olds. Paper presented at the British Educational Research Association (BERA) conference, Manchester, UK, 4-6 September 2012.
Test taker familiarity and speaking test performance: Does it make a difference?

Chambers, L., Galaczi, E., & Gilbert, S. (2012). Test taker familiarity and speaking test performance: Does it make a difference? Research Notes, 49, 33-40.

2011

Composition and revision in computer-based written assessment

Chambers, L. (2011). Composition and revision in computer-based written assessment. Research Notes, 43, 25-32.

The BULATS online speaking test

Chambers, L., & Ingham, K. (2011). The BULATS online speaking test. Research Notes, 43, 21-25.

2010

Composition and revision in computer-based written assessment
Chambers, L. (2010). Composition and revision in computer-based written assessment. Presented at the 43rd conference of the British Association of Applied Linguistics, Aberdeen, UK, 9-10 September 2010.

2009

Using the CEFR to inform assessment criteria development for Online BULATS speaking and writing

Chambers, L. (2009). Using the CEFR to inform assessment criteria development for Online BULATS speaking and writing. Research Notes, 38, 29-31.

Computer-based and paper-based writing assessment: A comparative text analysis
Chambers, L. (2009). Computer-based and paper-based writing assessment: A comparative text analysis. Presented at the 42nd conference of the British Association of Applied Linguistics, Newcastle, UK, 3-5 September 2009.

2008

Computer-based and paper-based writing assessment: a comparative text analysis

Chambers, L. (2008). Computer-based and paper-based writing assessment: a comparative text analysis. Research Notes, 34, 9-15.

Research Matters

Research Matters is our free biannual publication which allows us to share our assessment research, in a range of fields, with the wider assessment community.