Martina Kuvalja

Martina works as a Senior Researcher in the “Digital Assessment and Evaluation” team. She manages the team's research programme and provides research consultancy to digital assessment product teams, ensuring that customers' needs and well-being are at the heart of what the organisation builds and researches. She is a peer reviewer and a member of the Cambridge University Press & Assessment Research Ethics Committee.

In her seven years at the organisation, Martina has designed and managed numerous studies, ranging from investigations of marking reliability in high-stakes UK exams to evaluations of curriculum and examination reforms internationally.

Previously, Martina worked as a consultant for several not-for-profit organisations, and as a postdoctoral researcher at the University of Cambridge. During that time, she also taught and supervised postgraduate students.

Martina has a PhD in Educational Psychology from the University of Cambridge, where she investigated the development of self-regulated learning and metacognition in children. Self-regulation and metacognition remain her research interests, and she delivers workshops and talks on the topic for teachers, the assessment community and product teams. In the past, she has consulted for product teams, helping them run UX studies and develop digital learning and assessment products that encourage learners' metacognition and self-regulated learning.



Does ChatGPT make the grade?

Brady, J., Kuvalja, M., Rodrigues, A., & Hughes, S. (2024). Does ChatGPT make the grade? Research Matters: A Cambridge University Press & Assessment publication, 37, 24-39.

This study explores undergraduate students’ use of ChatGPT when writing essays. Three students were tasked with writing two essays each for a coursework component for a Cambridge qualification facilitated by access to ChatGPT. After writing the essays, they participated in semi-structured interviews about their experiences of using the technology. Researchers compared the transcript of the chatlog between the students and ChatGPT with the submitted essays. Analysis showed that the students relied on ChatGPT outputs to different extents, although they followed a similar process of engagement. The students shared their misgivings and points of appreciation for the technology.

Research Matters 37: Spring 2024
  • Foreword Tim Oates
  • Editorial Tom Bramley
  • Extended Reality (XR) in mathematics assessment: A pedagogical vision Xinyue Li
  • Does ChatGPT make the grade? Jude Brady, Martina Kuvalja, Alison Rodrigues, Sarah Hughes
  • How do approaches to curriculum mapping affect comparability claims? An analysis of mathematics curriculum content across two educational jurisdictions Nicky Rushton, Dominika Majewska, Stuart Shaw
  • Exploring speededness in pre-reform GCSEs (2009 to 2016) Emma Walland
  • A Short History of the Centre for Evaluation and Monitoring (CEM) Chris Jellis
  • Research News Lisa Bowett


The Futures of Assessment: Navigating Uncertainties through the Lenses of Anticipatory Thinking

Abu Sitta, F., Maddox, B., Casebourne, I., Hughes, S., Kuvalja, M., Hannam, J., & Oates, T. (2023). The Futures of Assessment: Navigating Uncertainties through the Lenses of Anticipatory Thinking. Cambridge University Press & Assessment Research Report. Cambridge, UK: Cambridge University Press & Assessment.

Research Matters


Research Matters is our free biannual publication, which allows us to share our assessment research, across a range of fields, with the wider assessment community.