Tori Coleman

Having joined Cambridge University Press and Assessment as a research assistant in the summer of 2016, I have worked on a range of projects relating to educational taxonomies, accessibility of examination papers, construct validity, and curriculum mapping. I am also involved in co-ordinating a series of qualitative research methods workshops and reading groups for colleagues. My current areas of research relate to curriculum evaluation models and the comparability of optional examination questions.

I have a BSc in Psychology from the University of Bath, and an MPhil in Education (Psychology and Education) from the University of Cambridge.

Outside of work I volunteer with Girlguiding UK and am currently a Brownie leader.

Publications

2021

Digital divide in UK education during COVID-19 pandemic: Literature review
Coleman, V. (2021). Digital divide in UK education during COVID-19 pandemic: Literature review. Cambridge Assessment Research Report. Cambridge, UK: Cambridge Assessment.
Early policy response to COVID-19 in education—A comparative case study of the UK countries

Mouthaan, M., Johnson, M., Greatorex, J., Coleman, V., and Fitzsimons, S. (2021). Early policy response to COVID-19 in education—A comparative case study of the UK countries. Research Matters: A Cambridge Assessment publication, 31, 51-67.

Inspired by the work of David Raffe and his co-authors, who set out the benefits of comparing the policies of “the UK home nations” in an article published in 1999, researchers in the Education and Curriculum Team launched a project in early 2020 that we called Curriculum Watch. The aim of this project was to compile a database of literature and documents on education and curriculum policies, research and analyses from across the four countries of the United Kingdom (UK).

In this article, we draw on our literature database to make sense of the rapid changes in education policy that occurred in the early stages of the COVID-19 pandemic in the four UK nations of England, Scotland, Wales and Northern Ireland. We analyse some of the key areas of UK policy formation and content (in relation to curriculum, pedagogy and assessment) that we observed during the first six months of the unfolding pandemic. In addition, we reiterate the clear benefits of using comparative research methods in the UK context: our findings support the idea that the closeness of national contexts offers opportunities for evidence exchange and policy learning in education.

2020

Context matters—Adaptation guidance for developing a local curriculum from an international curriculum framework

Fitzsimons, S., Coleman, V., Greatorex, J., Salem, H., and Johnson, M. (2020). Context matters—Adaptation guidance for developing a local curriculum from an international curriculum framework. Research Matters: A Cambridge Assessment publication, 30, 12-18.

The Learning Passport (LP) is a collaborative project between the University of Cambridge, UNICEF and Microsoft, which aims to support the UNICEF goal of providing quality education for children and youth whose education has been disrupted by crisis or disaster. A core component of this project is a curriculum framework for Mathematics, Science and Literacy which supports educators working in emergency contexts. This framework provides a broad outline of the essential content progressions that should be incorporated into a curriculum to support quality learning in each subject area, and is intended to act as a blueprint for localised curriculum development across a variety of contexts. To support educators in the development of this localised curriculum, an LP Adaptation Guidance document was also created. This document provides guidance on several factors that local curriculum developers should consider before using the LP Curriculum Framework for their own curriculum development process. This article discusses how key areas within the LP Adaptation Guidance have broader relevance beyond education in emergencies, highlighting that the challenges that exist in some of the most deprived educational contexts are relevant to all contexts.

Out of their heads: using concept maps to elicit teacher-examiners’ assessment knowledge
Johnson, M. and Coleman, V. (2020). Out of their heads: using concept maps to elicit teacher-examiners’ assessment knowledge. International Journal of Research & Method in Education (ahead of print).
The Learning Passport Research and Recommendations Report
Cambridge University Press & Cambridge Assessment. (2020). The Learning Passport Research and Recommendations Report: Summary of Findings. Cambridge, UK: Cambridge University Press & Cambridge Assessment.

2019

Towards a method for comparing curricula
Greatorex, J., Rushton, N., Coleman, V., Darlington, E. and Elliott, G. (2019). Towards a method for comparing curricula. Cambridge Assessment Research Report. Cambridge, UK: Cambridge Assessment.

2017

A review of instruments for assessing complex vocational competence

Greatorex, J., Johnson, M. and Coleman, V. (2017). A review of instruments for assessing complex vocational competence. Research Matters: A Cambridge Assessment publication, 23, 35-42.

The aim of the research was to explore the measurement qualities of checklists and Global Rating Scales (GRS) in the context of assessing complex competence. Firstly, we reviewed the literature about the affordances of human judgement and the mechanical combination of human judgements. Secondly, we reviewed examples of checklists and GRS which are used to assess complex competence in highly regarded professions. These examples served to contextualise and elucidate assessment matters. Thirdly, we compiled research evidence from the outcomes of systematic reviews which compared the advantages and disadvantages of checklists and GRS. Together, the evidence provides a nuanced and firm basis for conclusions. Overall, the literature shows that mechanical combination can outperform the human integration of evidence when assessing complex competence, and that a good use of human judgements is therefore in making decisions about individual traits, which are then mechanically combined. The weight of evidence suggests that GRS generally achieve better reliability and validity than checklists, but that a high-quality checklist is better than a poor-quality GRS. The review is a reminder that involving assessors in the design of assessment instruments can help to maximise manageability.

On the reliability of applying educational taxonomies

Coleman, V. (2017). On the reliability of applying educational taxonomies. Research Matters: A Cambridge Assessment publication, 24, 30-37.

Educational taxonomies are classification schemes that organise thinking skills according to their level of complexity, providing a unifying framework and common terminology. They can be used to analyse and design educational materials, analyse students’ levels of thinking, and analyse and ensure alignment between learning objectives and corresponding assessment materials. Numerous educational taxonomies have been created, and this article reviews studies that have examined their reliability; Bloom’s taxonomy in particular was frequently used.

It was found that there were very few studies specifically examining the reliability of educational taxonomies. Furthermore, where reliability was measured, this was primarily inter-rater reliability, with very few studies discussing intra-rater reliability. Many of the studies reviewed provided only limited information about how reliability was calculated, and the type of reliability measure used varied greatly between studies.

Finally, this article also highlights factors that influence reliability and that therefore offer potential avenues for improving reliability when using educational taxonomies, including training and practice, the use of expert raters, and the number of categories in a taxonomy. Overall, it was not possible to draw conclusions about the reliability of specific educational taxonomies, and it seems that the field would benefit from further targeted studies of their reliability.
