Tori Coleman

Having joined Cambridge Assessment as a research assistant in the summer of 2016, I have worked on a range of projects relating to educational taxonomies, accessibility of examination papers, construct validity, and curriculum mapping. I am also involved in co-ordinating a series of qualitative research methods workshops and reading groups for colleagues. My current areas of research relate to curriculum evaluation models and the comparability of optional examination questions.

I have a BSc in Psychology from the University of Bath, and an MPhil in Education (Psychology and Education) from the University of Cambridge.

Outside of work, I volunteer with Girlguiding UK and am currently a Brownie leader.

Publications

2017

A review of instruments for assessing complex vocational competence

Greatorex, J., Johnson, M. & Coleman, V. (2017). A review of instruments for assessing complex vocational competence. Research Matters: A Cambridge Assessment publication, 23, 35-42.

The aim of the research was to explore the measurement qualities of checklists and Global Rating Scales (GRS) in the context of assessing complex competence. Firstly, we reviewed the literature about the affordances of human judgement and the mechanical combination of human judgements. Secondly, we reviewed examples of checklists and GRS which are used to assess complex competence in highly regarded professions. These examples served to contextualise and elucidate assessment matters. Thirdly, we compiled research evidence from the outcomes of systematic reviews which compared the advantages and disadvantages of checklists and GRS. Together, the evidence provides a nuanced and firm basis for conclusions. Overall, the literature shows that mechanical combination can outperform the human integration of evidence when assessing complex competence, and that a good use of human judgements is therefore in making decisions about individual traits, which are then mechanically combined. The weight of evidence suggests that GRS generally achieve better reliability and validity than checklists, but that a high-quality checklist is better than a poor-quality GRS. The review is a reminder that including assessors in the design of assessment instruments and processes can help to maximise manageability.
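
To make the idea of "mechanical combination" concrete, here is a minimal Python sketch, not drawn from the article itself: the traits, scale, and scores are invented for illustration. Assessors judge individual traits, and a fixed averaging rule, rather than a holistic human decision, produces the overall result.

```python
# A hypothetical sketch of "mechanical combination": assessors judge
# individual traits, and a fixed rule (here an equally weighted mean)
# combines those judgements into an overall score. Trait names and
# scores are invented for illustration only.

# Each assessor rates each trait on a 1-5 global rating scale.
trait_ratings = {
    "communication": [4, 5],   # ratings from two assessors
    "technique":     [3, 3],
    "safety":        [5, 4],
}

def combine_mechanically(ratings_by_trait):
    """Average each trait across assessors, then average across traits."""
    per_trait = {
        trait: sum(scores) / len(scores)
        for trait, scores in ratings_by_trait.items()
    }
    overall = sum(per_trait.values()) / len(per_trait)
    return per_trait, overall

per_trait, overall = combine_mechanically(trait_ratings)
print(per_trait)   # {'communication': 4.5, 'technique': 3.0, 'safety': 4.5}
print(overall)     # 4.0
```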

On the reliability of applying educational taxonomies

Coleman, V. (2017). On the reliability of applying educational taxonomies. Research Matters: A Cambridge Assessment publication, 24, 30-37.

Educational taxonomies are classification schemes that organise thinking skills according to their level of complexity, providing a unifying framework and common terminology. They can be used to analyse and design educational materials, analyse students’ levels of thinking, and analyse and ensure alignment between learning objectives and corresponding assessment materials. Numerous educational taxonomies have been created, and this article reviews studies that have examined their reliability; Bloom’s taxonomy in particular was frequently used in these studies.

It was found that there were very few studies specifically examining the reliability of educational taxonomies. Furthermore, where reliability was measured, this was primarily inter-rater reliability with very few studies discussing intra-rater reliability. Many of the studies reviewed provided only limited information about how reliability was calculated and the type of reliability measure used varied greatly between studies.

Finally, this article also highlights factors that influence reliability and that therefore offer potential avenues for improving reliability when using educational taxonomies, including training and practice, the use of expert raters, and the number of categories in a taxonomy. Overall, it was not possible to draw conclusions about the reliability of specific educational taxonomies, and it seems that the field would benefit from further targeted studies of their reliability.
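
For readers less familiar with the statistics involved, the following is a minimal Python sketch of Cohen's kappa, one commonly reported inter-rater reliability measure, applied to two hypothetical raters assigning Bloom's taxonomy levels to the same exam questions. The labels and data are invented, not taken from the reviewed studies.

```python
# A minimal sketch of Cohen's kappa for two raters assigning Bloom's
# taxonomy levels to the same set of exam questions. All data invented.
from collections import Counter

rater_a = ["remember", "apply", "analyse", "apply", "evaluate", "remember"]
rater_b = ["remember", "apply", "apply",   "apply", "evaluate", "understand"]

def cohens_kappa(labels_a, labels_b):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each rater's marginals."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in categories) / (n * n)
    return (p_o - p_e) / (1 - p_e)

print(round(cohens_kappa(rater_a, rater_b), 3))  # 0.556 for this toy data
```

Kappa corrects raw percentage agreement for the agreement expected by chance, which is one reason results are hard to compare across studies that report different reliability measures.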

Research Matters

Research Matters is our free biannual publication, which allows us to share our assessment research in a range of fields with the wider assessment community.