On-screen assessment - opportunities and considerations

by Guest Blogger, 07 October 2020

On-screen testing had already been an area of increasing interest for assessment experts before the events of 2020. The challenge of making assessments accessible to disparate cohorts of students has only grown more pressing with the impact of the coronavirus pandemic. Victoria Crisp and Stuart Shaw have researched the differences between setting questions on paper and on screen, and consider what the implications might be for the evolution of assessment.

Where both on-paper and on-screen testing are available, it is currently normal practice for the paper questions to be written first. These are then transferred to the on-screen testing programme, but it is not always possible to transfer questions directly: some may have to be adapted and others replaced because of software or user constraints. This creates additional work and potentially raises comparability issues relating to the equivalence of the constructs assessed, the level of demand and standards.

What happens if questions were written on-screen first?

So, what happens if the process were reversed and questions were written on-screen first? That was the focus of our research. We recruited participants to write new maths and science questions using an online platform, based on content and assessment criteria constructed for the study, in the same way that any new assessment might be developed.

Our participants were unfamiliar with the software used, and for some of them this acted as a barrier to question development. Some appeared to avoid drafting question types that they were not confident they could create in the software, potentially reducing coverage of the curriculum or of the assessment objectives. One participant reported writing shorter questions than normal and not developing questions by adding further parts in the way that they usually would.

There was also some evidence that some setters were writing fewer creative questions and, potentially, producing lower-quality questions. While participants did sometimes explore innovative ways to assess concepts, the restrictions they experienced often led them to make compromises in their question design that they were not comfortable with. This suggests that questions which tap into certain parts of learning may simply not be developed. Over time, if setters can no longer create certain kinds of questions that they would usually write, this could adversely affect content coverage and construct representation and, ultimately, setter retention.


It should be noted that the participants had not used the platform before the research, and it is possible that some of the challenges would be reduced through appropriate training and increased experience. Nonetheless, our research indicates that care would be needed in moving to on-screen setting ahead of on-paper setting, to ensure comparability over time and representation of the constructs in the curriculum or syllabus. One possible way forward would be to ask setters to record question ideas that they could not implement, and then to work with software developers to make appropriate revisions.

It’s clear that creating reliable and accurate on-screen assessment will only become increasingly important for the future of education, and this research gives food for thought to those involved in its development.

You can read the full research in the next edition of Research Matters, which will be published on 14 October.

Sign up now to receive Research Matters.


Research Matters is our free biannual publication, which allows us to share our assessment research, across a range of fields, with the wider assessment community.