Simon Child, Head of Assessment Training for the Cambridge Assessment Network, explains how knowledge of key assessment concepts can support teachers to make evidence-based decisions in order to optimise the use of technology in education.
It is now a truism to state that technology is becoming increasingly influential in the classroom.
The introduction of technology in education has introduced new possibilities for transforming teaching practice. The fast-paced nature of change, however, also presents significant challenges for educational practitioners, as they have to navigate the choices available to them. This is particularly true when considering the role of technology in supporting good assessment practice, which is underpinned by complex concepts such as validity, reliability and fairness.
Technology promises positive and sustainable change in educational assessment for both formative and summative purposes. These promises include, but are not limited to, the following:
- Increased precision of assessment, by reducing the gap between assessment purposes, design and outcomes
- A wider range of assessable constructs, for example so-called 21st century skills
- Effective use of data, for example to help transitions between educational stages
- Increased fairness, equity and social justice
There is an ever-increasing range of flexible assessment solutions and learning environments that make bold claims about improving classroom interactions.
However, whilst flexibility is often a good thing, it means that educational practitioners are responsible for optimising the use of technology in their assessment context. This has created an urgent need to support teaching practitioners in justifying their assessment-related decisions.
At Cambridge Assessment, we believe that professional learning in assessment should focus on developing principled knowledge and skills in relation to key concepts, with the overall aim of empowering educational practitioners to build their reflective and decision-making capacities.
This knowledge supports practitioners in reflecting on how planned technological innovations in assessment can best align with different assessment purposes.
Threat to validity
To illustrate this idea, let me take you through a worked example. One of the key concepts in assessment is what is known as a threat to validity. A threat to validity is any aspect of the development, process or delivery of an assessment that means the knowledge, skills or understanding you are interested in is not being measured precisely. There could be many reasons for this, but we generally categorise two main types of ‘threat’:
1 - Factors affecting student assessment performance that are not linked to their ability in the area of interest (what we call ‘construct-irrelevant variance’).
These factors include the testing environment, item bias and marking reliability, amongst many others.
2 - Factors related to the design of the assessment that mean that the knowledge, skills and understanding we are interested in are not being covered fully (what we call ‘construct underrepresentation’).
An (extreme) example of construct underrepresentation would be if we were interested in students’ abilities in converting fractions to decimals, but only asked questions related to multiplication and division.
A critical eye on technology
With a firm understanding of the concepts of ‘construct-irrelevant variance’ and ‘construct underrepresentation’ it is possible to look at technological innovations in assessment with a critical eye.
To take a straightforward example, think about a teacher who is considering changing a classroom topic test from a paper-based assessment to an on-screen equivalent. How does this remove (or perhaps introduce) threats to validity?
The move to on-screen testing could increase the precision of the assessment, reducing construct-irrelevant variance.
Data from previous versions of the test could be used, for example, to check the quality of the items and content coverage. The move on-screen may also increase the potential to use the data from the test to track the progression of students over time.
However, there may be new factors introduced that influence student performance on the test, such as typing speed, familiarity with using computers, on-screen reading accessibility and so on. These could be new ‘threats to validity’ that should be carefully considered before technological innovations are implemented on a larger scale.
Giving practitioners the tools to understand, critique, and justify their assessment-related decision-making is a key element in supporting the development of effective classroom practice.
Practitioners should be empowered to use their acquired knowledge of key assessment concepts to develop new insights in their working contexts. This will support them to make evidence-based decisions, in order to optimise the use of technology in education.
Simon Child is Head of Assessment Training at the Cambridge Assessment Network and co-course director of the Postgraduate Certificate in Educational Assessment.