Dr Tom Benton
07 Oct 2020
The Triangle Building, Shaftesbury Road
This course will provide an introduction to item response theory (IRT) and clear examples of when it is useful. It will include practical activities in which participants use interactive point-and-click tools, built with the software package R, to fit and interpret IRT models, as well as experiment with using IRT to help with test construction.
“The course was an excellent overview of IRT, from the theoretical underpinnings to practical application.”
We will introduce the basic theory of IRT, including the different IRT models that are available and the situations in which they may have advantages over simpler classical methods of analysing item-level data. Participants will also learn how to fit IRT models and how to interpret the results to assess the quality of items. We will then consider two practical applications of IRT: calibrating items taken by different sets of candidates onto a common difficulty scale, and automated test assembly. In both cases, the theory behind the IRT approach will be followed by practical activities in which participants learn how to perform these tasks themselves.
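To give a flavour of the theory covered, the simplest IRT models describe the probability of a correct response to an item as a function of a candidate's ability. A minimal sketch in Python (the course itself uses R; the parameter values below are purely illustrative) of the two-parameter logistic (2PL) item characteristic function, of which the Rasch (1PL) model is the special case with discrimination fixed at 1:

```python
import math

def p_correct(theta, a, b):
    """Probability of a correct response under the 2PL model:
    P(theta) = 1 / (1 + exp(-a * (theta - b))),
    where theta is ability, a is item discrimination and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative item with discrimination a = 1.5 and difficulty b = 0.0.
# A candidate whose ability matches the difficulty answers correctly
# half the time; higher-ability candidates do better.
for theta in (-2, 0, 2):
    print(f"theta={theta:+d}  P={p_correct(theta, 1.5, 0.0):.3f}")
# -> theta=-2  P=0.047
#    theta=+0  P=0.500
#    theta=+2  P=0.953
```

Fitting an IRT model amounts to estimating the `a` and `b` parameters for every item from observed response data, which is what the point-and-click tools in the practical sessions do.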
All practical activities will be conducted using interactive point-and-click tools. As such, this version of our “Using item response theory in practice” event is aimed at participants who wish to conduct statistical analyses whilst avoiding any form of programming. No prior experience of using R is required to participate in this event.
Participants will need to bring their own laptop to this session with the R software pre-installed.
Key learning outcomes
- Understand the different types of IRT models and how they can be useful.
- Gain practical experience of fitting different IRT models and interpreting the results.
- Understand how IRT can be used to assist in test assembly.
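As an illustration of the last outcome, one common IRT-based approach to test assembly selects items so that the test is maximally informative at an ability level of interest. The sketch below (in Python rather than the course's R, with a hypothetical item bank) uses the Fisher information of a 2PL item, a² · P · (1 − P), and a simple greedy selection; real assembly tools typically add constraints on content coverage and test length:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def assemble(items, target_theta, test_length):
    """Greedily pick the items that are most informative at the target ability."""
    ranked = sorted(items,
                    key=lambda it: info_2pl(target_theta, it["a"], it["b"]),
                    reverse=True)
    return ranked[:test_length]

# Hypothetical calibrated item bank: a = discrimination, b = difficulty.
bank = [{"id": i, "a": a, "b": b}
        for i, (a, b) in enumerate([(0.8, -1.5), (1.6, 0.1), (1.2, 0.0),
                                    (0.5, 2.0), (1.4, -0.2), (1.0, 1.0)])]

# Build a 3-item test targeted at average ability (theta = 0): the highly
# discriminating items with difficulty near 0 are chosen.
test = assemble(bank, target_theta=0.0, test_length=3)
print([it["id"] for it in test])
# -> [1, 4, 2]
```

Highly discriminating items whose difficulty sits close to the target ability carry the most information there, which is why items 1, 4 and 2 are selected ahead of the very easy or very hard ones.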
This course is intended for assessment professionals with an interest in empirical methods of assessing item quality, as well as those interested in data-driven approaches to test assembly. Participants will be expected to have some existing familiarity with classical item statistics (for example, facilities and discriminations).
Dr Tom Benton has worked in educational statistics for almost 20 years and is Principal Research Officer in the Assessment Research and Development Division at Cambridge Assessment. Prior to joining Cambridge Assessment he worked for the National Foundation for Educational Research (NFER) as an expert in the field of statistical analysis. His work at NFER included test development, survey research, programme evaluation, benchmarking and international comparisons.
Tom has been closely involved with a number of large-scale national and international surveys of young people, as well as the development and standardisation of numerous educational tests. He has co-authored a report on assessing reliability, published by Ofqual, and has practical experience of developing reliable ways of measuring attitudes and abilities across a wide range of areas, including academic ability, community cohesion, self-efficacy, enjoyment of school, self-confidence and political opinions.