||03 - 17 Mar 2021
This interactive series of three workshops will introduce the basic concepts behind item response theory (IRT) and give clear examples of when it is useful. It will include practical activities where participants will review outputs from IRT models and develop an understanding of how to interpret results.
These workshops have been designed for assessment professionals with an interest in empirical methods of assessing item quality, as well as those interested in data-driven approaches to test assembly. You should have some existing familiarity with classical item statistics (for example, facilities and discriminations).
If you have little or no knowledge of item response theory, this series will provide a comprehensive overview of the principles and practice. Alternatively, if you are more experienced and looking to refresh your knowledge, this series will deepen your understanding and boost confidence in your practice.
||03 March 2021 | 12:30 - 14:00 (UK time)
||Introduction and how to interpret IRT analysis
||10 March 2021 | 12:30 - 14:00 (UK time)
||How to calibrate items
||17 March 2021 | 12:30 - 14:00 (UK time)
||Using IRT to support test construction
This series of three workshops will provide an introduction to the basic concepts of item response theory (IRT) and discuss the different IRT models available, along with the situations in which they may have advantages over simpler classical methods of analysing item-level data.
By the end of the three weeks, you will have reviewed outputs from IRT models and developed an understanding of how to interpret results.
- Week 1 - How to interpret IRT results to assess the quality of items
- Week 2 - How to calibrate items taken by different sets of candidates onto a common difficulty scale
- Week 3 - How IRT can help facilitate automated test assembly
In addition to the workshops, you will take away resources to support you in applying the learning to your own context and developing your future assessment practice.
Key learning outcomes
By the end of the three sessions you will have:
- Gained an understanding of the different types of IRT models and how they can be useful
- Gained practical experience of interpreting the results of different IRT models
- Learned how IRT can be used to assist in test assembly
Dr Tom Benton has worked in educational statistics for almost 20 years and is Principal Research Officer in the Assessment Research and Development Division at Cambridge Assessment. Prior to joining Cambridge Assessment he worked for the National Foundation for Educational Research (NFER) as an expert in the field of statistical analysis. His work at NFER included test development, survey research, programme evaluation, benchmarking and international comparisons.
Tom has been closely involved with a number of large-scale national and international surveys of young people, as well as the development and standardisation of numerous educational tests. He has co-authored a report on the subject of assessing reliability which has been published by Ofqual, and has practical experience of developing reliable ways of measuring attitudes and abilities of interest across a wide range of different subject areas including academic ability, community cohesion, self-efficacy, enjoyment of school, self-confidence and political opinions.