Item response theory in practice

Date: 05 Jun 2024 - 26 Jun 2024
Venue: Online
Type: Workshop series
Fee: £325 (Members: £292.50)

Join Cambridge Assessment Network as a member and get a 10% discount on all courses.


Bookings close 4 June 2024 at 11am (UK time)

This workshop series is accredited for continuing professional development (6 CPD hours), with certification on successful completion.

Item response theory (IRT) is a popular methodological approach for modelling response data from assessments to support a range of applications.
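To give a flavour of what such a model looks like, here is a minimal sketch of the two-parameter logistic (2PL) model, one of the most widely used IRT models. It computes the probability that a candidate of a given ability answers an item correctly; the ability and item parameter values below are hypothetical, chosen purely for illustration.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item response function: the probability
    that a candidate with ability theta answers correctly an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical items: one easy and discriminating, one hard and less discriminating
for theta in (-2.0, 0.0, 2.0):
    print(theta,
          round(p_correct(theta, a=1.5, b=-1.0), 3),
          round(p_correct(theta, a=0.8, b=1.5), 3))
```

The probability of success rises with ability for both items, but more steeply for the item with the higher discrimination parameter.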

This interactive series of four online workshops from Cambridge will introduce the basic concepts behind IRT and give clear examples of when it is useful. It will include practical activities where participants will review outputs from IRT models and develop an understanding of how to interpret results.

These workshops have been designed for assessment professionals with an interest in empirical methods of assessing item quality, as well as those interested in data-driven approaches to test assembly. You should have some existing familiarity with classical item statistics (for example, facilities and discriminations).
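As a refresher, both classical statistics can be computed from a scored response matrix in a few lines. The sketch below uses hypothetical 0/1 data and the simple (uncorrected) point-biserial correlation as the discrimination index.

```python
import numpy as np

# Hypothetical scored responses: rows = candidates, columns = items (1 = correct)
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
])

# Facility: the proportion of candidates answering each item correctly
facility = responses.mean(axis=0)

# Discrimination: correlation of each item with the total score
# (uncorrected, i.e. the item's own score is included in the total)
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total)[0, 1]
    for j in range(responses.shape[1])
])

print("facility:", facility)
print("discrimination:", discrimination.round(2))
```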

If you have little or no knowledge of item response theory, this series will provide a comprehensive overview of its principles and practice. If you are more experienced and looking to refresh your knowledge, it will deepen your understanding and boost confidence in your practice.

“I had the pleasure of attending the Assessment Practitioner Workshop: Item Response Theory in Practice offered by the prestigious Cambridge Assessment Network. The meticulously crafted course content provided a comprehensive understanding of IRT and its applications in assessment.”
Aishwarya Jaiswal, Senior Research Analyst, Psychometric Assessments, Mercer Mettl and Doctoral Candidate, Banaras Hindu University, India

Workshop dates

Week 1 | 05 Jun 2024 | 12:30 - 14:00 (UK time) | Why use IRT and what does it do?
Week 2 | 12 Jun 2024 | 12:30 - 14:00 (UK time) | Calibration and interpretation of IRT models
Week 3 | 19 Jun 2024 | 12:30 - 14:00 (UK time) | IRT to control and compare test difficulty
Week 4 | 26 Jun 2024 | 12:30 - 14:00 (UK time) | Test construction and computer adaptive tests

Course outline

This series of four workshops will introduce the basic concepts of item response theory (IRT), discuss the different IRT models available, and examine the situations in which they may offer advantages over simpler classical methods of analysing item-level data.

By the end of the four weeks, you will have reviewed outputs from IRT models and developed an understanding of how to interpret the results.

  • Week 1 - We will cover the purposes of IRT, where it can be useful and how results can be interpreted
  • Week 2 - We will show you how to calibrate items taken by different sets of candidates onto a common difficulty scale
  • Week 3 - We will explore how IRT can be used to assess overall test difficulty and to support test equating
  • Week 4 - We will explore how IRT can help facilitate automated test assembly, including the use of computer adaptive tests (see the sketch after this list)
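
To illustrate the idea behind computer adaptive testing, the sketch below selects, from a hypothetical calibrated item bank, the item that is most informative at the candidate's current ability estimate, using the standard Fisher information of a 2PL item. Real adaptive testing systems add content and exposure constraints on top of this basic rule.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Hypothetical calibrated bank: (discrimination a, difficulty b) per item
bank = [(1.2, -1.0), (0.9, 0.0), (1.5, 0.5), (0.7, 1.8)]

theta_estimate = 0.4  # provisional ability estimate after the items so far
best = max(range(len(bank)),
           key=lambda j: item_information(theta_estimate, *bank[j]))
print("administer item", best,
      "(information =", round(item_information(theta_estimate, *bank[best]), 3), ")")
```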

In addition to the workshops, you will take away resources to support you in applying the learning to your own context and developing your future assessment practice.

Key learning outcomes

By the end of the four sessions you will have:

  • Gained an understanding of the different types of IRT models and how they can be useful (the most common response functions are shown after this list)
  • Gained practical experience of interpreting the results of different IRT models
  • Learned how IRT can be used to assist in test assembly
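
For reference, the response functions of the three most common dichotomous IRT models are, in standard notation (with candidate ability theta and item difficulty b, discrimination a and pseudo-guessing c):

```latex
\begin{aligned}
\text{1PL (Rasch):}\quad & P(X = 1 \mid \theta) = \frac{1}{1 + e^{-(\theta - b)}} \\
\text{2PL:}\quad & P(X = 1 \mid \theta) = \frac{1}{1 + e^{-a(\theta - b)}} \\
\text{3PL:}\quad & P(X = 1 \mid \theta) = c + \frac{1 - c}{1 + e^{-a(\theta - b)}}
\end{aligned}
```

Each model adds one item parameter to the last: the Rasch model varies only difficulty, the 2PL adds discrimination, and the 3PL adds a lower asymptote to allow for guessing.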

Course trainer

Tom Benton - Principal Research Officer

Dr Tom Benton has worked in educational statistics for almost 20 years and is Principal Research Officer in the Assessment Research and Development Division at Cambridge Assessment. Prior to joining Cambridge Assessment he worked for the National Foundation for Educational Research (NFER) as an expert in the field of statistical analysis. His work at NFER included test development, survey research, programme evaluation, benchmarking and international comparisons.

Tom has been closely involved with a number of large-scale national and international surveys of young people as well as the development and standardisation of numerous educational tests. He has co-authored a report on the subject of assessing reliability which has been published by Ofqual, and has practical experience of developing reliable ways of measuring attitudes and abilities of interest across a wide range of different subject areas including academic ability, community cohesion, self-efficacy, enjoyment of school, self-confidence and political opinions.
