14 Sep 2021
Workshop: 1.5 CPD hours
This course has been accredited by the CPD Standards Office and equates to 1.5 hours of continued professional development (CPD).
Cambridge Assessment Network has been recognised as a provider of Training Excellence by the Professional Development Consortium and is a trusted source of CPD-accredited learning activities.
Despite the many potential benefits of e-assessment, high-stakes assessments such as GCSEs and A Levels have tended to exploit technology for the administration and marking of tests rather than for the test-taker experience. In this interactive workshop you'll explore why this might be.
This workshop will be useful to practitioners who want to develop their understanding of e-assessment, the challenges and advantages it brings, and how these affect the delivery of e-assessments.
First, we will share definitions of e-assessment and describe the points in the assessment process where technology can be exploited. Second, we will consider the drivers and benefits as well as the barriers and challenges of e-assessment, drawing on examples. Finally, we will share research and experience that can help manage the challenges of introducing e-assessment.
Key learning outcomes
By the end of the workshop you will have:
- Gained an understanding of what e-assessment is and where in the assessment process it can have impact
- Considered the benefits and advantages of e-assessment as well as the barriers and challenges
- Explored learning from research and practice about how to manage the challenges of e-assessment
Sarah Hughes is Research Information Manager at Cambridge Assessment International Education. She has taught in primary schools and been a researcher for UK and international awarding bodies and government agencies. Her particular research interests are the validity of assessment and the impact of research on practice and policy. She currently carries out and applies research to support assessment practitioners and to improve assessment quality.
Sanjay Mistry is Head of On-screen Research at Cambridge Assessment International Education. This key role brings together the strands of academic and market research on on-screen tests and other digital products, the outcomes of which inform the research strategy for Cambridge International's on-screen assessments.
Sanjay joined Cambridge International in 2012 after spending over 10 years teaching in primary schools. Sanjay initially joined the organisation as an Assessment Manager for Primary and Lower Secondary Science. He moved to a Digital Test Design and Authoring Manager role, looking into the way we develop on-screen tests using a learner centred approach, before taking the role of Head of On-screen Research in 2019.
Hye-won Lee is Senior Research Manager at Cambridge Assessment English, where she conducts research projects related to new-generation assessments and digital products. Before joining Cambridge English, she built extensive experience in developing and validating digital assessments at leading organisations including Educational Testing Service. She holds a PhD in Applied Linguistics and Technology from Iowa State University, USA, with a specialisation in technology-enhanced language assessment and argument-based validation.
Andrew Mullooly is a member of the Assessment Quality & Validity Capability team at Cambridge Assessment English. In this role he is Joint Chair of the Speaking Skills Group and has been involved in a number of projects related to Speaking test development and technology. Before working at Cambridge, he lived in Japan for several years and has also taught English in Spain and Italy.
Christopher Hubbard has been working at Cambridge Assessment since 2001, and in that time has been involved in a wide range of assessment contexts and projects across a number of international settings. Some of Chris's special areas of focus have been performance testing, assessment scale development, statistical analysis and test creation, benchmark testing and educational planning, and computer-adaptive testing (CAT).