Paul Muir, new ambassador for The Assessment Network, offers an insightful look into the evolving landscape of education and assessment, exploring the pressing challenges in assessment design today and highlighting the role that professional communities play in shaping the future of fair and inclusive assessments.
Could you tell us about your career path to date and what your focus is in your current role?
"These questions always make me feel old now!
This year is my 25th year working in education and assessment, all of which started at Cambridge Assessment (or UCLES as it was back then) in the year 2000, working in what was then known as Education Intelligence. I had various really interesting roles at UCLES, in both the international (CIE) and UK (OCR) boards, until leaving for the regulator (QCA) in 2008 to lead on the National Diploma programme.
From late 2008, my career took me to a newly formed Education Practice at PA Consulting, where I spent the best part of 7 years working on education and assessment reform projects in Saudi Arabia, Qatar and the UAE, as well as big assessment-based transformation projects in the legal and education sectors in the UK, which gave me my first real exposure to the benefits and innovation of technology in assessment.
This work led me to join the British Council as Head of Technology Enabled Assessment in 2015, where I led a team tasked with helping awarding organisations transition from pen-and-paper examinations to digital assessment in over 140 countries, including IELTS moving online and the first remote-proctored tests for the British Council.
In 2023 I pivoted to working in the Ed-Tech sector, and I'm now at risr/, the world's leading medical assessment and learning company, as Chief Customer Officer, responsible for our Thought Leadership, Industry Engagement, Consultancy and Community Engagement.
A significant part of my role, alongside my ‘day job’ at risr/, is the time I dedicate to the assessment community. I’m currently the Vice-Chair of the e-Assessment Association and a Director and incoming Board Chair of the Association of Test Publishers (ATP), where I’m also the chair of the ATP Test Security Committee and a member of two AI working groups.
As a result, I’m a frequent speaker at conferences around the world on everything from Test Security and AI (from an implementation and ethics perspective) to various ‘angles’ on digital assessment.
And now I’m thrilled to take on the role of Ambassador for The Assessment Network at Cambridge. It feels like coming home!"
What do you see as the biggest challenges for people working in assessment design and delivery today? And how do you think we can mitigate those challenges?
"One of the biggest challenges today in assessment design and delivery is maintaining relevance and fairness in increasingly diverse and digitally-dominated learning environments.
Learners bring varied backgrounds, access levels and learning preferences, while technology - including AI of course - reshapes how, where, and when learning and assessment happen. Designing assessments that are both valid and inclusive in this context is an increasingly complex task.
Another significant challenge is balancing innovation with evidence. There's growing interest in adaptive testing, AI-assisted ‘everything’, and data-driven insights, but these must be underpinned by robust psychometrics and ethical oversight to ensure reliability, transparency, and, most importantly, trust. It’s not dramatic to say trust is going to make or break AI use and acceptance in our sector.
Delivery logistics also remain a concern, especially in high-stakes contexts. Remote and hybrid delivery models can present risks around access, security, and standardisation that require thoughtful mitigation, and this is where, globally, we see the biggest challenges around the digital divide. AI and digital assessment have a huge opportunity to narrow the digital divide globally, but without careful design they could end up exacerbating the problem.
To address these challenges, we must invest in professional development that strengthens both technical and pedagogical understanding across the assessment community. Conveniently for me, this is where the work of organisations such as The Assessment Network at Cambridge, the e-Assessment Association and the Association of Test Publishers comes in, with collaboration to share research, tools, and lessons learned."
What were your takeaways from this year's Assessment Horizons conference?
"I know some will roll their eyes, but AI and more AI!
Yet it’s important we continue to talk about it, as the pace of change within the sector is phenomenal, and the AI tools we use, and that our learners have access to, are improving at an ever more rapid pace.
I thought the conference did a great job of covering this from an assessment design perspective, with a standout session from The Open University and NCFE whose research showcased how far AI tools still have to go, but also, with that research completed in 2024, how the newer models released since are already challenging those findings.
We also saw the tables flipped and the impact of AI viewed through the student lens thanks to University College Dublin, which is something we don’t see often enough at conferences and need to hear more of!
The variety of breakout sessions also stood out for me, and it’s a shame I couldn’t split myself into three, as the parallel sessions on topics as wide-ranging as AI use in marking, what parents need to know about assessment, the benefits of digital assessment and oracy in assessment were all different and brilliant.
Finally, as always with conferences in our sector, it’s the people who make it what it is, and the chance to network and learn from our peers cannot be beaten."
One of the questions around digital advances in assessment is how do we ensure learners don't get left behind with the 'digital divide'? What are your thoughts on this?
"The digital divide remains one of the most pressing equity challenges in education, and assessment is no exception. As digital tools (including AI) and platforms become more central to how assessments are designed, delivered, and experienced, we must actively guard against reinforcing existing inequalities.
Access is the most visible aspect: some learners lack reliable internet, suitable devices, or private, quiet spaces in which to take assessments. But the divide is also about digital literacy, confidence, and culturally relevant content. Without inclusive design and support, digital assessments can disadvantage the very learners they aim to serve.
As an assessment community, we must also avoid assuming digital means automatic improvement. High-quality digital assessment demands just as much rigour in validity, reliability, and fairness as traditional formats, perhaps more so, given the risks of algorithmic bias or overreliance on data proxies.
As mentioned in response to the earlier question, AI and digital assessment, designed and implemented correctly, have the potential to actually narrow the divide in assessment and bring about a more equitable testing world. Offline-compatible formats, mobile-first designs, and flexible timing options are just some of the ways we can help mitigate access barriers.
Finally, investment in digital skills, not just digital infrastructure, is key. That means supporting learners, educators and test owners alike to develop the confidence to engage meaningfully with digital assessment tools.
And that is what The Assessment Network at Cambridge and our other assessment community groups are very much there to do..."
This discussion first appeared in Perspectives on Assessment, the Cambridge Assessment Network member newsletter, which features key voices from the assessment community along with other member-exclusive content. Would you like to feature in a future Member Spotlight? We'd love to hear from you - get in touch.