Shifting maths and science assessments onto screen: what’s different?

by Joanna Williamson, 22 September 2022

Digital assessment is going from strength to strength. Besides innovative new assessments – including those integrated within learning tools – there’s demand for existing high stakes assessments (such as A level exams) to be available on screen.

In a previous blog we explored why Cambridge University Press & Assessment doesn’t simply migrate all our existing paper tests onto screen. In this blog, we zoom in on the particularities of maths and science.

What’s special about maths and science?

It’s widely recognised that converting maths and science assessments into on-screen versions can be more difficult than converting assessments in other subjects (1). In a recent piece of research, we investigated precisely why this is the case, and what the consequences are for the digital assessments that we want to develop.

In summary, challenges arise because:

  1. The expression of ideas is often not in words (e.g., drawings, equations, graphs)
  2. Many items require learners to do things, and to provide evidence of the ‘doing’

[Image: three examples of written exam questions involving fractions]

These features aren’t unique to maths and science – they pose challenges in other subjects too – but they affect maths and science particularly strongly.

What does the research show?

Thinking and expressing aren’t separate. Shifting a paper-based item into a digital format can alter both what learners think and do in response (e.g., sketching, using trial and error, choosing a calculation), and how learners express their ideas (2). The lack of a seamless way to express mathematical and scientific ideas is one way a digital test platform can change learner activity: for example, if it’s cumbersome for a learner to write down intermediate steps in their working, they are likely to reduce or omit those steps and rely more heavily on mental operations (3, 4).

It’s more than lack of familiarity. Although familiarity is a partial explanation, there are inherent reasons why inputting mathematical and scientific notation in digital environments can be hard. The notation allows us to express complex and creative ideas concisely, but this language requires a very wide range of individual characters and – unlike conventional text – it isn’t linear in construction (5).
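
To make the point about non-linearity concrete, here is a small illustration of our own (it is not drawn from the studies cited above). Handwritten, the quadratic formula is a compact two-dimensional layout; typed in a linear markup such as LaTeX, the same expression becomes a nested structure that has to be assembled from the outside in:

    % nested markup: the fraction contains the square root, which contains the superscript
    x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}

None of this can be produced simply by typing symbols from left to right, which is exactly the property that makes keyboard entry of notation awkward.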

On-screen tools as a solution. On-screen equation editors and menus allow learners to insert mathematical symbols and layouts by pointing and clicking. The drawback of these systems is that they are non-standardised, slower than typing, and cumbersome in comparison with handwriting (1). To input multiple lines of mathematical working in this way, a learner must mentally plan each line, work out the symbols required and the order in which to click to achieve the right arrangement, and then repeat that clicking and typing many times. In other words, some digital assessment solutions are not just unfamiliar but objectively cumbersome.

Different task types as a solution. Items can be adapted to minimise ‘difficult’ inputs in their on-screen format, but this is likely to alter the item demand, what is being assessed, or both (6, 7).

Touchscreens and stylus as solutions. Replacing keyboard and mouse input with a touchscreen and stylus reduces, but does not remove, the difficulties of expressing mathematical and scientific ideas. The reality of writing with a stylus (e.g., needing to ‘hover’ the hand in mid-air) means learners can find it effortful in comparison with pen and paper (8). Tablets also introduce new challenges in terms of screen space, and the comparability and availability of devices.

Impact on performance. There can be significant differences between learner performance on paper-based and digital items, even for items that were apparently straightforward to shift to an on-screen format (9, 10).

Match with teaching and learning. Unless on-screen assessment of maths and science takes place using the same tools and methods used in teaching and learning, there is a high risk of assessing ICT literacy rather than mathematical and scientific skills and knowledge (1, 11).

How is this research taking us forward?

The research helps us to meet the needs of learners and teachers with high-quality assessments: valid and fair assessments that allow learners to show what they can do.

This research is helping us:

  1. Predict which maths and science items may be tricky to shift onto screen.
  2. Adapt items so that they still assess the maths or science that we want to assess.
  3. Evaluate maths and science items once they are on screen.
  4. Inform the feedback we seek from teachers and learners who trial our on-screen assessments.
  5. Explore which platforms and on-screen tools are best for our learners.

Looking ahead, we’re confident that digital assessment holds truly exciting prospects for maths and science. Unlocking this potential, however, will involve thinking innovatively: how can the affordances of a digital environment best combine with the mathematical and scientific skills and knowledge we want to assess? In this blogpost, we’ve tried to explain why simply taking maths and science paper exams and putting them on screens isn’t good enough.

References

1. Ofqual, Online and on-screen assessment in high stakes, sessional qualifications. 2020: Ofqual/20/6723/1.

2. Threlfall, J., et al., Implicit aspects of paper and pencil mathematics assessment that come to light through the use of the computer. Educational Studies in Mathematics, 2007. 66(3): p. 335-348.

3. Johnson, M. and S. Green, On-Line Mathematics Assessment: The Impact of Mode on Performance and Question Answering Strategies. The Journal of Technology, Learning, and Assessment, 2006. 4(5).

4. Logan, T., The influence of test mode and visuospatial ability on mathematics assessment performance. Mathematics Education Research Journal, 2015. 27(4): p. 423-441.

5. Mills, R. and J. Hudson, eds. Mathematical Typesetting: Mathematical and Scientific Typesetting Solutions from Microsoft. 2007, Microsoft Corporation.

6. Shepard, L.A., Commentary on the National Mathematics Advisory Panel Recommendations on Assessment. Educational Researcher, 2008. 37(9): p. 602-609.

7. Green, C., Item types and demand: What is the impact on demand of manipulating item types on Computer Science? Cambridge University Press & Assessment.

8. Aspiranti, K.B., E.E.C. Henze, and J.L. Reynolds, Comparing Paper and Tablet Modalities of Math Assessment for Multiplication and Addition. School Psychology Review, 2020. 49(4): p. 453-465.

9. Fishbein, B., et al., The TIMSS 2019 Item Equivalence Study: examining mode effects for computer-based assessment and implications for measuring trends. Large-scale Assessments in Education, 2018. 6(1).

10. Jerrim, J., et al., PISA 2015: how big is the ‘mode effect’ and what has been done about it? Oxford Review of Education, 2018. 44(4): p. 476-493.

11. Drijvers, P., Digital assessment of mathematics: Opportunities, issues and criteria. Mesure et évaluation en éducation, 2019. 41(1): p. 41-66.
