Alana: [00:00:00.93] Hello and welcome back to the Cambridge Assessment podcast. My name's Alana Walden and I'm here to introduce the latest in our series of podcasts guest hosted by our colleagues in the Cambridge Assessment Network. In this episode, we're joined by Tony Emmerson, senior deputy head at the English College in Prague, and Mark Fraser, teaching, learning and assessment lead at CEM, to explore the differences between testing and assessment and how assessment data can be utilised in schools to support learning, particularly after the disruption of the pandemic.
Penelope: [00:00:37.01] Welcome Mark and Tony, thanks so much for joining me today. Mark, if you'd like to start off and tell me a bit about who you are and what you do.
Mark: [00:00:46.78] I'm Mark Fraser. I'm teaching and learning lead at CEM. I've been with CEM for about six years now. And prior to that I came from the world of primary education. I was a teacher and head teacher for about 17 years in schools in the north east of England. So at the moment, I work with the team that develops assessments and guidance and feedback materials for teachers in the UK and internationally.
Penelope: [00:01:15.01] And Tony, please tell us a bit about what you do.
Tony: [00:01:19.42] My name is Tony Emmerson. I'm the senior deputy head at the English College in Prague; we're an HMC school that teaches very much the British model. Before working out here (and I've been here for 15 years now), I used to teach for the Girls' Day School Trust in the UK. I'm responsible for assessment, curriculum, data, those sorts of things, as well as day to day operations. So I would say that my specialism is working with these tools in a way that colleagues and students are going to find as effective as possible.
Penelope: [00:01:59.23] And Mark, could you tell me a bit about the assessments that CEM offer for schools and for what purposes?
Mark: [00:02:08.11] Certainly. We have a wide range of assessments. I'll speak mainly about the computer based systems that we have, and they range from a system we call BASE, which you can use for children in reception class, basically when they're coming into school. It's a really good way of finding out what children know and can do so that teachers can target the resources and the support that these children need. And we move up through the age range, again all computer based systems here, with our quirky little names for them. So next we have INCAS, the innovative computer adaptive system, which is a primary assessment suitable for children between the ages of six and 11. We look at basic knowledge and skills in literacy, numeracy and non-verbal reasoning, and we build up a holistic view of what a child is able to do. One of the unique things about INCAS is that it's a great way of benchmarking children against age related expectations. It's a good way for teachers to see who's doing well, who needs to be challenged and pushed on, and who needs a bit of support, so it's a useful classroom tool. Schools typically use this every year or every two years, and just build up that profile as children progress through the primary phase.
Mark: [00:03:36.92] We also have our secondary systems, which are a little bit different. There's a family of systems: we have MidYIS, the Middle Years Information System, that's suitable for 11 to 14 year old students; Yellis, the Year 11 Information System, that's, you know, for the top end of the secondary range, ages 15 to 16; and then ALIS, the A Level Information System, for students about 16 to 18 years old. Those three systems tend to be fairly forward looking. In the first instance, students take an assessment, and based on the sea of data that we have, we know how they've performed in relation to past cohorts, because we've got a fabulous database which tells us what students in the past did in terms of the assessment and what their outcomes were at GCSE or A level. So you've got a very robust predictive model. It's a good way of looking forward, predicting what likely outcomes may be, but they are just predictions. And also, if that information, as I've said, GCSE and A level results, is fed back into the system, then that gives schools the ability to calculate value added. So in a nutshell, there's a whole suite across the phases, from early years up to the age of 18.
Penelope: [00:05:00.44] And Tony, can you tell me a bit about the assessments that you have utilised in your school?
Tony: [00:05:07.85] We're a secondary only school, so the ones that are most useful to us are MidYIS, Yellis and the IBE, the International Baccalaureate evaluation that Mark was referring to, and we find those absolutely invaluable when students join our school and also at regular check points throughout. Everyone does the MidYIS if they join in Key Stage 3; at the start of what is our Key Stage 4, we put everyone through Yellis; and then everyone does the IB version of ALIS before they start on their sixth form programme. We then combine and compare this information with our own assessments. We have entrance exams in English, mathematics and non-verbal reasoning, and this can be really good for producing a rich picture of a student's abilities, because we can compare how they do in our English exam, which tests a variety of skills, with the very vocabulary focused testing that you get from CEM. And this is very good for showing, for instance, when a student has what we would refer to as superficial fluency: they can write a nice little sentence, they can hold a nice cheery little conversation with you, but their vocabulary isn't so deep. So by using this and a few other reading type assessments like Accelerated Reader, we try and create this really 3D profile of a child's strengths and weaknesses on the way in.
Penelope: [00:06:50.29] And Mark, could you tell me a bit about how we distinguish between assessments and testing?
Mark: [00:06:59.44] Yes, a very interesting and almost philosophical question, I think, and I suppose it's a matter of semantics in some ways. It's something I've looked into in terms of my academic background, and I'm yet to come up with a cast-iron definition. I think sometimes these words are, unhelpfully, used quite interchangeably. And I know through experience that the term test is a far more emotive word and carries with it some negative connotations, certainly perhaps the notion of passing or failing, whereas assessment is a more rounded and holistic approach, perhaps, to finding out about your students and understanding what they know and can do and what they need to focus on next in order to move on. So I suppose CEM tends to refer to our computer based systems as assessments, whereas the more formal, paper-based entrance products we produce for schools that are selective and need to choose pupils to enter their school tend to be referred to as tests. In some ways, I think of a test as being more of a summative tool, measuring against a particular threshold of understanding, and I think of assessments as being more formative, providing actionable information to teachers and learners. But what that doesn't get around, I suppose, is that we need to be very concrete and very clear about the intended purposes of whatever the instrument is and how that information is used. But again, I'm very happy to talk about that with anybody who's interested. It is a fascinating subject for people whose assessment interests are fairly niche.
Penelope: [00:09:06.42] And Tony, I wonder if you could tell me a bit about how often you're assessing the students at your school?
Tony: [00:09:13.56] I think we have another philosophical question here. I'm sorry, you're probably going to find it quite hard to get a totally straight answer from any of us on any of these things, because I would raise the question: when are we not assessing them? The process of assessing them, of asking how they are doing, I view, and expect, to be almost continual. There are formalised points: we have three progress checks during the year and the end of year reports, where grades are put into the information system and action is taken. But assessment is taking place on a day to day basis, through either major pieces of work or end of topic testing. I'm a big believer in having major pieces of work that mirror what the IB diploma coursework will be like, so they are developing their skills as they go along. These are skills that aren't immediately apparent if you give them a conventional test, but they are very valuable to the children as learners. Our approach is looking for, as I mentioned before, creativity, analysis and evaluation, and these are things that can be assessed through extended pieces of work, through conversations. I would argue that you can assess, and should be assessing, anything where you can verify the integrity of the information that you get. And of course, this has been an issue with remote learning. They can cheat. They can cheat very easily. You've got no idea what's going on outside of that webcam, and unless you're going to request their parents put CCTV all over the room from every angle, this conventional model of controlled conditions no longer exists. So that's where getting them to produce work that would be very hard to either plagiarise or crib from somewhere else provides the route to assessing accurately. We've been forced, because of the lockdown, because of the closure, to open our minds to the way that we assess, and I will be slightly provocative and say that I think that it's a good thing.
So these formalised assessment points, I think, should always come at times during the academic year where the information can be used to stage interventions if necessary. Which is why, actually, end of year reports, though they're nice things for the parents to keep, and they require a lot of effort from teachers, are of limited use in terms of intervention, because it's telling you what they knew at the end of the spring, and that may bear very little relation to what they know on the 1st of September. So these points are spread throughout the year as the formal points where it's recorded, but assessment, I would argue, should be going on all the time. I think that if you ask a teacher how the pupils in their class are doing, they will quite often be able to give you, spontaneously, a rundown of how each child is doing and what their strengths and weaknesses are. If you ask them a more formal question, can you give me an assessment of each pupil, then suddenly, like the difference between the word test and the word assessment, it starts taking on a bit more of a connotation. But the question, how is that child doing? You'll find a lot of teachers can give you a fairly good answer to that. And what that child would do in a test will only back up the teacher's opinion rather than shape it.
Penelope: [00:13:07.22] And speaking of remote learning and teaching, Mark, how do you think assessments and student data have been able to facilitate this period of remote learning?
Mark: [00:13:20.54] Well, last year we certainly had a lot of interest internationally at the height of the crisis, and we did a lot of work in opening up our systems so that home access could be facilitated, so teachers could find out more about their students, certainly because they were suddenly put on the spot. I think nobody at the start of that academic year would have imagined the way it all panned out. In the end, teachers were suddenly required to provide teacher assessed grades, perhaps for, you know, O levels, GCSEs, IGCSEs, IB, A level, all across the range of qualifications. And we hope our systems acted as a useful item in the toolkit that teachers were able to access quite quickly to pull together a body of evidence on which to base their predicted grades. That said, we always advocate that information from CEM systems is used alongside all of the other sources of evidence that schools and teachers have relating to student performance. Tony has just mentioned this: these sources of evidence are incredibly valuable and must not be overlooked. I agree completely with what Tony went on to say in terms of last year. I think the circumstances revealed quite a few weaknesses in the traditional examination system, and teachers were called on to act quite quickly, and if we were able to help, that was great. But it does leave the question of what the future looks like wide open, and, you know, this ongoing and holistic approach to assessment must surely be considered. One of my favourite academics, if you like, is van der Vleuten in the Netherlands, who has a really nice take on holistic assessment. They've modelled a holistic approach for mainstream education based on practice in the medical world, and certainly one of the pillars of their work is that no assessment does it all.
So building up this broad picture is essential, and hopefully CEM assessments can be one tool in the toolkit which would allow you to do that.
Penelope: [00:15:48.22] And, Tony, have you found that your historical data on your students has been able to support the students getting back on track after this disruption?
Tony: [00:16:01.73] I don't think they're off track. I think they're on a different track from the one that they would normally be on, but I don't think that in itself is inherently bad. Now, I'll explain our context in saying that: our students have been out of school for, many of them, the best part of a year. Our Key Stage 4 have only been in the classroom for four weeks in the last calendar year. That's the situation in the Czech Republic. We are a school where our families can afford laptops, can afford broadband; there is no pupil in our school that has trouble accessing online learning. So that's the situation I am coming from, and I appreciate that not every school is in that situation, but I think the principle still applies. If what we're going to try and do is take them from where they are, so lift them with a crane and plonk them onto the academic track that existed two years ago, well, good luck with that, but you're going to be wasting a lot of energy. I think that ship has sailed. What we've got to do is make sure that the track that the students are currently on is going to lead them to where they need to go. So the way that I'm proposing that we approach that, and this is just one model, is when they return to school, and we know that they're actually going to return to school and not be in and out through a revolving door system, they will have diagnostic assessments in every subject that they're doing in order to work out areas of strength and weakness, because I don't think it's just a case of what's been lost.
Tony: [00:17:49.08] I'll give you a practical example. Before I became more of a desk jockey, I was a chemistry teacher, and I still teach a little bit of chemistry. Obviously, we haven't been doing any practical work while they've been out of school, apart from a couple of little experiments they can do at home. But I have been using that time to do some quite sophisticated data analysis exercises with them: giving them sets of data that they'd be working on electronically, plotting the graphs, evaluating the data, comparing it to what it should be, skills that they would not normally acquire for another couple of years. So I think an audit is needed, both of the areas where they are weaker than expected, but also of the areas where they are potentially stronger than expected, because teachers have been busy, teachers have been working hard, doing what they can with the materials that they have available. They have still been teachers, and the children have still been learners. So have that audit and then plan what needs to be done to get them to the point, whether it's GCSE, A level, IB, and get them there successfully. Now, the baseline testing will perform a very important function here, as will historic data, as it will allow us to recalibrate our expectations and make sure that even after a year away, we know from the numbers that we're holding from the baseline tests who should be at the top end, who should be somewhere towards the middle and who's probably going to be struggling a bit anyway. And for those pupils who would be struggling a bit anyway, it's important that they don't just get labelled as, you know, lockdown learners who suffered with this; they would have needed some support anyway.
Tony: [00:19:43.34] So we use the baseline tests to ascertain our expectations, and then use historical data, I think, mapped on top of that with the chances graphs and the standardised scores, to try and create individualised expectations and say to the students: this is where you should be aiming, these are where the gaps are, this is where you're going to have to work on it. More than ever, and this is going to be hard on the teachers, I'm not saying this is easy at all, it's going to require a personalised approach to learning, because for the students it's really been a polarising experience. You know, some, I've found, have actually enjoyed the lockdown; despite what the extroverts will tell you, it's not just extroverts in this world. And we found that some who are self-regulating and sort of quite quiet, introverted, have really flown, and they've actually been having quite a good time; they enjoy working from their bedroom. Whereas those that need school to provide this external framework of discipline, or indeed the social cohesion throughout their teenage years, they've really suffered. So the difference between both ends is bigger than ever, and as such will require new solutions. So: baseline testing to remind ourselves of the expectation for each child, historic data to keep in our mind how this maps on to real achievement, and then a lot of hard work. But it needs to be eyes on the future. It needs to be eyes on the prize.
Penelope: [00:21:29.94] Now, I think, Tony, you gave a really great, well-rounded answer there. But Mark, I just wondered if you had anything to add.
Mark: [00:21:36.77] No, I was just going to say I think I can just go and have a cup of tea at this point. I think Tony's answered the question beautifully. Maybe I can just add that I would agree with pretty much everything that was said there. The disruption has affected children and young people in different ways: some will have thrived, receiving more attention than they otherwise would have; some will have received virtually no support at all. And as we've just heard, the school is going to be identifying where the issues sit. Teachers know that the range of abilities and skills demonstrated by their students is often wide, and the disparities between children's educational and social experiences over the past year will greatly exacerbate this gap, magnifying the disadvantages and allowing some children to slip even further behind, whatever that means, you know, because I think the world has been turned on its head. So CEM does have that sort of historical baseline data, but again, we are thinking very carefully about what that means in the future, because has the norm shifted completely? But from a practical point of view, what to do in the here and now? I would certainly advocate the benchmarking or baseline approach to begin with, just because I think it allows practitioners to develop a comprehensive overview of what their students can and can't do in the key aspects of learning.
Mark: [00:23:16.32] And that's the evidence you need for forming groups or various support systems where the interventions need to happen, targeting support where it's required, basically. And then, I suppose, further down the line this will provide some useful evidence for us in terms of how that norm may or may not have shifted, how future progress relates to that and how it may be measured. I think one of the great strengths of CEM is that, over the last 30-odd years, we have amassed this enormous amount of student response data, and also, for the secondary systems, data on where those students moved on to in terms of their subsequent qualifications. That information allows us to understand age related expectations and the sort of, I don't necessarily like using the word, flight path of where they are maybe expected to go. And we hope that by comparing this year's data to our established benchmarks, that might, in the first instance, just help to expose some problems at an early stage and allow teachers to react quickly and decisively, which I think is the unenviable task they have.
Penelope: [00:24:35.43] And, Tony, I wonder if you will be relying on teacher assessed grades, and if so, how will you be able to use the data collected to inform the process?
Tony: [00:24:48.72] This is where you need to go into battle with your sword and shield, one of which is your CEM chances graphs and the other is your value added reports. This is where they really do become so, so useful. You've got one document that, based on the baseline assessments that the pupils did, gives broad indications of how they're likely to do in the formal assessment, whether that's IB, A level, GCSE or whatever. But importantly, while you're holding that in one hand, in the other hand you're holding your value added report, which is your evidence that the school is capable of enabling the children to reach those grades, or whatever it shows. International schools can have some very strange value added reports, because we're dealing with things like English as a second language, but they provide a track record that you can use as evidence to calibrate the predictions in the chances graphs. To that, of course, you can add in other things, and it depends how long you've been locked down. We've been like this for a year, so the amount of relevant, reliable test type material that you could use to persuade an exam board is relatively low, because so much of it has been done at home; maybe schools where they've been in classrooms for a lot longer will have more that can be added to it. If you're using a course that still has coursework, like we do, that will still be a part of it as well. So there are bits of information there. But, well, I normally try and steer clear of predicted grades per se as anything other than a conversation starter with the students, with pupils, to see how they're doing and to see whether their self-image of how they're doing matches what the chances graphs show. I think here you've got this excellent tool that says this is how they should be doing according to this predictive method, and this is the evidence of how our school relates to that predictive method.
And the two together, I think should make a compelling case.
Penelope: [00:27:17.07] So we've spoken a bit about how assessment data can be useful under the current circumstances, but I just want to ask, Tony, as a senior school leader, how do assessments in your school inform school policies? And I wonder if you could give me an example of a successful intervention.
Tony: [00:27:39.44] Of course. I think my first point is that all the assessment and data related policies that I think should exist in a school should always be focussed on student benefit, because that's one of the main ways of getting teachers to engage with data, and not every teacher is a data fan. I presume that everyone listening to this is either really interested in assessment and data and the issues around it, or has been sent to listen to this as some sort of re-education for their sins. We know that not everyone loves this stuff, but when it's about student benefit, then it creates policies that are very hard for anyone not to want to follow. So in terms of how it informs our policies: I would never be using, say, a value added report to tell a department that they're underperforming or something like that. There are plenty of ways to find out if a department is underperforming before the value added report gets printed off. So I like to keep things nice and simple. Our tutors, the form tutors, use the CEM standardised scores, the entrance exams, the reading assessments and any prior information we hold, whether it's from our school or from where they've come from, and they produce a broad baseline expectation for each student: a grade that we think they should be getting across the curriculum. At the point of the official progress checks, we flag up those who are underperforming across the curriculum, and this picks out the coasters in a way that just looking for low performance doesn't. You know, chemistry or biology or mathematics can tell when one person has got an incredibly low grade; that stands out.
Tony: [00:29:44.80] And if it's just localised in that subject, I like to keep it a faculty matter. But where you're looking at underperformance across the curriculum, that's somewhere where we can have whole school interventions, and again, I like to place the tutor and the senior pastoral team at the heart of it. This is another policy element to how I think data should be used in school: it is both academic and pastoral, because a drop in academic performance can be a first indicator of serious pastoral problems. And also, we spend a lot of time talking about our underperformers, but what about our overperformers? By that I mean the pupils who have got a relatively low baseline assessment but still do very, very well. Now, it could be that they were having a bad day when they took the adaptive tests, or it could be that they were working very, very hard, maybe being their own worst critic, and the moment they fall, what they need is lots of TLC, not someone pointing out that they dropped a grade. So I think the policy should be looking after both ends. I'll talk specifically about the underperformers: I spot who the underperformers are numerically and agree a course of action with either the form tutors or the senior tutors, the housemasters, and that can range from anything from a pep talk upwards. And I find a pep talk is very useful with the coasters, who you can't be telling off, because their grades are actually OK. What you're trying to tell them is that they really should be flying, and to get them from OK to flying, you need them onside.
Tony: [00:31:26.47] You can't just force them into that. So sometimes, I think, it's an encouraging talk that they can see is an act of faith in their abilities rather than a telling off. In some cases, it's a bit more drastic, with specific targets that are then shared with the parents and followed up in the subjects quite intensely. With every progress check, we then look back at the concerns list from last time. And you're asking me about success stories: I consider it a little success when anyone is taken off that list, and it happens, it happens a pleasing amount. Now, of course, you get your occasional big U-turn, where someone starts off as an academic disaster and suddenly turns the corner and heads off to Oxford, but there's a limited number of those. Where I think the real work is done is this constant nudging, the increments of improving performance, of getting students to realise their potential. You know, a word here, a nudge there, a letter home to the parents, and just getting them to live up to the expectations that they should have of themselves. I think you change the world one grade at a time, doing this far more effectively than just focussing on one or two pupils and trying to turn them around. But all this hinges on having an externally calibrated expectation of where each pupil should be. So that's where the baseline tests come in: you need that expectation, and you also need some sort of assessment that tells you how they're currently performing, based on something you value.
Penelope: [00:33:18.19] And Mark, do you think teachers having a deep understanding of assessment and improved data literacy is important in giving insight into their everyday assessments in the classroom?
Mark: [00:33:33.07] Yes, I certainly do. However, I'm not convinced that everybody is operating at the optimum level of skill, and I do see, through things like helpdesk enquiries and, you know, when I've been working in schools, an enormous gap between teachers in their levels of knowledge and understanding and data literacy. I know of some incredibly skilful and knowledgeable practitioners who have built very insightful systems of their own, but also many teachers who are not so confident. Just as an illustration of this: about three years ago, I was working in a school and we were sorting through their assessment data with the senior leadership team, and one of the senior teachers I was working with seemed to have absolutely no experience at all, or so it seemed. As an indicator of that, they did not understand that Microsoft Excel, which is the spreadsheet package we were using, was capable of adding together two numbers in the spreadsheet to produce a total, never mind any of the more sophisticated functionality or statistics that were potentially available. In a nutshell, they could not form an overview of the data they had. Data seems to drive everything these days; you know, it's central to practice in schools, but it's a double edged sword and it must be used in an appropriate and proportionate way. And I'm not sure that everybody, as Tony mentioned, loves data, so it must be made accessible and actionable, and that's increasingly the focus of future product development in CEM.
Mark: [00:35:33.50] But without that insight, it's very difficult to move on. And data, in particular qualitative data, rather than the traditional standardised scores and other measures and grades that we would produce, is essential to providing students with the actionable feedback they need in order to take their learning to the next step. I think there are some organisations out there, such as ResearchEd, and I know other organisations are available, but just to pluck one out of the air as an example, doing great work to upskill teachers and bridge that gap between educational research and real life in the classroom. And there are a lot of well informed and skilful school leaders out there, as I mentioned, and it's good to see colleagues coming together to deepen their collective professional understanding. Assessment literacy is greatly underemphasised in initial teacher training and certainly in ongoing professional development; it's not actively encouraged, from what I can see. And if I can indulge in a plug for your colleagues, Penelope, at the Cambridge Network: they run a series of courses, A101, A102, 103 and 104, which cover the basics of assessment theory and data literacy, and if you're interested in finding out more, that would be a really good place to start. But yeah, I think getting a secure understanding of data and how it can be used in an appropriate way in the classroom is certainly the way forward.
Penelope: [00:37:14.69] And Tony, how does insight from assessment data empower your teachers?
Tony: [00:37:20.71] At its best, it really can empower them in two ways. I mean, quite often it will support and reassure them that their professional judgement is pretty much on the mark, by being able to calibrate it to something from the outside world. And sometimes it will challenge them and cause them to rethink, and I would argue that not making a mistake in your judgement is also an empowering experience; I'd much rather be right than reassured, any day of the week. But Mark's point about the understanding of data is absolutely crucial: to feel comfortable and empowered, people have to feel comfortable with it. Quite often, people's first experience of using assessment data will be coloured and governed by the first school that they work in that does something with data, that holds a strong opinion. So if your first teaching experience is in a school that uses data in a really positive, pupil centred way, you'll think it's wonderful and empowering. And if it's used in some sort of teacher bashing, not very credible, hang everyone by their predicted grade sort of way, you'll think it's terrible. So the empowerment really does come from understanding it. At its best, it helps refine that judgement and it helps the teachers know their pupils better, and that really is at the core of it. You walk into that room and you actually have information that's not immediately apparent to the naked eye or the naked ear. You've got an insight into their brains and the way that they think, and that should be a very empowering experience for teachers, in feeling more confident, whether it's little things like how they group their class or whether it's their expectations. So it gives you that extra room, that extra information.
Penelope: [00:39:33.41] And Mark, how are CEM assessments used to benchmark students and predict their capabilities?
Mark: [00:39:43.42] In a variety of ways, I think. In a system like base, for instance, for children in the foundation stage and early years, we have quite an accessible way of reporting, using short descriptive statements coupled with graphics which show expected benchmarks, you know, typical expectations of children in the early years at certain points. The primary systems are perhaps more normative in the data they produce. We have an excellent understanding of student performance across the phase and can benchmark scores against those typically observed for thousands of other children of the same age, and provide a bit of context in that way. But again, as Tony said, this is not absolute. This is just a guide and should be treated as such, and not misinterpreted, and certainly not used in any kind of high-stakes situation or for the purposes of accountability. Absolutely not. And the secondary systems use a slightly different model, which in the simplest sense is based on relating scores in one of our assessments to real outcomes. By which I mean, you know, hundreds of thousands of students over time now, over the last 30 years or so, have taken the assessment, and we know what their likely outcomes at GCSE, A level, IB or whatever eventually are likely to be.
Mark: [00:41:15.07] Again, just tempering that, and as Tony pointed out earlier, that's a starting point for a conversation. And, you know, it's a prediction, and it is just that, a prediction; it's not an absolute guarantee. Students may perform badly on the day of our assessment, they may perform well, and the outcome is, you know, affected in that sense. However, it's a very robust statistical model based on past observations, which can be helpful in predicting where students will go. And also, we know quite a few schools, because you can plug this into a variety of subjects, use it to see where a student is most likely to succeed. So it can assist when students are taking options, to, you know, choose their qualifications. It's just, again, another source of information. And benchmarking, just to go back to my previous message: we're obviously looking at that quite closely now and thinking, how has the past year affected this? And obviously all of that needs to be factored into our models moving on.
Penelope: [00:42:32.57] And, Tony, what have you seen as the main benefits in your school from using assessments and data to inform the learning process?
Tony: [00:42:44.74] It's hard for me to pick out one or two things, because it's become such an integral part of the way that we work. I mean, there was a time not so long ago when you met a new class and the only information that you had was a list of names and probably the tutor groups that they came from. And to me, that now seems incredible. I've got that mentally filed with smoking on aeroplanes and driving without a seatbelt: it used to be normal, but we look back and think, what were we thinking? So being able to have this extra understanding of the people in front of you, especially in our case. And I think this is the one thing that I'd really like to pick out, because we have about 70 to 80 per cent second-language learners. They are mainly native Czech, but there are a few other nationalities as well, very few expats. Most of our pupils come to us behind this language barrier, and the sooner that we can understand what's happening behind that, the better. It's getting behind that mask and being able to have targets or baselines for each pupil that are both realistic and aspirational, and being able to base those on something that's quantitative and not reliant on our own interpretation of that moment.
Tony: [00:44:17.96] So we don't get swayed by fripperies like neat handwriting or politeness. And we don't allow coasters to get away with it because they've managed to project this image of someone who really isn't very capable when in fact they are. It helps us know our pupils better, and I absolutely despair when I hear people talk about using data in schools and they use words like dehumanising or depersonalising, and say their children are not numbers. Well, obviously, when these people go to the doctor's, they don't have their blood pressure taken or anything like that, because they're not numbers. These numbers tell you part of the story of that person; they give you indicators of individual strengths and weaknesses, and this personalisation is so important. So I would say the more that you know, the closer you are to that ideal of delivering lessons where you're actually teaching the children and not the syllabus.
Penelope: [00:45:30.55] And Mark, how do you see schools and teachers using classroom data in the future?
Mark: [00:45:37.60] I think I would like to see a more finely grained approach to assessment in the future. As I mentioned a few minutes ago, you know, I take a lot of inspiration from a group of academics in the Netherlands who've looked to the domain of medical education for models of good practice. Van der Vleuten et al, over the last few years, have described programmatic systems whereby evidence is gathered from a number of sources, such as traditional paper-based assessments or computer-based assessments, presentations the children or the students have done, observations, interactions with the teacher. I think there's a lot of soft information that just flows down the drain and isn't captured, so finding a way of capitalising on that would be great: learning artefacts, classwork, coursework, objects the students have designed or created, photographs of things that they've done or have produced. I'm probably saying very obvious things to a lot of teachers, and certainly to early years practitioners. Certainly when I was a head teacher of a school, you know, the staff I had were fabulous at keeping these huge ring binders filled with fabulous evidence of what children had done in their time in reception class, as evidence towards the foundation stage profile. But I do think there's an absence of such a holistic approach to assessment with older students; that sort of evidence gathering seems to drain away a little bit, probably because it's quite unmanageable and quite unwieldy unless it's dealt with in a sensible manner. So I think there's definitely room there for a return to having more coursework, perhaps, and other elements in the later years. But again, it just needs to be dealt with in a measurable and manageable way that isn't going to ruin teachers' lives, you know, with the time it takes to produce these, well, works of art, really, in many cases. These are fabulous things.
Mark: [00:47:53.82] And certainly, you know, when teachers showed them to the parents, they were quite amazed at what their child had and hadn't been doing in school. It was just a brilliant, brilliant resource. So that's evidence gathering, and perhaps assessment methodology, that we've been talking about there, and that's one thing. But how assessment data is used, and more importantly misused, is a far more pressing matter. Again, Tony just referred to this. I think the unintended consequences of target-driven systems and of using assessment data for purposes for which it was not designed are well documented. We know that this leads to a narrowing of taught curricula and a focussing in on more didactic teaching approaches. So in many ways, this is more of a kind of educational policy problem, and it flows from the top down. People tend to measure what they're being leaned on to measure, and I think, as somebody once said, to paraphrase them, we'd better be sure of what we want, because if that's what we measure, that's what we get. So it's about being focussed in on what matters most to children's learning and what enables learning to progress. It's got to be all about learning, and not about measurement or performance or accountability. So I would hope teachers and school leaders and everybody else in the system can develop this sensible and holistic overview and use it for the right purposes.
Penelope: [00:49:34.99] Well, Mark, Tony, thank you both so much.
Penelope: [00:49:39.37] If you enjoyed listening to this podcast, you can find out more about the Assessment Network via the links in the description. Join our community on LinkedIn and look out for the next podcast in this series.
Alana: [00:49:54.40] Thank you for listening to the Cambridge Assessment podcast, you can find more of our podcasts on our website, just search podcast gallery, or you can find us on Apple podcast or YouTube.