Periodic assessments are better when they’re curriculum linked and standardised

In most schools, students are given some sort of formal or standardised assessment once a term or so* to provide an indication of what they have learnt. At primary, these assessments are often purchased from an external provider like Hodder, GL, NFER or CEM. At Key Stage 3, schools usually write these assessments themselves, though they may well also use external standardised assessments in English and Maths alongside their in-house assessments.

I think there are quite a few problems with this status quo.

For starters, your assessment may not match your curriculum. Let’s take the example of primary maths. Many, perhaps most, schools follow an external mathematics curriculum such as White Rose Maths, Power Maths or Mathematics Mastery from Ark Curriculum Plus. These come with really granular curriculum sequencing, and while there are a lot of commonalities between them, there are also notable areas of variance. And yet, by far the most common way that schools measure periodic progress in maths is with a standardised assessment from an external provider. The result is that students are routinely given assessments that include questions on topics they haven’t been taught yet.

You might think that, provided the alignment is mostly OK, this isn’t a significant problem, but I would disagree. I’ve worked with a bunch of schools and MATs on their assessment strategies now, and I often see the curriculum-alignment problem leading to what I call the progress mirage. What happens is this: students take an autumn test and they don’t do so well, because it contains topics they haven’t covered yet. The same thing happens in spring, but by the time the summer comes around the assessment generally matches the curriculum, since the primary maths national curriculum is explicit about what content should be covered in a given year.

Once you know it’s happening this sounds obvious, but it’s rare that schools build this understanding into their data analysis. Instead, they look at the average percentile rank of their cohort, see it increasing, and celebrate progress… but the progress isn’t real.
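To make the mirage concrete, here’s a toy illustration in Python with made-up numbers (they don’t come from any real school or assessment): underlying attainment on taught content stays flat all year, yet the raw score still climbs, simply because the test’s overlap with the taught curriculum improves.

```python
# Toy illustration of the "progress mirage" (all numbers are invented).
# Assumption: students get ~70% of marks on topics they HAVE been taught,
# and ~20% on topics they haven't met yet (guessing, partial credit, etc.).
ATTAINMENT_ON_TAUGHT = 0.70
SCORE_ON_UNTAUGHT = 0.20

# Fraction of each term's test that overlaps with what this school's
# curriculum sequence has actually covered by that point in the year.
coverage = {"autumn": 0.75, "spring": 0.85, "summer": 1.00}

for term, covered in coverage.items():
    raw_score = covered * ATTAINMENT_ON_TAUGHT + (1 - covered) * SCORE_ON_UNTAUGHT
    print(f"{term:>6}: raw score {raw_score:.0%} (attainment on taught content: 70%)")

# Prints roughly 58% -> 62% -> 70%: an apparent upward trend, even though
# attainment on the content students were actually taught never changed.
```

The point is simply that an upward trend in raw scores, or in percentile ranks against a fixed norm group, can be produced entirely by improving alignment, so it has to be interpreted with the curriculum in mind.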

Which leads me on to a related issue with non-curriculum-linked assessments — the overall grade is over-used and topic analysis is under-used. I argued in my last blogpost that every summative assessment outside of national assessments like SATs and GCSEs should have some formative use. For this to happen there surely has to be meaningful analysis beyond the overall grade. Don’t get me wrong — overall grade analysis is often the right place to start — for example to identify the students who require intervention. But if that’s all you do then how on earth are you going to work out how those students need to improve?

For me, the key question a classroom teacher should be asking after a periodic assessment is: “What do I need to reteach?” To answer it, you need to be able to analyse results at topic and question level in a quick and intuitive way. After all, students aren’t going to improve just because you tell them they need to do better. In the words of the immortal Dylan Wiliam:

Telling students that they need to “try harder” is no better than telling a bad comedian that he needs to be funnier.

Feedback is the best nourishment, Dylan Wiliam & Paul Black, TES, 2002
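To show the kind of topic-level analysis I mean, here’s a minimal sketch. The data shape, the topic labels and the 60% reteach threshold are all illustrative assumptions, not a description of how any particular platform works:

```python
from collections import defaultdict

# One mark per (student, question), plus a mapping from question to topic.
# All of this is made-up example data.
results = {
    ("Amy", "Q1"): 1, ("Amy", "Q2"): 0, ("Amy", "Q3"): 1,
    ("Ben", "Q1"): 1, ("Ben", "Q2"): 0, ("Ben", "Q3"): 0,
    ("Cara", "Q1"): 1, ("Cara", "Q2"): 1, ("Cara", "Q3"): 0,
}
question_topic = {"Q1": "place value", "Q2": "fractions", "Q3": "fractions"}

# Aggregate marks by topic: topic -> [marks gained, marks available].
totals = defaultdict(lambda: [0, 0])
for (student, question), mark in results.items():
    topic = question_topic[question]
    totals[topic][0] += mark
    totals[topic][1] += 1

for topic, (gained, available) in totals.items():
    pct = gained / available
    flag = "  <- candidate for reteaching" if pct < 0.60 else ""
    print(f"{topic}: {pct:.0%}{flag}")
```

Even something this crude answers a more useful question than an overall grade does: it points at fractions rather than at a number.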

Of course, at secondary there aren’t external assessments available to buy for most subjects, so schools write their own assessments. This somewhat solves the curriculum-linkage problem, but it leads to two others:

  1. Most secondaries don’t have a reference point against which to compare their performance. If you write an assessment for your school only, by definition you won’t be able to benchmark your performance against students from other schools.
  2. It’s hard to write a good assessment. Creating an assessment involves making lots of philosophical choices. Are you bringing in content from previous terms, or just testing what was taught that term? Do your questions effectively measure the full ability range of all your students, including the weakest and the most able? Do you ask simple, short questions, or do you go for more complex questions that combine knowledge from different topics? It is unrealistic to expect the typical head of department to get all this right every time, or even to be aware of the trade-offs they necessarily make when writing a test.

So what can schools do to address these issues? Well, here’s how we help at Smartgrade:

  1. We make sure periodic assessments link as closely to the curriculum as possible. At primary and Key Stage 3, we partner with both Ark Curriculum Plus and White Rose Maths, with both organisations making their assessments available on our platform for us to standardise. This gives schools the best of both worlds: an assessment that is standardised and curriculum-linked at the same time.
  2. We let groups of schools (e.g. MATs) author and share assessments. Smartgrade isn’t just a platform for big curriculum providers; school groups can also create assessments on the platform. This is particularly valuable where multiple schools share a common curriculum, as we can then standardise results across the participating schools within a MAT. In our view, while a national standardisation sample is ideal (and we do achieve that for a number of our assessments, including some larger MATs), the benefits of benchmarking are apparent as soon as two or more schools use the same assessments. When you’re trying to understand your own strengths and weaknesses, there’s something incredibly powerful about seeing your performance compared with teachers and students you don’t know inside out. And by having one person lead on writing a periodic assessment for use by a whole MAT, you save workload for the schools that no longer have to write their own assessments, while also deepening the assessment expertise of the person who does write it.
  3. We give feedback on assessment quality. A feature we’re particularly proud of is our Quality Checker (developed in partnership with the awesome Evidence Based Education), which gives the creator of an assessment information about its quality, such as its reliability, measured through the internal consistency of its items. Most importantly, the feedback is actionable: for example, we tell you if a question has performed weirdly, so you can investigate what went wrong and remove it from the assessment next time.
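For readers who want to see what internal consistency means in practice, here’s a rough sketch of one standard measure, Cronbach’s alpha. It’s a common way to quantify reliability from an item-by-student score matrix; I’m not claiming it’s the exact calculation the Quality Checker performs, and the marks below are invented:

```python
from statistics import pvariance

def cronbach_alpha(scores: list[list[float]]) -> float:
    """Cronbach's alpha, where scores[s][i] is student s's mark on item i."""
    n_items = len(scores[0])
    item_variances = [pvariance([row[i] for row in scores]) for i in range(n_items)]
    total_variance = pvariance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_variances) / total_variance)

# Made-up marks for five students on a four-item assessment.
marks = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(f"alpha = {cronbach_alpha(marks):.2f}")  # 0.80 for this toy data
```

Values closer to 1 suggest the items are pulling in the same direction; an item that drags alpha down noticeably is often the one that has performed weirdly.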

Whenever we think about assessment in schools, we need to remember that the curriculum comes first. A good summative assessment needs to measure how effectively that curriculum is being learnt, and give teachers and leaders the feedback they need to make decisions that will improve students’ understanding of it. If we’re to do that, we need our assessments to be curriculum linked and, wherever we can, standardised against as broad a cohort as possible.

* Other schedules are not uncommon, of course: I come across plenty of schools with two assessment points a year, and a few with just the one formal assessment window in the summer, perhaps alongside shorter, unit-style assessments throughout the year. Back in the pre-levels days you saw plenty of schools setting half-termly formal assessments; mercifully, this is mostly a thing of the past.
