Now that preliminary KS2 SATs results have been released, our thoughts inevitably start to turn to next year’s year 6 cohort and how best to prepare students for the SATs experience. Most schools do practice SATs assessments at some point, but why?
One of our most popular services at Smartgrade is our Practice SATs series of assessments. Between Autumn 1 and Spring 2, we offer four practice assessment windows in which schools sit the same Practice SATs papers and get back standardised results comparing them to children around the country at the same point in time, as well as granular analysis at question and topic level. This means we’ve learned a thing or two about what to do (and what not to do) when using Practice SATs. So, here are our top tips:
1. Decide in advance why you’re doing practice SATs, and then make your process support that outcome
There are plenty of good reasons to give practice SATs assessments to year 6 students in advance of the real things in May, but “everyone else does it” isn’t one of them. Every assessment we use in school should justify the time taken to administer it, and practice SATs are no different.
So it’s important to think through why you’re doing Practice SATs, and then make sure you achieve those goals. For example:
- Do you want a sense of how your real SATs results are likely to shake out?

This is a reasonable — and common — thing to want, but if this is your goal you’ll need to have a way of benchmarking results in some nationally meaningful way; and that’s harder than you might think (see point 4 below).

- Are you looking to target students for intervention?

If so, you may decide that overall scores aren’t enough. Outcome grades might tell you who’s struggling, but they don’t tell you what areas require attention. In the immortal words of Dylan Wiliam discussing systems that focus on putting students in rank order: “It’s like telling a bad comedian he needs to be funnier. […] It’s true, but it’s not helpful.” So you may want to collect question-level data — and perhaps also perform topic-level analysis with that underlying data (something that Smartgrade does for you automatically; see the sketch after this list).

- Are you evaluating the performance of your curriculum?

Again, this is a logical thing to use practice SATs for, but overall test scores won’t help you much if this is your goal. Topic analysis is probably the most useful thing here — and you also want to make sure you’re ignoring questions relating to content you haven’t covered yet. (Smartgrade tags all questions with the year in which the content is featured in the national curriculum to support this process.)

- Are the Practice SATs meant to familiarise students with the test process?

Some test prep is justifiable; it doesn’t serve anyone’s interests for a child’s first experience of a SATs paper to be the real thing. That said, while SATs are inescapably high stakes for teachers and schools, they do not have any material impact on a child’s life, and you’ll want to avoid processes that imply otherwise to your students. So it’s ok to do Practice SATs to offer a dry run, but if this alone is your aim then you’ll probably find that a couple of run-throughs is plenty. You may also want to put thought into how you communicate to students why they’re doing the assessments: ideally, you want them to take the tests somewhat seriously without stressing out about them.
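To make the question-level point concrete, here is a minimal sketch of how marks tagged by topic can be rolled up into per-topic percentages for one student. This is our illustration of the idea, not Smartgrade's actual implementation, and the questions, topic tags and mark values are all invented:

```python
# Roll question-level marks up into per-topic percentages for one student.
# The questions, topic tags and marks below are invented for illustration;
# a real system (Smartgrade does this automatically) works the same way
# across a whole paper.

from collections import defaultdict

# (topic, marks_awarded, marks_available) per question
question_marks = [
    ("fractions",   1, 2),
    ("fractions",   0, 1),
    ("place value", 2, 2),
    ("geometry",    1, 3),
    ("place value", 1, 1),
]

totals = defaultdict(lambda: [0, 0])  # topic -> [awarded, available]
for topic, awarded, available in question_marks:
    totals[topic][0] += awarded
    totals[topic][1] += available

for topic, (awarded, available) in sorted(totals.items()):
    print(f"{topic}: {awarded}/{available} ({100 * awarded / available:.0f}%)")
```

The per-topic view immediately shows where a student who simply “scored 5/9 overall” actually needs support.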
2. Consider Year 6 content when picking your maths past paper
The following chart shows the percentage of year 6 content that features in the maths SATs paper from each of the past 5 years (including 2024):
What jumps out is that the 2022 and 2023 papers have far fewer questions relating to year 6 topics. Therefore, if you’re doing a practice paper in the autumn term, you may decide that it’s sensible to pick either of those past papers to avoid confronting your students with questions on content they’ve not yet been taught.
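If you’re curious how a chart like this can be produced, the sketch below counts questions tagged with a national curriculum year and works out each paper’s share of year 6 content. The tags here are placeholders for illustration, not the real tagging data behind the chart:

```python
# Estimate the share of each past paper that draws on year 6 content,
# given questions tagged with the curriculum year they target.
# The tag lists below are invented, not the real breakdown.

papers = {
    "2022": [5, 4, 6, 5, 6, 3, 4, 5],  # curriculum year per question
    "2023": [4, 5, 5, 6, 3, 5, 4, 6],
    "2024": [6, 6, 5, 6, 4, 6, 5, 6],
}

for year, tags in papers.items():
    share = 100 * sum(t == 6 for t in tags) / len(tags)
    print(f"{year} paper: {share:.0f}% year 6 content")
```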
3. Don’t be suckered by Scaled Scores
In our experience, the most common way schools analyse past papers is to look at the scaled score, calculated using the government’s raw-to-scaled conversion table published for the year in which that paper was sat as a live national assessment.
The problem is, as we explained in a blog earlier this year, these scaled score conversions alone aren’t very useful. For example, if you’re using a past paper in autumn 2 and a student’s raw score equates to a scaled score of 98, what does that tell you? It’s below the expected standard of 100, but there’s plenty of teaching time left, so does that mean they’re on track for EXS or not? And if you’re comparing two past papers, how do you control for the potential high variance in year 6 content as described in point 2 above?
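For context, the typical scaled-score calculation is nothing more than a table lookup. Here is a minimal sketch with an invented fragment of a conversion table (the real tables are published by the government for each live paper):

```python
# Convert a raw mark to a scaled score via a conversion table.
# These values are an invented fragment for illustration; the real
# raw-to-scaled tables are published for each year's live SATs paper.

conversion_table = {55: 97, 56: 98, 57: 98, 58: 99, 59: 100, 60: 100}

raw_score = 57
scaled = conversion_table[raw_score]
print(f"raw {raw_score} -> scaled {scaled}")  # raw 57 -> scaled 98
# On its own this says nothing about whether 98 in the autumn term
# is on track for the expected standard (100) by May.
```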
To illustrate these issues, we looked at standardisation data from our 2022/23 Practice SATs series. What we found was that scaled scores could be deceptive. For example, an increase from 98 to 100 between our autumn 2 and spring 2 practice assessments sounds like an improvement, but when you look at the associated percentile ranks from our live standardisation, that performance would have meant a drop from the 59th to the 40th percentile rank.
So essentially, without additional context, you can’t do much with the scaled score. That’s why Smartgrade offers “live” standardised grades like percentile ranks — they give you a more comparable scale from which to draw inferences.
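To show the difference, here is a rough sketch of how a percentile rank locates a score within a live cohort, using invented cohort scores rather than our actual standardisation data:

```python
# Percentile rank: the percentage of the standardisation cohort scoring
# below a given score, with half-credit for ties. The cohort scores here
# are invented; in practice they come from every school sitting the same
# paper in the same assessment window.

def percentile_rank(score, cohort):
    below = sum(s < score for s in cohort)
    ties = sum(s == score for s in cohort)
    return 100 * (below + 0.5 * ties) / len(cohort)

autumn_cohort = [48, 52, 55, 57, 57, 60, 63, 66, 70, 74]
spring_cohort = [55, 59, 62, 64, 66, 68, 71, 74, 78, 81]

# The same raw score means something very different once the national
# picture has moved on between autumn and spring.
print(percentile_rank(60, autumn_cohort))  # 55.0 (55th percentile)
print(percentile_rank(60, spring_cohort))  # 20.0 (20th percentile)
```

The same raw score lands noticeably lower against the spring cohort, which is exactly the effect behind the 98-to-100 example above.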
4. Benchmark against others
We’ve already explained in point 3 how standardised percentile ranks can help greatly when analysing performance, but with good quality benchmarking you can also go deeper. For example, Smartgrade offers question- and topic-level benchmarking, so you can compare your performance to other schools nationally at a granular level and spot areas of your curriculum which are underperforming and topics that may require reteaching.
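As an illustration of the underlying comparison, the sketch below flags topics where a school sits well below a national average. All the figures, and the five-point threshold, are invented for the example:

```python
# Flag topics where a school's average is well below the national average.
# The percentages and the 5-point threshold are invented for illustration.

school_avg = {"fractions": 54.0, "place value": 78.0, "geometry": 61.0}
national_avg = {"fractions": 66.0, "place value": 75.0, "geometry": 63.0}

THRESHOLD = 5.0  # percentage points below national worth investigating

for topic in school_avg:
    gap = school_avg[topic] - national_avg[topic]
    flag = "  <- consider reteaching" if gap < -THRESHOLD else ""
    print(f"{topic}: school {school_avg[topic]:.0f}% vs national "
          f"{national_avg[topic]:.0f}% ({gap:+.0f}){flag}")
```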
5. Aggregation is your friend
Tests are always imperfect and affected by all manner of environmental factors. If a child slept badly, or didn’t eat a proper breakfast, they may well do worse than they would have done if well rested and fed. That makes it risky to overinterpret any single assessment point.
This is where aggregation can help you. If you sit two or three practice SATs assessments during year 6, you may find that the most accurate analysis of a student’s level comes from aggregating their results across multiple data points (particularly if you’re using a comparable metric like Smartgrade’s live percentile ranks mentioned in point 3 above). Similarly, while individual students’ results will inevitably vary for all manner of reasons, the picture becomes more reliable when you aggregate the results of multiple students — which is why Smartgrade automates class, school and MAT aggregation for you.
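In data terms, the aggregation is just an average over a comparable metric. A minimal sketch, with invented percentile ranks from three practice windows:

```python
# Aggregate each student's percentile ranks across several practice
# windows; a single off day matters less in the average.
# All numbers are invented for illustration.

from statistics import mean

ranks = {
    "student A": [62, 35, 58],  # one bad day in autumn 2
    "student B": [41, 44, 39],
}

for student, series in ranks.items():
    print(f"{student}: windows {series} -> average {mean(series):.0f}")

# Class-level view: average the per-student aggregates.
class_avg = mean(mean(series) for series in ranks.values())
print(f"class average percentile rank: {class_avg:.0f}")
```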
To find out more about our Practice SATs package, book a 30-minute personalised demo with one of our team.