With many thanks to James Pembroke, founder of independent school data consultancy Sig+, for sharing his take on the benefits of standardised tests.
There is a wide spectrum of tests carried out in schools, from the regular mini-tests that teachers use as part of their day-to-day practice to check pupils’ understanding, to the statutory end-of-key-stage assessments that we can’t avoid. In between sit the optional, externally set, standardised tests from third-party providers, and it’s those we are focused on here.
Some schools are opposed to introducing any form of standardised test, fearing that tests may deter pupils and undermine the value of teacher assessment; others use them sporadically, perhaps not making full use of the data they provide; and then there are schools that use them every term for all year groups as the main tool for monitoring standards. Clearly there are diametrically opposed viewpoints when it comes to standardised tests, with some teachers seeing them as invasive and unnecessary whilst others consider them a highly effective tool.
Crucially, we want assessment to provide us with useful information that can be acted upon, so before implementing any new form of test we need to ask ourselves one vital question: will it tell us anything we don’t already know? With any well-designed standardised test, the answer is almost certainly yes – the pros outweigh the cons – and I’ve outlined the numerous benefits below.
- They provide question level analysis
The primary reason for implementing a standardised test is (or should be) to help inform us about what pupils do and don’t know. Question level analysis will reveal the things we thought pupils could do but struggle with, as well as the things we thought they couldn’t do but in fact can do rather well. Tests therefore have a formative as well as a summative function.
- They benchmark attainment against other children nationally
This is something that teacher assessment in isolation can’t do. Schools may want to define pupils as below, at or above age-related expectations for the purposes of reporting to parents, governors and others, but how do we know what age-related expectations are? This is something teachers are struggling to define, especially in the early days of this new national curriculum, and standardised tests, which benchmark pupils against a large sample, can help. This leads on to the next point.
- They can help validate teacher assessment
This may sound rather authoritarian but that’s not the intention. Many teachers admit to struggling to define whether pupils are meeting expectations or not. One teacher’s ‘expected’ may be another teacher’s ‘greater depth’, and the value of teacher assessment can easily vary from classroom to classroom, let alone school to school. A standardised test that helps teachers identify pupils that are, for example, well below or above expectations is therefore highly valuable. Teachers can use this as a guide to inform their own assessment and can have confidence that it is rooted in benchmarking against a large sample of pupils nationally. Some schools have developed reports that cross-reference teacher assessment against test outcomes to spot inconsistencies. For example, the test places pupils into one of five categories: well below average, below average, average, above average, and well above average. The teacher meanwhile makes a summative judgement every half term along similar lines: well below expectations, below expectations, meeting expectations, above expectations, well above expectations. One would expect (or hope) that there is some correspondence between these two assessment schemes, and we can test this by plotting the data on a grid. If we find we have a large number of pupils achieving above average test scores yet whose teacher assessments indicate that they are below expectations, or vice versa, then we may have a problem that warrants further investigation. It is of course possible that a pupil could perform well in a particular test, yet the teacher does not believe the pupil to be secure in all key aspects of learning; or conversely, a pupil may perform poorly on a test despite showing plenty of evidence of understanding in class. This will result in disparities between teacher assessment and test result. The important thing to remember about any data is that it raises questions and promotes discussion and further enquiry.
It is often said that data is a signpost, not a destination. This is true, but it is not a signpost pointing one way.
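The cross-referencing described above can be sketched in a few lines of code. This is a hypothetical illustration, not any school’s real reporting system: the band names follow the five-point scales mentioned above, the pupil data is invented, and the two-band discrepancy threshold is an assumption.

```python
# Hypothetical sketch: cross-referencing teacher assessments against
# standardised test bands to flag pupils worth a closer look.
# Band names follow the five-point scales in the text; pupil data
# and the threshold of 2 bands are illustrative assumptions.

TEST_BANDS = ["well below average", "below average", "average",
              "above average", "well above average"]
TA_BANDS = ["well below expectations", "below expectations",
            "meeting expectations", "above expectations",
            "well above expectations"]

# (name, test band, teacher assessment) - invented examples
pupils = [
    ("Pupil A", "above average", "below expectations"),
    ("Pupil B", "average", "meeting expectations"),
    ("Pupil C", "below average", "well above expectations"),
]

def flag_discrepancies(pupils, threshold=2):
    """Flag pupils whose test band and teacher assessment sit two or
    more positions apart on their respective five-point scales."""
    flagged = []
    for name, test_band, ta_band in pupils:
        gap = abs(TEST_BANDS.index(test_band) - TA_BANDS.index(ta_band))
        if gap >= threshold:
            flagged.append((name, test_band, ta_band, gap))
    return flagged

for name, test, ta, gap in flag_discrepancies(pupils):
    print(f"{name}: test '{test}' vs TA '{ta}' (gap {gap}) - worth a conversation")
```

Here Pupils A and C would be flagged for discussion while Pupil B, whose two assessments agree, would not; the point, as above, is that the flag raises a question rather than delivering a verdict.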
- They provide us with a more robust progress measure
For as long as most of us can remember, schools have been measuring progress using a series of points linked to sub-levels, and it seems the death of levels has done little to change this. Schools, and software companies, have reinvented levels in a thousand different ways. All too often it seems the primary purpose of teacher assessment is to measure progress, but it is this obsession with using teacher assessment data to measure progress that is undermining its effectiveness. Teacher assessment is best used formatively as a way of identifying gaps in learning. Once we shoehorn all that rich, granular detail into some form of best-fit band just so we can assign it a value in order to measure progress, the data ceases to be useful. Worse still, and somewhat ironically, such approaches may actually be a risk to pupils’ learning because we become focused on moving pupils onto the next threshold despite the gaps in their learning.
If we really want a robust progress measure then we should use standardised tests – that’s what they’re designed for. We can of course track changes in standardised score, age standardised score, or percentile rank, although we do need to exercise some caution here. A pupil achieving a score of 101 at the start of the year and 107 at the end of the year should not be viewed as making the same progress as a pupil achieving 93 and 99. Despite both gaining 6 points, one of those gains may be statistically less common than the other and may therefore indicate better progress. Age standardised scores are intuitively appealing in that they are adjusted to take account of each pupil’s month of birth, but we need to bear in mind that the tests at the end of key stage 2 are not age standardised. Percentiles are also appealing in that they are easy to understand, but we need to be aware of the bunching in the middle, where a small increase in standardised score can lead to a big jump in percentile rank, whilst at the lower and upper ends a greater increase in score is required to move up a percentile.
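That bunching in the middle can be seen with a quick calculation. The sketch below assumes standardised scores follow the usual normal model with a mean of 100 and a standard deviation of 15 (individual tests may differ, so treat this as illustrative only).

```python
# Sketch assuming standardised scores follow the common normal model
# (mean 100, standard deviation 15). It shows why the same 6-point
# score gain moves the percentile rank far more near the middle of
# the distribution than at the tails.

from math import erf, sqrt

def percentile(score, mean=100.0, sd=15.0):
    """Percentile rank of a standardised score under the normal model."""
    return 100 * 0.5 * (1 + erf((score - mean) / (sd * sqrt(2))))

for lo, hi in [(70, 76), (97, 103), (124, 130)]:
    jump = percentile(hi) - percentile(lo)
    print(f"score {lo} -> {hi}: percentile rank moves {jump:.1f} points")
```

Under these assumptions a 6-point gain around the average moves a pupil roughly 16 percentile places, while the same gain at the tails moves them only 3 or so, which is exactly why raw percentile jumps need careful interpretation.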
In order to help us make more sense of pupils’ progress, some test providers use a value added model, which provides positive and negative scores to identify pupils making above or below average progress. Others have their own progress measure, such as the Hodder Scale used in Rising Stars’ PiRA and PUMA tests, which enables us to define an ‘expected’ progress path, predict where pupils may go next, and therefore state whether or not they are ‘on track’. Some will use standard deviations to define pupils’ attainment as well below, below, average, above, well above; and these can be useful when constructing progress matrices. For example, we can see that five pupils that were below average last term are now average, whilst three that were average are now below. Some tests may actually do this for you by defining progress in broad categories such as well below average, below average, above average and well above average. But there’s not just the benefit of a meaningful in-year progress measure; data from standardised tests can also help counter official key stage progress measures that all too often fail to provide an accurate picture of achievement. Progress of pupils in junior schools is the obvious example, but it’s a wider issue than that. Current primary progress measures based on key stage 1 data are highly unreliable, and this is set to get worse in 2020 when the current year 3, with new key stage 1 assessments, reach the end of key stage 2. The baseline will be even less granular than it is now, and progress measures dependent on that data are unlikely to do justice to the efforts of schools and the achievement of pupils therein. It therefore makes sense to arm yourself with something more accurate.
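A simple progress matrix of the kind described above can be built by banding each term’s scores and counting the movements between bands. In this sketch the band boundaries assume whole standard deviations around a mean of 100 (SD 15), and the pupil scores are invented for illustration; real providers set their own cut points.

```python
# Illustrative progress matrix: counting how pupils move between
# attainment bands from one term to the next. Band cut points assume
# scores with mean 100 and SD 15; the pupil data is invented.

from collections import Counter

def band(score, mean=100, sd=15):
    """Assign an attainment band using whole-SD cut points."""
    if score < mean - 2 * sd:
        return "well below"
    if score < mean - sd:
        return "below"
    if score <= mean + sd:
        return "average"
    if score <= mean + 2 * sd:
        return "above"
    return "well above"

# (autumn score, spring score) per pupil - invented data
scores = [(82, 101), (84, 103), (105, 96), (118, 99), (125, 131)]

matrix = Counter((band(a), band(b)) for a, b in scores)
for (start, end), n in sorted(matrix.items()):
    print(f"{start:>10} -> {end:<10}: {n} pupil(s)")
```

With this invented data, two pupils move from below average to average, one holds steady, one slips from above to average and one climbs from above to well above – the kind of summary grid some providers generate for you automatically.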
- They provide test practice
This is important but it does not mean that all tests should have the look and feel of a year 6 SATS paper. No doubt schools will use past papers for SATS practice in year 6, but we must not lose sight of the primary purpose: that the test is checking what pupils do and don’t know.
- They compare standards between schools
It can be difficult to compare standards between schools due to the non-standardised, subjective nature of teacher assessment, and this presents a particular problem for MATs. Standardised testing is the most effective solution to this problem in that all pupils are taking the same test at approximately the same time, much like the SATS.
- They can reveal gaps between groups
Much like the above issue, it can be difficult to compare attainment of groups of pupils using teacher assessment. Standardised test scores can be averaged to track and monitor attainment gaps between key groups such as boys, girls, pupil premium/disadvantaged, EAL and SEN, thus giving schools RAISE-style data in real-time.
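Monitoring such a gap amounts to little more than averaging scores by group. The sketch below uses invented group labels and scores purely to illustrate the calculation.

```python
# Minimal sketch of monitoring an attainment gap: averaging
# standardised scores by pupil group. Group labels and scores
# are invented for illustration.

from statistics import mean

pupils = [
    {"group": "pupil premium", "score": 94},
    {"group": "pupil premium", "score": 98},
    {"group": "other", "score": 104},
    {"group": "other", "score": 109},
]

def group_averages(pupils):
    """Average standardised score per group."""
    groups = {}
    for p in pupils:
        groups.setdefault(p["group"], []).append(p["score"])
    return {g: mean(s) for g, s in groups.items()}

avgs = group_averages(pupils)
gap = avgs["other"] - avgs["pupil premium"]
print(f"Attainment gap: {gap:.1f} standardised score points")
```

Tracked term on term, the same calculation shows whether a gap is closing or widening, which is the real-time, RAISE-style picture referred to above.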
Many people – both parents and teachers – have justified concerns about tests. No one wants children to be tested all the time, but if they help identify gaps in learning then they are a positive force. In an ideal world teachers would know instinctively whether a pupil is where they are expected to be at any point in time but we’re not there yet; and as schools attempt more nuanced approaches, judging pupils’ depth and breadth of understanding, assessment is becoming ever more complex and subjective.
Then there is the measuring progress issue. For as long as we continue to obsess about quantifying distance travelled, we are better off using standardised tests. They are more robust and will help free teacher assessment to focus purely on what should be its core function: formative assessment. Using teacher assessment for multiple purposes – for teaching and learning, measuring progress and performance management – has polluted it and made it less reliable by introducing perverse incentives to start low and over inflate at the end of the year. Using standardised tests to measure progress removes the opposing forces that can pull teacher assessment in the wrong direction and set it back on the right path.
I know that many teachers worry that tests put pupils off, which I fully understand, but it’s all about balance: not testing too often, ensuring the tests are not too onerous, and doing them for the right reasons. Rising Stars recently sent me a pack containing sample PUMA and PiRA tests along with the accompanying manual. I was mainly interested in the guidance, but my 8-year-old daughter took one look at the test papers spilling out of the packet and said “Oh! We do those at school. Can I have them?” Obviously, being an honest sort, I didn’t want her to have the year 4 tests she hadn’t yet done in school but was happy for her to have the year 3 papers.
The next morning she didn’t appear in our bedroom bright and early as she normally does, and there was no sound coming from her room, so we assumed she was asleep and left her to doze. But after 20 minutes, with school looming, I went into her room to wake her up. There she was, sat up in bed, pencil in hand, halfway through the Year 3 Summer PUMA test. So, yes, tests are informative – they can tell us things we don’t already know and provide us with some very useful data – but children can enjoy them, too.
Get it right and there are benefits on many levels.