Thanks to Michael Tidd for the following article.

Following on from nearly 400 written submissions, and my own appearance last month, the Education Select Committee recently took further evidence from academic experts in assessment and data – and some common trends are arising.

This time, the evidence came from organisations such as Education Datalab, Ofsted, and the assessment experts of Durham and Cambridge Universities. The main strands of discussion focussed again on the impact of accountability – no surprises there – and it seems that the experts agreed with the classroom teachers by and large: it's the high stakes that cause the risks.

Becky Allen set out her view early on – as someone who deals with the data all the time – that we are making substantial decisions on what is always going to be rather fragile data in primary assessment. The limitations have long been known to teachers: the snapshot nature of tests, the unreliability of KS1 data as a baseline, the small numbers of pupils. She echoed the point that has been made before: we really shouldn't be making judgements of schools based on a single year's data.

The inevitable topic of Ofsted was raised, in part perhaps because Joanna Hall, the deputy director, was there. Several panel members mentioned their concerns about the consistency of Ofsted judgements, both in respect of data and its wider role. The need for a reliable baseline was also raised, but there was too little time to really thrash out whether a useful reception baseline measure can be achieved.

One comment worthy of note came from Tim Oates – one of the experts behind the National Curriculum – who said that while he thought the Year 6 Grammar test was a fair representation of the curriculum, it was perhaps the case that the curriculum itself was too weighed down by high demands of language about language. I'm sure many primary teachers would agree.

Much of the discussion was about how we can improve the system we have at the moment. Perhaps unsurprisingly, there was plenty of disagreement here. Some argued for a greater focus on training teachers to make their own judgements; others pointed out that asking teachers to make judgements in high-stakes assessments is likely to lead to some inaccurate decisions – and even cheating in some cases.

Once again, comparative judgement was brought up as a potential alternative to the existing framework for assessing writing. There were some positive words about its potential, but also caveats about its risks, not least the fact that it doesn't easily provide formative information in the same way as a tick-box framework. Of course, some might argue that that's a benefit!

Hopefully the committee will soon report back to the DfE and tell it to get on with resolving these issues!
