By Alastair Montgomery - Director of Spotlight. This article was first published on www.aptonline.co.uk
There is a lot of debate around SATs, the stress they are putting on young people and the effect they are having on primary schools - particularly for those in Year 6. There are plenty of assurances from schools and the Department for Education that these tests are purely data-gathering exercises for various purposes around school performance and curriculum effectiveness. However, these purposes have unarguably changed over the years, and the effect these tests are having on young people is a growing concern.
I'm writing this following a recent debate on BBC Radio Five Live (12/05/2023), which discussed with parents the impact SATs are having on their children. "My child has been incredibly stressed, she's missed out on play and other activities, and it's really affected her feelings about school," says Heather, a parent speaking on the programme*.
Additional pressures are being felt across the sector, where expectations remain exactly the same despite the impact COVID-19 has had on these pupils' primary education. Parents are asking whether testing like this is really necessary, and what it is doing to their children's experience of school.
"We're being told SATs are being used to measure progress and yet there is suddenly, in Year 6, a lot of additional activity to improve the scores - this isn't a measure of progress, they're suddenly working really hard to inflate their performance." - Steven, another parent speaking in the same debate.
At Spotlight, we are in favour of measuring academic performance - of course we are - but we are very conscious of the reasons for testing and what the information is then used for. Effective assessment is central to our mission and as a result, we are very aware of what happens when testing becomes a bad word.
*Parent names have been changed
One of the things assessment developers (like the team at APT) focus on when creating tests is the purpose. What is it being used for? What do we want to find out? One of the most common failures in testing is when the original purpose is lost amidst a host of additional uses of the same data. SATs are a prime example of this. Introduced in the 1990s, they were originally developed to measure progress within primary schools - for the purpose of evaluating the curriculum and school effectiveness. Then the same data was used to rank schools into league tables when families were given more choice about which school their child would attend. Then senior schools started to use the data for streaming, setting, and predicting future results. Then they were linked to teachers' pay.
Originally the intent was for SATs to be largely internal and to measure the impact of the new curriculum and school performance - at a macro level rather than for specific individuals. The results were held within the school's senior leadership team, the Local Authority (LA) and the Department for Education (DfE). Rarely were they circulated widely, making them fairly low-stakes for teachers and pupils. However, over the years the number of stakeholders has significantly increased. The results now affect (1) the students' future classes and potential opportunities, (2) teachers' pay and performance measures, (3) the school's reputation, performance rating, and access to funding and resources, (4) secondary-school performance tracking, monitoring and GCSE predictions, as well as (5) the original LA and DfE purposes. The overall effect is a set of high-stakes assessments far removed from their original intention.
"Assumptions are made about my children that stick with them for years after the SATs. They are used to set challenge grades and expectations at GCSE, and determine whether my child has certain opportunities at secondary school - [the primary school] can't tell me that the results don't matter," says one of the callers from the radio debate.
High Stakes vs Low Stakes
One of the effects of high-stakes tests is the perception of pass or fail. A low-stakes test is generally one where the outcome is considered inconsequential for the test taker, and the data gathered is only of value for the purposes of the test. High-stakes tests are much the opposite: there is the perception that there is much to be gained or lost depending on the results, that failure is a real possibility and that opportunities will be missed if the results are not good enough.
A good barometer of how high-stakes a test is perceived to be is how much preparation material exists for it. Are there practice papers? Do they do mocks? Can I gain an advantage by seeing lots of examples of the types of questions they might ask? Often what happens in this scenario is that the assessment stops being an effective measure of ability, skills or knowledge and becomes a measure of how good the candidate is at this kind of test. At this point the purpose and value of the test has changed: it now answers 'How well can the student answer this type of exam question?' and no longer says a great deal about their progress between Year 2 and Year 6. SATs have very much changed from low-stakes to high-stakes tests. Does this mean they are bad? If they were designed to be low-stakes, and have become something else, then they are certainly not doing what they set out to do, and this is having unintended, and significant, consequences.
The team at Spotlight are big fans of assessment, and remain so. We see data as a really useful tool in getting a sense of what students find easy and what they find more challenging. But most importantly, it's not the only thing we're interested in. At Spotlight we insist that upon completion of our assessment, the results are always coupled with an in-depth discussion with the parents about the student to learn more about their personality, their interests, and what makes them tick. It's when you combine this information with assessment data that you start to develop a picture of their academic profile and we don't permit Spotlight to be used in isolation without this important context.
We do everything we can to keep our assessments low-stakes, low-pressure and purely evaluative. There is certainly no pass or fail, nor any advantage or disadvantage based on performance in the assessment. Spotlight is not an entrance test, or a ticket to any specific opportunity. We will always support assessment that helps to guide great academic support, because this is the sole purpose of our tests - and when used strictly in this way, the answer to the title question here is: 'No, tests are not bad'.
We are always happy to discuss issues or challenges your children are facing - particularly regarding testing. Do feel free to get in touch if anything in this article is affecting your child's learning experience.