
Assessment Strategies Revisited

In early 2012, I invested a substantial amount of time developing an assessment strategy for KS3 that was much more reliable than what we had been using previously. This was one of the first steps in a radical overhaul of the Scheme of Work we were running at Baylis Court, and it led to us trying out some very ambitious projects that largely paid off. That assessment system and SoW were very much born out of the needs of that department, in that school, at that time, and I have not tried to recreate them at Wimbledon College – this school has different needs. What works in one place does not necessarily work in another, so I have been working with my department to develop a strategy suited to the needs of this department, in this school, at this time.

Wider context

Obviously, this has been a good time to design a new assessment strategy, as the government has announced that it will be removing the requirement to use National Curriculum Levels and will not be replacing the system. You can read the official documentation on the government’s ‘Assessing Without Levels’ page. This has led to some very interesting work from the music education community, and I have found the following particularly interesting:

Early Development

I have never been satisfied with any assessment system that I have worked with as I have found them to be either too ‘unmusical’, too limiting or too ‘woolly’.  As a result, I made one of my performance management objectives for this year to:
Develop an assessment strategy that is musical and reliable

One of the first things that I did to help with the musical aspect was to implement a suggestion from one of Jane Werry’s articles in Music Teacher Magazine: leave the recording running once the pupils have performed. This means that the feedback I give after their performance is recorded, including me playing instruments or singing to model what I would expect from them. This gave me some confidence that the pupils had something solid to refer back to when seeking to improve their work. They then had one lesson to improve their work (I suppose you could call this ‘Directed Improvement and Reflection Time’) before they re-recorded it. Over time (and thanks to reading Ross McGill’s ‘100 Ideas for Secondary Teachers’) I started to refer to this first recording as the ‘F.A.I.L.’ recording, with ‘FAIL’ standing for ‘First Attempt in Learning’.

This approach was certainly worthwhile in that it gave the learners an accessible version of my musical modelling after it had happened. That said, the only differences between my previous practice and this were (a) the additional recording and (b) the title ‘F.A.I.L.’ for that recording. It hardly felt revolutionary, and I was eager for the process to improve the feedback itself rather than just tighten the procedure around it.

Catalogues of recordings

In one format or another, I have been making recordings of pupils’ work since I started teaching. It just makes sense that musical work is collected in its most ‘natural’ format – sound. Collecting and sorting these recordings is certainly easier in the world of smartphones, tablets and wifi, so it also makes sense for the pupils to take on a significant amount of the responsibility for this side of things.

We have now purchased Soundation accounts for every pupil in Key Stage 3 and many of the pupils are very excited about this. For us, however, the biggest advantage is that they can collate all of their recordings in one place and we can record additional musical and verbal feedback using a microphone and an extra audio track. We are about to implement a routine where:

  • the F.A.I.L. recording is completed by the teacher and uploaded to SoundCloud using the iOS app
  • the final recording is completed by the pupils using Rode Rec on one of our iPads (Rode Rec is used because it can record to WAV format)
  • the students download the final recording from SoundCloud using a school PC and store it in Soundation (Soundation only accepts WAV files)

This allows us to have every recording that a pupil completes from Year 7 to Year 9 easily accessible and categorised in a way that gives the pupils a great deal of control.  The advantage of doing all of this with Soundation is that they can easily use its DAW features to create additional material in their own time, which helps us to build an even more comprehensive picture of their musicianship.

This is something that we have been looking at for a while and have experimented with during some upper school lessons (smaller class sizes being preferable when experimenting with things that cost money!). I was glad, therefore, to see a cataloguing approach supported in Robin Hammerton’s recent blog post.

I am also currently working on collating all of the recordings from this academic year into five categories:

  • far above expectations (++)
  • above expectations (+)
  • at expectations (=)
  • below expectations (-)
  • far below expectations (--)

This catalogue will comprise sets on the department’s SoundCloud page, which will then be embedded on our website. This should give our pupils a really clear idea of what we expect from them and help us to communicate our vision of ‘excellence’.

Forming an assessment

I have very intentionally avoided referring to the upcoming approach as formative, summative or even ipsative assessment because I think we all carry a lot of baggage around what those terms mean.  This strategy is simply intended to be a method of forming a reliable opinion of the extent to which a pupil is meeting our expectations. For the sake of ‘feeding the system’, I go on to convert this into an NC Level, but I am hopeful that, as the implications of ‘Assessing Without Levels’ are felt more readily in schools, this aspect will become less and less of a necessity.

The first stage of this process draws very heavily on the work of Martin Fautley and Jane Werry, who have both experimented with using radial diagrams to represent a pupil’s musical learning. Both Martin and Jane have used this against a criterion-based assessment model (does the piece use an effective ostinato? can the child play the chords in the verse? does the reggae piece use a one-drop drum beat?).  I certainly see the value of such an approach and I think it will be the right one for many schools but, after an initial trial, we quickly realised that it didn’t suit the needs of our school. We teach all lessons using the various Musical Futures approaches, and the myriad of potential outcomes that this provides dramatically complicates a criterion-based assessment approach.

If a group comprises a singer, bassist, guitarist, pianist and drummer, then a criterion that relates to being able to sing the verse is irrelevant to the majority of the group. If we are working on a classroom workshopping model, then the unpredictable nature of this work makes it near impossible to create any criteria in advance. We were left with two options: create a nearly endless list of criteria and choose the most suitable for each child in each situation, or design a system that embraces the nature of our approach to teaching and learning. We chose the latter.

The pentagon

We felt that there were five main things we consciously or subconsciously ask ourselves when listening to a pupil and considering how they can move on in their musical learning. They are:

  • pitch (how accurate is it? is it appropriate? does a drummer select appropriate drums, methods of hitting the drums, etc?)
  • rhythm (how accurate is it? is it appropriate?)
  • ensemble skills (are they in time with each other? are they responding to each other?)
  • contrast (are they making adjustments to the music to maintain the listener’s interest?)
  • style (is the piece in a recognisable style? is there sensitive attention to articulation, phrasing, breathing, etc?)

Importantly, these aren’t musical concepts that we only think about when completing a summative assessment. This is how we approach teaching and learning in music in the first place: attention to detail, developing accuracy, creating an end result that the pupils can be proud of, and so on.  By keeping these five points quite open, we allow for the flexibility of our Musical Futures approach while also having a consistent message for the pupils – we are focusing on your music making.

We then mapped these five points onto a radial diagram with a five-point scale. The scale is very simple and directly inspired by Martin’s work:

  • ++ (working far above expectations)
  • + (working above expectations)
  • = (working at expectations)
  • - (working below expectations)
  • -- (working far below expectations)

Originally, we had this as a three-point scale (+, =, -) but our initial trials in classroom lessons indicated that we needed the additional two layers to give a more accurate picture.  This also softens the impact of any ‘errors’ in our judgements (incorrectly identifying a pupil as ‘above expectations’ for an aspect of learning is less anomalous on a five-point scale than on a three-point scale).  We also felt that, when the system is used for a summative assessment, the additional points of the scale made more sense from a musical perspective: a pupil can be exceeding our expectations for a child of that age but still have a long way to go before they are far above our expectations.

The radial diagrams that we developed look like this:


The colour-coding strategy is really there just to help us when filling in the sheets.  I have some concern that ‘meeting expectations’ is orange (shouldn’t it be a good thing?) but the traffic-light approach certainly aids clarity when filling in the chart and reading it later.  The PR (Practice Room) box is there to keep an eye on who is where, and the ‘final mark’ box allows us to add up the total marks to get a score out of 25 (but more about that later).  The teacher comments box can be used if we see it as necessary, but we are being clear that we would rather comments were musical.
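
For those who like to see the arithmetic written out, here is a minimal sketch (in Python, purely for illustration – in practice we do all of this on the paper sheets) of how the five marks might be totalled into the ‘final mark’. It assumes that the five symbols map onto the numbers 1 to 5, which is what gives a maximum score of 25.

```python
# Illustrative only: a minimal sketch of totalling the pentagon marks.
# Assumes the symbols map onto 1-5 (-- = 1 up to ++ = 5), consistent with
# five strands giving a maximum 'final mark' of 25.

SCALE = {"--": 1, "-": 2, "=": 3, "+": 4, "++": 5}
STRANDS = ["pitch", "rhythm", "ensemble skills", "contrast", "style"]


def final_mark(marks: dict[str, str]) -> int:
    """Total the five symbolic marks into a score out of 25."""
    missing = [strand for strand in STRANDS if strand not in marks]
    if missing:
        raise ValueError(f"No mark recorded for: {', '.join(missing)}")
    return sum(SCALE[marks[strand]] for strand in STRANDS)


# Example: at expectations for most strands, above for rhythm, below for contrast
print(final_mark({
    "pitch": "=",
    "rhythm": "+",
    "ensemble skills": "=",
    "contrast": "-",
    "style": "=",
}))  # 15
```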

Using this system

When we were first designing this, it was with the understanding that we would have to continue reporting a level every two weeks (for us, that is every eight lessons), so the need to generate levels to serve the school system was a consideration.  With the arrival of Robin Hammerton’s recent blog entry, our Head Teacher has agreed that we can develop a system that allows us to report a level just twice a year, provided that we are using a catalogue of recordings in the manner that Robin describes.  The rest of this blog entry will explain how we found we could use this system for regular reporting of summative assessment, and how we intend to use it with the new strategy agreed with our SLT.

Frequent reporting of summative assessment

Generating a frequent summative assessment from this is a straightforward, but admittedly unsatisfying, process.  Simply ensure that there is a mark recorded for each of the five points, add them up, and you get a mark out of 25.  We experimented with a number of scales for mapping this against a holistic expectation, and some very obvious approaches led to some strange results.  In particular, we found that it was difficult to get a meaningful representation of a child’s work and effort using just a five-point scale at this stage.  As a result, we added a sixth point to the holistic scale, ‘Unacceptable’, which indicates a complete lack of effort as opposed to a child who is struggling.  The scale that we have settled on for our half-term trial looks like this:


Since our school is continuing with National Curriculum levels for another year, we needed to convert this into a level.  For that, we created a chart that is relevant to each year group:


I have always been uncomfortable with using levels in this manner as it is, effectively, treating levels as grades.  It also seems woefully unfair that a Year 7 is stuck being able to achieve no more than a Level 6.  The important thing that we have kept in mind here is that the level that we are reporting is for the system and tracking purposes.  The information that is for the pupil is the catalogue of recordings in Soundation and the information contained on the radial diagram.  This way the assessments that are valuable to the learner perform what Boud would describe as ‘double duty’ (thanks for that reference, Martin!) as assessment data for the school systems.  To me, this is much preferable to an assessment that is valuable to the school systems performing ‘double duty’ for the learner.  A difference that is perhaps subtle but certainly important.

This sort of rapid summative assessment can be produced as often as required but the obvious opportunity is when a student has completed a piece of work.  From a workload point of view, it makes sense to complete parts of the radial diagram whenever you notice a pupil meeting or exceeding your expectations as this leaves less paperwork at ‘assessment time’.
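
To show how the score out of 25 could then be mapped onto the six-point holistic scale, here is another purely illustrative sketch. The cut-off points below are invented placeholders rather than our actual boundaries (those are on the chart above), and ‘Unacceptable’ is treated as a teacher judgement about effort rather than a score threshold.

```python
# Illustrative only: mapping a score out of 25 onto the holistic scale.
# The boundaries here are hypothetical placeholders, not the ones we trialled.

HYPOTHETICAL_BANDS = [  # (minimum score, holistic judgement)
    (22, "far above expectations (++)"),
    (17, "above expectations (+)"),
    (12, "at expectations (=)"),
    (7, "below expectations (-)"),
    (0, "far below expectations (--)"),
]


def holistic_judgement(score: int, no_effort: bool = False) -> str:
    """Map a final mark out of 25 onto a holistic band.

    'Unacceptable' signals a complete lack of effort, so it is a flag set by
    the teacher rather than something derived from the score itself.
    """
    if no_effort:
        return "Unacceptable"
    for minimum, band in HYPOTHETICAL_BANDS:
        if score >= minimum:
            return band
    return HYPOTHETICAL_BANDS[-1][1]


print(holistic_judgement(15))        # at expectations (=)
print(holistic_judgement(4, True))   # Unacceptable
```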

Less frequent summative assessment

With us looking at moving to reporting a level twice a year, we are developing an approach where we only note down what is clearly noticeable at any given point.  We still have a radial diagram for each unit, so that we can track when, where and how achievement happened, but we don’t aim to complete the entire diagram every unit.  This way, a learner can spend a significant amount of time focusing on what they need to develop in order to move on.  It may be important for one child to really focus on accuracy of pitch for several lessons, or to really take their time developing a strong sense of style.  Not having to complete all five strands of the diagram every unit allows for this flexibility.

With both this and the more frequent version, we are quite clear that any results that surprise us should be looked at from a holistic perspective.  If the radial diagram suggests that a child is below expectations but we thought otherwise, then we would review our assessment (probably through departmental moderation) rather than assuming that our assessment system must have been more accurate than our professional judgement (or vice versa).  A surprising result should make us ask ourselves questions; it shouldn’t be a ball and chain trapping us in a judgement that we don’t feel makes musical sense.

Over the course of three units, we gather the best score from each of the five points and then generate a number out of 25 from that.  Then we just follow the same system as in the frequent summative assessment approach.  Ideally, I would like to just report ++, +, =, - or -- but that would need to come from further discussion with our SLT.
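
As a final sketch, and under the same assumed 1–5 mapping as before, the twice-yearly version simply takes the best mark recorded for each strand across the units in the reporting period and totals those:

```python
# Illustrative only: the twice-yearly aggregation, taking the best recorded
# mark per strand across several units and totalling out of 25.
# Strands that were not assessed in a given unit simply contribute nothing
# from that unit.

SCALE = {"--": 1, "-": 2, "=": 3, "+": 4, "++": 5}
STRANDS = ["pitch", "rhythm", "ensemble skills", "contrast", "style"]


def best_over_units(unit_marks: list[dict[str, str]]) -> int:
    """Best recorded mark for each strand across the units, totalled out of 25."""
    total = 0
    for strand in STRANDS:
        recorded = [SCALE[marks[strand]] for marks in unit_marks if strand in marks]
        total += max(recorded) if recorded else 0
    return total


# Example: three units, each focusing on different strands
units = [
    {"pitch": "=", "rhythm": "+", "ensemble skills": "="},
    {"pitch": "+", "contrast": "=", "style": "-"},
    {"rhythm": "=", "ensemble skills": "+", "style": "="},
]
print(best_over_units(units))  # 4 + 4 + 4 + 3 + 3 = 18
```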

Feedback please

We are not fully committed to this system as yet.  A lot of work has gone into developing it but we are eager for this to be right.  I have created a topic on the forum page to discuss this so please feel free to leave comments there or on Twitter  (@johnskelleher).  Any and all comments are more than welcome – I am not precious!

Many thanks to Maria Gilmartin for all of the work that she has put into developing this system with me.