Monday, June 2, 2014

National Standards - Moderation - How do we know if we're right?

Since the introduction of National Standards we as a school have undergone PLD, and many a staff meeting and informal conversation, trying to get a united understanding of moderation.  In my opinion, we have made great gains in our understanding of how moderation works and I've watched that understanding grow since its inception in 2010.

My concerns around moderation sit in the 'individual understanding' on a national scale, and this is a concern that I'm sure all teachers and managers share.  While there are exemplars, progressions, ELLPs and illustrations of the standards themselves, how can we be sure our judgements are accurate?  How do teachers remain objective in their opinions and decisions around individual students?  I believe the answer to these questions lies in the collaborative efforts of teachers and managers to gain a global understanding together - through PLD, discussions and plenty of sharing.  It's imperative that everyone uses the resources available, knowing them inside out and back to front, in order to have an understanding that ties all curriculum strands together.

At Tamaki we carry out whole-day syndicate data discussions each term, using these discussions as an opportunity to share students' progress and make comparisons across the school.  Within syndicates, teachers take ownership of all students...not just those in their own classes.  This means teachers sharing data, comparing progressions and reaching a joint understanding of what learning levels look like across the school.  This year we have created a rubric for writing in student speak (collated as a staff), and students use the rubric from years 1 to 8.  Our next step from the development of the rubric is to collate exemplars of our students' work at each level to illustrate the progressions of the rubric.  After assessment, teachers moderate GLOSS and writing to check judgements and talk through their decisions, debating in a non-threatening environment where all that matters is reaching a joint understanding of progressions.  When teachers create OTJs, we gather in syndicates to share samples of work at each level to ensure there is a joint understanding of what the levels 'look' like across the classes.

So while we work hard to have a joint understanding as a staff - and this is ongoing, as understanding still varies due to experience and the introduction of new staff members - how do we know if we've got it right on a national scale?  The Tamaki cluster has writing moderation at the beginning and end of the year to check our joint understanding of judgements based on the e-asTTle writing rubric.  These moderation sessions have been successful, and as a cluster we've seen accuracy and agreement grow through a collective understanding over time.  These sessions are carried out by school management, and I believe they would be even more effective if we involved more teachers as PLD for staff.  As a cluster, we don't collaborate in developing a shared understanding of OTJs.  I think this is an aspect of assessment that needs to be addressed to encourage transparency between schools.  Of course some schools would be hesitant to share their data on this scale, but I believe this type of collaboration would be beneficial to making National Standards work.  After all...the alternative of standardised testing is hardly the path we want to go down.

So if we can come to an understanding in making National Standards work, more should be done to unite teachers, encourage collaboration and provide a platform of trust (not fear) in assessment judgements.  I could keep going into what National Standards identifies as expected levels of achievement and whether these levels are fair and equal to all students...but that's a whole other post!!!  Not exactly comparing APPLES to APPLES...
