Thursday, 4 February 2010

Off to Scotland


Helen Barefoot and I were invited by the HEA to run a workshop on assessment and feedback at an all-day Assessment event at the Robert Gordon University. What follows is a collection of (very quick) thoughts on the day …

Helen and I have run a few workshops together (inside UH), so it was just great to take our work (much of which is guided by our ESCAPE activity) to colleagues outside UH. For various reasons the workshop did not run :-(. Despite our disappointment, we did get to hear some great assessment-related presentations.

Dai Hounsell presented a really grounded keynote and, in fact, covered much of what we were planning to cover too: that assessment is not a new challenge, that good assessment is a planned activity, and that good assessment stimulates learning. We did not need the NSS to get us thinking that assessment is important. Dai shared some really useful slides that I will explore and come back to in a later post. A great start to the day.

A couple of student-perspective presentations followed.
The first described a student-led campaign that successfully introduced a turnaround-time policy for coursework and, interestingly, a policy to provide feedback on examination scripts. The learning gains to be had from providing feedback on examination scripts seem rather limited to me. I’m always banging on about feedback creating consequences, and I’m just not convinced I know what consequences flow, or are able to flow, from feedback on end-of-process, high-stakes assessment tasks. That’s a post for another day. But just to say, I’m off the fence on this one: I just don’t get it. Another delegate did note that his institution had a similar policy and that only 15% of the scripts (with feedback) were picked up. Surely we would be better placed putting our feedback on work that will be picked up and, more importantly, attended to by the students. And relax!

Steve Draper, engaging as ever, had a couple of threads running through his presentation. First was the interesting anomaly that a department was rated 5th against other departments (107 in total) on the overall NSS question, “Overall, I am satisfied with the quality of the course”, and yet the questions relating to feedback were ranked much lower: “Feedback on my work has been prompt” (ranked 54/107), “Feedback on my work has helped me clarify things I did not understand” (ranked 79/107) and “I have received detailed comments on my work” (ranked 101/107). Steve asked us to explore what might be going on. Was there a complex weighting algorithm for all the items on the NSS? Should the individual items of the NSS not sum to the overall score? If not, what was the missing ingredient? Second, Steve separated declarative and procedural learning and pondered where our efforts on providing feedback might prove most effective; that is, might we get more learning from less (or better targeted) feedback?
