I have a confession to make. Despite this being my third ResearchEd event, I am not a research lead (don't tell Tom Bennett), yet I keep crashing these research lead events because they are just so good that I know they will benefit me. When I started my master's a year or so ago, I discovered a passion for doing research (and it occurred to me that my education truly had been wasted on my younger self). It awoke in me an interest in reading, writing, researching and hypothesising. Then I had to stop my master's studies, and I found ResearchEd instead, a movement trying to find ways in which teachers, educators and everyone in schools can become research engaged.
And I think this is important. I am not sure I am in a context where suggesting that a research lead be appointed would be approved. And yet. I still feel the desire to pursue this, to find out more about it and to read the work of other research leads. It is interesting, inspiring and exciting, and, to paraphrase Tom, it is a grassroots movement that reflects what teachers seem to want.
The day began with Daniel Muijs talking about evaluating teacher performance. Muijs was engaging and thought provoking, and he drew the audience into discussing the four main measures of teacher effectiveness: student outcomes, value added measures, classroom observation and student feedback. On the subject of using student outcomes, he outlined an approach which many schools use and which is in many ways a positive thing: if all we look at is the outcome, then teachers have true autonomy in deciding on the methods that get them there. However, it is important to define what we actually mean by outcomes. What are we measuring? Many of the desirable outcomes of education are not easy to measure objectively, so what we actually mean by student outcomes is cognitive outcomes, and what we mean by cognitive outcomes is test results and exam grades. These seem like a good measure: they are reliably measured, they reflect the desired outcomes of education and they are directly influenced by teaching. Yet as a measure of teacher effectiveness they are problematic, as they fail to take into account factors such as social background, gender, ability and prior attainment. This led the discussion onto a potential alternative: value added, a measure that was previously used to judge school performance. Value added measures progress over time, negates the impact of prior attainment, controls for social factors and is objective, in that it is calculated using a statistical model. Indeed, in a number of countries value added is the measure used to evaluate teacher performance. However, there are also a number of problems with this approach: it demands extensive testing, it could be open to gaming of the system, and there are issues with stability over time and with confidence levels. So it is good, but complicated, and certainly not the complete answer that education is looking for.
At this point Muijs turned his attention away from data and onto classroom observation, the most common measure of teacher effectiveness. There are lots of advantages to using lesson observation: it is under the control of the classroom teacher, it has both formative and summative elements, it gives immediate results, it can be reliable and it occurs at school level. The problem is that not everything is observable (learning is invisible), and the things that are observable are open to bias and issues of reliability, as well as to changes in behaviour and the fact that observations do not always reflect typicality. There is hope for lesson observation as an approach, though. Research suggests reliability can be improved through proper training of observers, the use of valid observation schedules, sufficient frequency of observation (between six and twelve observations is recommended) and an awareness of unintended consequences. The final measure discussed by Muijs was student feedback, for example in the form of student questionnaires. This could be a really useful approach: after all, students are the most direct observers of teachers, it is a cheap and convenient method, and it correlates strongly with external observations, suggesting it is a good method. However, as with all the other methods, there are concerns: the age of the student is a potential issue, and there is a risk of bias and of perverse incentives. This left us with the suggestion put forward by the MET project that a balanced approach combining value added, lesson observation and student feedback is best. However, the correlation between the measures is modest, it is an expensive approach to take and there is still unexplained variance; as one member of the audience pointed out, three minuses do not make a plus. So where did this leave us? Pretty much exactly where Muijs predicted we would be. At the start of his talk he warned us that if we were hoping to leave with answers, we might be disappointed.
He did leave us with the following conclusion, though: evaluating teacher performance is not straightforward if the system is to be reliable and fair, and no one method will work on its own. A balanced approach needs to sit within a broader framework, but that framework should probably include observation and student feedback, as these are useful components. This was a really interesting talk. Did I leave with any answers? No. Was I disappointed? Absolutely not.
The second session I attended was about a web resource, http://www.researchrichschools.org.uk/. The session was really interesting and gave us the opportunity to explore the website, which is full of useful resources, helpful starting points and a self-audit tool. The audit made me feel slightly less fraudulent about being in attendance, as some of the work I am currently doing would suggest that I am in the emerging category of a research school. Honestly, the best thing you can do is go and have a play with the website. It looks really good.
My third session was with Andy Tharby, who did a great job of presenting on behalf of himself and Brian Marsh, who was sadly unable to attend. He discussed the role he has as 'research lead' and how he has rolled it out in his school. It was really nice to hear him talk about the success of the edu-book club, which is something I am hoping to introduce in my current role. It was also interesting to hear about the role Brian has played in supporting staff who are completing research projects: getting started, identifying literature, developing action research and so on. More important, though, was the dynamic of the relationship, with neither party positioned as the expert; rather, it is a relationship of reciprocity.
Next up was Lia Commissar, who talked about using insights from neuroscience to improve education. She shared a number of research articles and a really neat web resource designed to help teachers separate fact from fiction when it comes to claims made on the basis of neuroscience. This was an informative session which left me with lots of things I would like to read further. If you are interested in knowing more, this seems like a good starting point: https://thinkneuroscience.wordpress.com/
The final two sessions were really interesting and I am not sure I have quite finished processing them yet. I suspect what will be a brief summary here may well become a far more developed thought at a later point. James Mannion talked to us about praxis, a term he feels is better suited than many others to this discussion about research-engaged teaching. He drew on the work of McIntyre and argued that the way to bridge the gap between teachers' everyday practice and codified research knowledge is for teachers to engage in systematic research inquiry as a fundamental tenet of their initial and continuing professional development. He then highlighted that there are lots of good examples of this, such as action research, lesson study and practitioner-led research: all variations on the same thing, but phrased in over-complicated language that creates a barrier in the same way that lack of access to journals creates a barrier. For that reason he has proposed a simpler concept that he feels encapsulates what all of these approaches are trying to achieve: praxis. He defines this, following Freire, as 'reflection and action upon the world in order to transform it', and summarises the process of teacher engagement with research as theory, followed by reflection, followed by action, followed by reflection. He then introduced us to praxis-education.com, a platform he has designed to enable teachers to share what they are doing and to gain support from peers, which he sees as the solution to a number of the barriers facing schools. I liked this and thought it may have some scope for those of us wanting to dip a toe into the water, so I look forward to exploring the platform and seeing what it can do. Nick Rose, in the final session of the day, identified a number of developmental tools for teacher inquiry.
He discussed some of the strategies currently used to provide feedback to teachers, such as observations, student data, student surveys, teacher self-reports and work scrutiny: all useful measures, but all problematic in their own way. He drew on Goodhart's Law: 'when a measure becomes a target, it ceases to be a good measure'. Rose argued that teacher inquiry will only be effective if evaluation of outcomes is rigorous, which means we need formative assessment tools. Some of the examples he drew on included coaching logs, formative lesson observations, the use of student data in the form of specific analysis of individual questions and topics, classroom climate logs (drawing on the work of Hayden 2014) and finally student surveys, for which he introduced us to this piece of work from the MET project: http://www.metproject.org/downloads/Asking_Students_Summary_Doc.pdf. He then discussed some ideas for structured observation techniques, such as time and event sampling and content analysis, as an alternative to work scrutinies. I liked a lot of these ideas, but something Rose said really stuck with me: even a good instrument, if implemented poorly, will produce bad information. It is really important to find ways of measuring the impact of inquiry, but this needs to be developmental, appropriate to context and implemented well. There was a lot in this session that I want to read more on and mull over for a time, but I think this could be a bit of a springboard for me.
Unfortunately I missed the final session, but this was another excellent event that gave me lots to think about and fuelled my own interest in research even further. To Tom and Helene, the organisers: I am in awe!