As the debate over evidence-based teaching continues, there appear to be two separate strands emerging:
Strand 1: Teaching should be evidence-based (or evidence-informed).
I certainly have no issue with this, although the view of ‘what works’ is perhaps a secondary debate.
Strand 2: Teachers should also be researchers.
At a superficial level this seems like a pretty good idea – imagine the amount of evidence teachers could gather if they were all carrying out their own research studies within their own schools.
To be honest, it’s more than likely that strand 2 would lead to complete chaos. Let’s face it: there is enough bad educational research out there already, without research-naïve teachers adding to it.
I view educational research through the lens of a psychologist and hold a very similar view to other psychologist-teachers (e.g. @turnfordblog) that (in terms of science) if psychology is in the Dark Ages then Education is in the Stone Age when it comes to research. Thus, I tend to make references to psychology when discussing education and, as a result, I take a positivist view of the research process.
So what’s the problem?
This list is by no means exhaustive, but it does represent at least some of the issues that need to be discussed before strand 2 can be fully realised.
1. Sample Size:
Small sample sizes are common in school-based educational research. The majority of schools in their entirety don’t have enough participants to ensure an acceptable sample size, so carrying out a study using a few classes can only give us some small indication of the effect of any independent variable. The other issue is that we can’t force our pupils to take part in a study, and even when they do volunteer, they must still be given the right to withdraw themselves (or their data) from the study.
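To put some rough numbers on the sample-size problem, here is a back-of-envelope power calculation. It uses the standard normal-approximation formula for a two-sample comparison – a textbook sketch, not anything drawn from the studies discussed here – and needs nothing beyond the Python standard library:

```python
from math import ceil
from statistics import NormalDist  # standard library, no stats package needed

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate pupils needed per group for a two-sample t-test,
    using the normal approximation: n = 2 * (z_alpha/2 + z_beta)^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed significance threshold
    z_beta = z.inv_cdf(power)           # desired statistical power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A 'moderate' effect of d = 0.4 at the conventional 5% level and 80% power:
print(n_per_group(0.4))  # -> 99 pupils per condition
```

Roughly 99 pupils per condition – far more than a single class of 30 can supply, which is exactly why partnering across schools (see below) matters.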
2. Replication:
The replication debate is huge in psychological research at the moment. It has been found that many of our long-standing assumptions about human behaviour are based on studies that simply cannot be replicated. Replication studies are rarely published; null-result studies end up in a dusty cupboard somewhere, or in a folder marked ‘failed studies’. One study doesn’t make a theory, so implementing interventions based on the results of a single teacher conducting a study in a single school with a small sample tells us very little about anything.
3. Generalisability:
Does a study conducted in a middle-class school with a low number of pupils receiving free school meals, a below-average number of ethnic-minority pupils and a low number of special needs pupils tell us anything about pupils in deprived, inner-city or ethnically mixed schools? If research is to be useful then it needs to inform us about learning in general, not just about learning in a particular school (although such data can be useful at a more local level).
4. Bias:
Like it or not, we all want to be ‘right’. Bias is a major problem in psychology and there is no reason to believe it won’t also be an issue for the teacher who is trying to support a hypothesis (or perhaps prove a point). Bias is usually unconscious, but it can also be deliberate.
5. Lack of research training:
I’m a teacher and a Chartered Psychologist. I’m also conducting research as part of a part-time PhD at the University of York. Even though I have a psychology degree, a Master’s in Education, and have attended more research methods workshops and seminars than I care to recall, the process of research still often baffles me. Those teachers whose degrees have not included a substantial amount of science or social science research training are far from equipped to carry out serious research.
6. Analysing results – the use and abuse of statistics:
How would a teacher know if the results they have obtained are significant? Most undergraduates are baffled by statistics, as are many postgraduates and postdocs. A non-British academic recently told me that we Brits don’t do stats very well, mainly because the support isn’t there at undergraduate and postgraduate level. Statistical analysis is confusing and often very time consuming, and even the best statistical software packages can seem like a long-forgotten language. What about p values? Should we trust effect sizes? Is it alright for us to shave off some of our outliers to get an acceptable level of significance? (It’s not, by the way – it’s called p-hacking and it’s very much a no-no.) What about Type 1 and Type 2 errors?
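A tiny illustration of why Type 1 errors matter: the textbook family-wise error rate formula (assuming independent tests – a simplification) shows how quickly ‘significant’ results appear by chance alone when a teacher-researcher tries lots of comparisons:

```python
# Family-wise error rate: the probability of at least one false positive
# when running k independent tests, each at significance level alpha.
def fwer(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

print(round(fwer(1), 3))   # one test: 0.05, as advertised
print(round(fwer(20), 3))  # twenty tests: ~0.64 -- more likely than not
                           # to 'find' an effect where none exists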
What can be done?
I can only offer a few suggestions:
1. Don’t go it alone.
Partner up with other schools to get bigger sample sizes. Partner up with universities or research centres that can give advice on how to carry out research, or involve you in some of their own research.
2. Replicate, replicate, replicate.
Get other teachers in other schools to carry out the same study.
3. Publish/blog results (even null results) – and accept advice/criticism.
Make others aware of what you are doing and take on board the advice offered. Let’s face it: sometimes it’s hard to get past the egos that dominate the Internet. We all have something to say and we don’t like being criticised for it, but if we’re serious about using research to inform our teaching we really need to get over it.
4. Get some training.
CPD is a major issue, and much of it isn’t worth the time and effort (or money) involved. Introductory workshops in research methods are often cheap (and sometimes free), and there are plenty of resources available online (try OpenLearn from the Open University). Linking such CPD to a recognised research qualification would be a great incentive.
It’s perhaps time to move away from the debate about the acceptability of teacher-researchers and try to work out how it can be practically done. Even though there is a great opportunity here for educational research, there is an equally realistic possibility that it could end in disaster and confusion.