
Metacognition and Academic Growth

What do we mean by ‘meta-cognition’?

Meta-cognition relates to the process of actively thinking about our own learning. It’s often referred to as ‘learning skills’ or ‘learning to learn’ and is centred on one’s ability to evaluate and monitor one’s own learning, readjusting as necessary through continual self-monitoring. It also includes the ability to self-regulate one’s own learning in terms of managing motivation.

Meta-cognitive Regulation

This refers to the adjustments people make in order to help them control their own learning and includes:

  • Planning
  • Information management strategies
  • Comprehension monitoring
  • ‘Debugging’ strategies
  • Evaluative and progress goals
  • Knowing when and where to use particular strategies for learning and problem solving
  • How and why to use such strategies
  • The use of prior knowledge to plan a strategy for approaching a learning task
  • Taking the necessary steps to:
    • Problem solve
    • Reflect on and/or evaluate the results
    • Modify the approach as needed

Meta-cognitive Knowledge

This relates to what individuals know about themselves as ‘cognitive processors’, what they understand about the different approaches that can be used for learning and problem solving, and their knowledge of the demands of a particular task.

In my experience, many students are generally unable or unwilling to evaluate their own learning. However, the students who do best are often the ones who can self-evaluate and self-regulate when given the opportunity to do so (for example, through careful consideration of teacher feedback). For this reason I’m going to look at my own practice, specifically the way in which I present feedback and how I expect my students to approach it.

Does it really work?

Over the past few years teachers have become more concerned with ‘evidence-based’ approaches to teaching rather than relying on untested and often highly erroneous ones (e.g. Learning Styles and Brain Gym). A great deal of the pressure for evidence-based teaching has grown from a grass-roots level through social media (predominantly Twitter), culminating in the ResearchED movement.

The teaching of meta-cognitive strategies, as well as an awareness of meta-cognition in general, has strong empirical support.

Hattie (2009), in his synthesis of more than 800 meta-analyses of learning interventions, found meta-cognitive strategies to have an effect size* of 0.71, suggesting a high impact on educational achievement.

The Education Endowment Foundation reports similar results, finding that meta-cognitive and self-regulatory strategies can add between seven and nine months’ additional progress on average.


How should we ‘teach’ meta-cognitive strategies?

If the impact of meta-cognitive strategies is so large, why are students still so poor at self-evaluation and self-regulation? It could be that many schools view them as a faddy bolt-on rather than a highly effective tool for improving student outcomes, so the strategies never become embedded in the system. Meta-cognitive skills need to be part of the culture of the school and be employed in every lesson (rather than being taught in isolation). I would also argue that feedback is a major part of the process, and that feedback needs to be detailed, useful and attached to growth goals. The process then becomes a cyclical one that spirals outwards as learning and growth become visible.

The recognition of meta-cognition is particularly interesting as it so easily feeds into a more joined-up set of initiatives that incorporate other evidence-based interventions, such as resilience/buoyancy and Mindset.

*Effect size is a measure of the effectiveness of an intervention or strategy based on the results of meta-analysis (the pooled analysis of several studies in the same area). An effect size of 0.4 or above is considered to be within the ‘zone of desired effects’: the greater the effect size, the more effective the strategy or intervention is taken to be. Note, however, that some meta-analyses are based on far fewer studies than others, leading to lower reliability.
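
As an aside, for anyone curious about where such numbers come from, below is a minimal sketch of the most common effect size measure, Cohen’s d, using made-up scores for two small groups. (Hattie’s figures are aggregated across many studies, not computed from a single pair of classes like this.)

```python
import numpy as np

# Made-up test scores for two groups of pupils (illustrative only).
control = np.array([52.0, 48.0, 55.0, 60.0, 47.0, 58.0])
intervention = np.array([61.0, 57.0, 66.0, 63.0, 55.0, 68.0])

# Cohen's d: the difference in group means divided by the pooled
# standard deviation, putting the difference on a common scale.
pooled_sd = np.sqrt((control.var(ddof=1) + intervention.var(ddof=1)) / 2)
d = (intervention.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```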


Can teachers really be researchers?

As the debate over evidence-based teaching continues, there appear to be two separate strands emerging:

Strand 1: Teaching should be evidence-based (or evidence-informed).

I certainly have no issue with this, although what counts as ‘what works’ is perhaps a secondary debate.

Strand 2: Teachers should also be researchers.

At a superficial level this seems like a pretty good idea: imagine the amount of evidence teachers could gather if they were all carrying out their own research studies within their own schools.

To be honest, it’s more than likely that strand 2 would lead to complete chaos. Let’s face it: there is enough bad educational research out there already without research-naïve teachers adding to it.

I view educational research through the lens of a psychologist and hold a view similar to that of other psychologist-teachers (e.g. @turnfordblog): in scientific terms, if psychology is in the Dark Ages then education is in the Stone Age when it comes to research. I therefore tend to refer to psychology when discussing education and, as a result, take a positivist view of the research process.

So what’s the problem?

This list is by no means exhaustive, but it does represent at least some of the issues that need to be discussed before strand 2 can be fully realised.

1. Sample Size:

Small sample sizes are common in school-based educational research. Most schools, even in their entirety, don’t have enough potential participants to ensure an acceptable sample size, so carrying out a study using a few classes can give only a small indication of the effect of any independent variable. The other issue is that we can’t force our pupils to take part in a study, and even when they do volunteer they must still be given the right to withdraw themselves (or their data) from the study.
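
To put some rough numbers on this, here is a sketch using statsmodels’ power calculator; the target effect size of 0.4 is just an illustrative medium-sized effect, not a claim about any particular intervention:

```python
from statsmodels.stats.power import TTestIndPower

# How many pupils per group would be needed to detect a medium
# effect (d = 0.4) at the conventional thresholds of alpha = 0.05
# and 80% power, in a simple two-group comparison?
n_per_group = TTestIndPower().solve_power(
    effect_size=0.4, alpha=0.05, power=0.8
)
print(f"Pupils needed per group: {n_per_group:.0f}")  # roughly 100
```

Two classes of thirty don’t get close to that, which is why single-school studies can only ever hint at an effect.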

2. Replication:

The replication debate is huge in psychological research at the moment. It has been found that many of our long-standing assumptions about human behaviour are based on studies that simply cannot be replicated. Replication studies are rarely published, and null results end up in a dusty cupboard somewhere, or in a folder marked ‘failed studies’. One study doesn’t make a theory, so implementing interventions based on the results of a single teacher conducting a study in a single school with a small sample tells us very little about anything.
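
A toy simulation makes the point. If we run many small ‘studies’ in which there is no real effect at all, a steady trickle of them will still come out ‘significant’ by chance, and if only those get written up, the literature fills with effects that won’t replicate (all numbers below are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_per_class = 1000, 25  # e.g. one class per condition

false_positives = 0
for _ in range(n_studies):
    # No real effect: both groups are drawn from the same
    # distribution of test scores.
    control = rng.normal(50, 10, n_per_class)
    intervention = rng.normal(50, 10, n_per_class)
    _, p = stats.ttest_ind(intervention, control)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' findings despite no real effect: "
      f"{false_positives} out of {n_studies}")  # around 5%
```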

3. Generalisability:

Does a study conducted in a middle-class school with a low number of students receiving free school meals, a below-average number of ethnic-minority pupils and a low number of special needs pupils tell us anything about pupils in deprived, inner-city or ethnically-mixed schools? If research is to be useful then it needs to inform us about learning in general, not just about learning in a particular school (although such data can be useful at a more local level).

4. Bias:

Accept it or not, we all want to be ‘right’. Bias is a major problem in psychology and there is no reason to believe it won’t also be an issue for a teacher trying to support a hypothesis (or perhaps prove a point). Bias is usually unconscious, but it can also be deliberate.

5. Lack of research training:

I’m a teacher and a Chartered Psychologist. I’m also conducting research as part of a part-time PhD at the University of York. Even though I have a psychology degree and a Master’s in Education, and have attended more research methods workshops and seminars than I care to recall, the process of research still often baffles me. Teachers whose degrees have not included a substantial amount of science or social science research training are far from equipped to carry out serious research.

6. Analysing results – the use and abuse of statistics:

How would a teacher know whether the results they have obtained are significant? Most undergraduates are baffled by statistics, as are many postgraduates and post-docs. A non-British academic recently told me that we Brits don’t do stats very well, mainly because the support isn’t there at undergraduate and postgraduate level. Statistical analysis is confusing and often very time-consuming, and even the best statistical software packages can seem like a long-forgotten language. What about p values? Should we trust effect sizes? Is it all right to shave off some of our outliers to get an acceptable level of significance? (It’s not, by the way: that’s called p-hacking and it’s very much a no-no.) What about Type 1 and Type 2 errors?
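
By way of illustration, here is what a basic analysis might look like for a hypothetical two-class comparison (all scores invented). Note that the p value on its own says nothing about how large, or how educationally meaningful, the difference is, which is why an effect size should be reported alongside it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical end-of-term scores for two classes of 30.
control = rng.normal(55, 12, 30)
intervention = rng.normal(62, 12, 30)

t, p = stats.ttest_ind(intervention, control)

# Effect size (Cohen's d) from the pooled standard deviation.
pooled_sd = np.sqrt((control.var(ddof=1) + intervention.var(ddof=1)) / 2)
d = (intervention.mean() - control.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
# A Type 1 error is a false positive (p < 0.05 by chance alone);
# a Type 2 error is missing a real effect, usually because the
# sample was too small (underpowered).
```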

What can be done?

I can only offer a few suggestions:

1. Don’t go it alone.

Partner up with other schools to get bigger sample sizes. Partner up with universities or research centres that can give advice on how to carry out research, or involve you in some of their own research.

2. Replicate, replicate, replicate.

Get other teachers in other schools to carry out the same study.

3. Publish/blog results (even null results) – and accept advice/criticism.

Make others aware of what you are doing and take on board the advice offered. Let’s face it: sometimes it’s hard to get past the egos that dominate the Internet. We all have something to say and we don’t like being criticised for it, but if we’re serious about using research to inform our teaching we really need to get over it.

4. Get some training.

CPD is a major issue, and most isn’t worth the time and effort (or money) involved. Introductory workshops in research methods are often cheap (and sometimes free), and there are plenty of resources available online (try OpenLearn from the Open University). Linking such CPD to a recognised research qualification would be a great incentive.

It’s perhaps time to move away from the debate about the acceptability of teacher-researchers and work out how it can be done in practice. There is a great opportunity here for educational research, but an equally real possibility that it could end in disaster and confusion.