Metacognition and Academic Growth

What do we mean by ‘meta-cognition’?

Meta-cognition is the process of actively thinking about our own learning. It’s often referred to as ‘learning skills’ or ‘learning to learn’ and is centered on one’s ability to evaluate and monitor one’s own learning and to readjust as necessary. It also includes the ability to self-regulate one’s own learning in terms of managing motivation.

Meta-cognitive Regulation

This refers to the adjustments people make in order to help them control their own learning and includes:

  • Planning
  • Information management strategies
  • Comprehension monitoring
  • ‘De-bugging’ strategies
  • Evaluating progress towards goals
  • Knowing when and where to use particular strategies for learning and problem solving
  • Knowing how and why to use such strategies
  • Using prior knowledge to plan a strategy for approaching a learning task
  • Taking the necessary steps to:
    • problem solve
    • reflect on and/or evaluate the results
    • modify the approach as needed

Meta-cognitive Knowledge

This relates to what individuals know about themselves as ‘cognitive processors’, what they understand about the different approaches that can be used for learning and problem solving, and what they know about the demands of a particular task.

In my experience, many students are generally unable or unwilling to evaluate their own learning. However, the students who do best are often the ones who can self-evaluate and self-regulate when given the opportunity to do so (for example, through careful consideration of teacher feedback). For this reason I’m going to look at my own practice, specifically the way in which I present feedback and how I expect my students to approach it.

Does it really work?

Over the past few years, teachers have become more concerned with ‘evidence-based’ approaches to teaching rather than relying on untested and often erroneous ones (e.g. Learning Styles and Brain Gym). A great deal of the pressure for evidence-based practice has grown from the grass-roots level through social media (predominantly Twitter), culminating in the ResearchED movement.

The teaching of metacognitive strategies, as well as an awareness of meta-cognition in general, has strong empirical support.

Hattie (2009), in his synthesis of more than 800 meta-analyses of learning interventions, found meta-cognitive strategies to have an effect size* of 0.71, suggesting a high impact on educational achievement.

The Education Endowment Foundation reports similar results, finding that meta-cognitive and self-regulatory strategies can add between 7 and 9 months of additional progress on average.


How should we ‘teach’ meta-cognitive strategies?

If the impact of meta-cognitive strategies is so large, why are students still so poor at self-evaluation and self-regulation? It could be that many schools view meta-cognition as a faddy bolt-on rather than as a highly effective tool for improving students’ outcomes, so the strategies never become embedded in the system. Meta-cognitive skills need to be part of the culture of the school and be employed in every lesson (rather than being taught in isolation). I would also argue that feedback is a major part of the process, and that feedback needs to be detailed, useful and attached to growth goals. The process then becomes a cyclical one that spirals outwards as learning and growth become visible.

The recognition of meta-cognition is particularly interesting, as it feeds so easily into a more joined-up set of initiatives incorporating other evidence-based interventions such as resilience/buoyancy and Mindset.

*Effect size is a measure of the effectiveness of an intervention or strategy, based on the results of meta-analysis (the pooled analysis of several studies in the same area). An effect size of 0.4 or above is considered to be within the ‘zone of desired effects’: the greater the effect size, the more effective the strategy or intervention is judged to be. Note, however, that some meta-analyses are based on far fewer studies than others, which lowers their reliability.
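For readers curious about where such numbers come from, here is a minimal sketch of how one common effect-size measure, Cohen’s d, is calculated: the difference between two group means divided by their pooled standard deviation. The scores below are invented purely for illustration and are not drawn from any real study.

```python
import statistics

def cohens_d(treatment, control):
    """Effect size: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    # statistics.variance gives the sample variance (n - 1 denominator)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical test scores: a class taught meta-cognitive strategies
# versus a comparison class (illustrative numbers only).
with_strategies = [65, 72, 58, 70, 63, 75, 68, 61]
without = [60, 66, 55, 68, 58, 71, 63, 57]

print(round(cohens_d(with_strategies, without), 2))  # → 0.74
```

With these made-up scores the effect size comes out at roughly 0.74, which is close to the 0.71 Hattie reports for meta-cognitive strategies and comfortably inside the ‘zone of desired effects’.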