Monthly Archives: May 2014

Motivation, Learning and Memory (Part 2)

Do monetary (extrinsic) rewards enhance motivation and/or memory consolidation?

I’ve been very clear in the past about my issues with school incentive schemes and how they don’t necessarily produce the intended results (here and here). That said, research conducted by Kou Murayama (University of Reading) and Christof Kuhbandner (University of Munich) suggests that the cognitive and neurological processes involved are even more complicated than I thought.

The assumption that extrinsic rewards are an effective and reliable way to enhance motivation in students remains very strong, with several companies now making a considerable profit through selling their schemes to schools.

But does money (or Xbox games, iTunes vouchers, etc.) really lead to greater motivation and performance?

The answer is far from clear. Neurological findings appear to suggest they do. It has been discovered that monetary rewards promote memory consolidation by activating the mesolimbic reward system, which increases dopamine release in the hippocampal memory system.

But there are problems with this…

Studies have found that hippocampus-dependent memory consolidation requires an extended period of time to complete, meaning that the effects of money on memory should manifest themselves only after some time has elapsed. However, very little research has been conducted into the impact of money at different time points (for example, immediately after encoding or after a delay).

Another problem is highlighted by motivational psychology. Psychological studies have found that monetary rewards can actually undermine task engagement, especially for ‘interesting’ tasks. The generally held view is that extrinsic rewards can ‘crowd out’ the intrinsic value of the interesting task – this process is known as the ‘undermining effect’.

What’s interesting to note here is that the ‘undermining effect’ only seems to occur when the task is interesting.

This is pretty much what Murayama and Kuhbandner found. Participants were divided into two groups (a money group and a no-money group) and presented with a list of trivia questions. Some of the questions had been classed as uninteresting (e.g. “What is the name of the author of the book 1984?”) while others had been classed as interesting (e.g. “What is the only consumable food that never spoils?”) – the classifications of interesting and uninteresting had been decided by a panel of independent judges.

Both groups answered the trivia questions. This was followed by an immediate memory test and then a delayed memory test one week later.

…they discovered:

  • Monetary rewards helped memory only after a delay
  • Monetary rewards helped memory only when the material was uninteresting
  • Monetary rewards had little impact on memory when the material was considered interesting

…and now for some neuroscience:

The striatum is a subcortical part of the forebrain. It is thought to be involved in executive functions and working memory, but it is also responsible for reward-induced memory consolidation. fMRI studies conducted by Murayama et al. (2010) found that extrinsic rewards appeared to dampen down activation of the striatum, but only when the task was considered to be interesting. This would suggest that paying someone to do a task they love could actually make them worse at it!

Striatum – active and inactive

Implications.

It seems odd that we could actually negatively affect the learning of students through extrinsic rewards, but this is (in part) what this research is suggesting. If the student is genuinely interested in the subject area, extrinsic rewards could actually prevent the learning from taking place because the area of the brain needed to induce memory consolidation has been dampened down by the extrinsic reward. This has been posited for some time (for example, as far back as 1973, Stanford psychologist Mark Lepper discovered that rewarding young children for something they loved to do actually led to a reduction in motivation within two weeks of the implementation of the incentive scheme). With the evidence from brain scans the hypothesis is strengthened further.

On the positive side, extrinsic rewards work for the boring stuff – just not straight away.

The problem is that not all students find the same things interesting, and some are motivated by factors unrelated to extrinsic rewards. There is also an ecological validity issue here (as there is with all laboratory-based experiments), as well as the continuing discussions surrounding the interpretation of brain scans.

Nevertheless, all this does suggest that we should be cautious before we begin to employ extrinsic rewards.

References:

Murayama, K., & Kuhbandner, C. (2011). Money enhances memory consolidation – but only for boring material. Cognition, 119, 120-124.

Murayama, K., Matsumoto, M., Izuma, K., & Matsumoto, K. (2010). Neural basis of the undermining effect of monetary reward on intrinsic motivation. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1013305107

Motivation, Learning and Memory (Part 1).

Why do we remember the stories we read in comics but forget what we learned in school?

I recently attended a presentation (organised by the Psychology in Education Research Centre, University of York) by Kou Murayama, a researcher at the University of Reading. Dr Murayama uses a range of research methods, including behavioural experiments, longitudinal studies and neuro-imaging to investigate, among other things, the link between motivation and learning.

As Murayama pointed out, students often recall a great deal about topics that interest them but are unable to do the same with topics related to school – Murayama used the example of learning Japanese history and spoke about how, at school, he would memorise the entire textbook in order to pass his exams. That information (or at least most of it) is now forgotten, unlike the stories from his favourite Japanese comics, which will remain with him forever.

It has long been proposed by researchers including Edward Deci, Mark Lepper and Carol Dweck that motivation can be viewed as either intrinsic or extrinsic (for an excellent introduction to this I would highly recommend ‘Drive’ by Dan Pink). It has also been understood for many years that interest and curiosity play a key role in the consolidation of learning, often leading to what Mihaly Csikszentmihalyi calls ‘flow’.

Goal setting can also be described in terms of intrinsic/extrinsic motivators:

Mastery Goals (intrinsic) – goals/striving based on personal development (e.g. “my goal is to develop my knowledge”)

Performance Goals (ego/extrinsic) – goals/striving focusing on the demonstration of normative abilities (e.g. “my goal is to beat other people”)

Both approaches can facilitate elaborative learning processes but it appears that these processes are different for each type of goal. Because mastery-approach goals are linked to curiosity, exploration and an interest-based focus on learning they may facilitate a broad scope of attention beyond the target items. Essentially, mastery-orientated goals lead to greater long-term consolidation of learning while performance goals lead to only short-term learning.

In one particular study, Murayama asked a group of university students to learn a list of words and then carried out an immediate recall test. They were then asked to carry out another recall test a week later. The students had, however, been split into two groups that differed in the instructions they received. One group was given the following instructions:

If you work on this task with the intention to develop your ability, you can develop your competence

The second group were given the following instructions:

The aim of this task is to measure your cognitive ability in comparison with other university students

(The first instruction represents the mastery-goal condition; the second represents the performance goal condition)

There was very little difference between the scores for the first recall test, suggesting that the instructions had little impact on short-term learning. However, when tested a week later, the mastery goal condition produced a significantly higher recall rate than the performance goal condition.

Of course, learning material within an experimental situation such as this reduces the study’s ecological validity due to its artificial nature. Neither does this study tell us anything about learning over the longer term or within specific classroom settings. However, it does allow us to make strong causal inferences about the effects of different types of motivators.

The greatest strength of this research (and it is only a small sample of the huge volume of research Murayama has produced) is that it supports the findings of other researchers such as Putwain and Dweck (who I have written about before), adding to a growing literature on academic motivation that supports the view that intrinsic motivators are more powerful than extrinsic ones. Not only that, this kind of research also suggests that cognitive functions like memory are influenced by factors such as motivation, interest and boredom. It also supports the Dweckian view that implicit theories of intelligence (i.e. ‘Mindset’) can impact heavily on motivation.

References:

Murayama, K., & Elliot, A.J. (2011). Achievement motivation and memory: Achievement Goals differentially influence immediate and delayed remember-know recognition memory. Personality and Social Psychology Bulletin, 37, 1339-1348.

Questioning the stability of academic buoyancy.

Back in October I conducted a small-scale exploratory study into three constructs (academic self-concept, academic buoyancy and implicit theories of intelligence). You can read the details here. A few weeks ago I asked the same students to complete the questionnaires again to see whether these constructs remain stable over time. I was particularly interested in academic buoyancy (day-to-day resilience) due to the forthcoming AS exams. What I wanted to confirm was that those students who considered themselves resilient at time 1 (October 2013) still considered themselves resilient at time 2 (May 2014). This would be measured using the Academic Buoyancy Scale (Martin and Marsh, 2007), a four-item measure of academic buoyancy (AB) that has proved reliable over time and within different settings.

Let’s get some of the problems with the ‘study’ out of the way now.

At time 1, the sample consisted of 41 Year 12 students. At time 2, due to a number of factors (including subject/school drop-out and a lower volunteer rate), this had dropped to 27. The final sample is therefore small and far from representative – predominantly white, middle-class and with a higher percentage of female participants.

However, as this was an exploratory study, I was looking for general patterns needed to establish possible further avenues of investigation.

Ethical Issues.

The study was conducted in line with the ethical procedures of the University of York. Participants were volunteers and gave informed written consent (all participants were over the age of 16). They had the right to withdraw from the study at any time (including the withdrawal of their data).

What did the data show?

Data analysis was conducted using the R statistical package. The t-test found a significant difference between AB at time 1 and AB at time 2 (p < 0.01), and further analysis found an effect size of 0.675. By Cohen’s (1988) conventions this is a medium-to-large effect, so the difference between time 1 and time 2 appears to be substantial as well as statistically significant (and we can be reasonably confident that timing was a major factor).
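
For anyone curious about what that kind of analysis looks like in practice, here is a minimal sketch in R of a paired comparison with an effect size. The numbers are simulated and the object names (ab_t1, ab_t2) are my own invention, so treat it as an illustration rather than the actual analysis script:

    # Illustrative sketch only – simulated scores standing in for the real
    # questionnaire data; 'ab_t1' and 'ab_t2' are hypothetical object names.
    set.seed(1)
    n <- 27
    ab_t1 <- rnorm(n, mean = 5.2, sd = 1.0)   # AB scores, time 1 (Oct 2013)
    ab_t2 <- rnorm(n, mean = 4.5, sd = 1.0)   # AB scores, time 2 (May 2014)

    # Paired t-test: the same students measured at two time points
    t.test(ab_t2, ab_t1, paired = TRUE)

    # One common effect size for paired data: mean difference / SD of differences
    d <- mean(ab_t2 - ab_t1) / sd(ab_t2 - ab_t1)
    d

Which test and which effect size variant are appropriate depends on the design, but the point is that both the p value and the effect size fall out of a few lines of R.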

What does all this mean?

The results would suggest that AB isn’t stable and is affected by other factors. The timing of the second data collection (a week before the start of the AS exams) could play a role in the difference between the two sets of scores, raising the question: do students feel less confident about their abilities at different times? Outcome measures (in the form of AS results) can be examined in August and could (but only ‘could’) yield more information.

Where now?

The plan now is to use experience-sampling methods (ESM) to collect data on a number of factors ‘as they happen’. The problem with much of the research into academic buoyancy is that participants are asked to complete measures in isolation (e.g. “I am good at dealing with setbacks”). ESM allows participants to respond to these measures in a more realistic, moment-by-moment way via electronic ‘prompts’ sent to mobile devices. ESM tends to produce large data sets, depending on the number of prompts and the length of the study, so sample sizes can be smaller (and, for practical reasons, need to be). An additional possibility would be to supplement the ESM data with an end-of-day/end-of-week questionnaire to investigate the difference between immediate and retrospective self-assessments.

What’s the point?

Emotion appears to impact on learning. Research has suggested that factors such as self-concept, boredom, anxiety and resilience can have both positive and negative effects on academic outcomes, as well as on cognitive functions like attention. Understanding the nature of these factors could help to develop interventions to stabilise some of them. Emotion impacts on cognition: stress, for example, can heighten recall up to a point, but too much anxiety leads to inaccurate recall. The so-called Yerkes-Dodson law suggests that performance increases as physiological and mental arousal increases, up to an optimum level, after which cognitive functions begin to decline. Although the Yerkes-Dodson law is somewhat dated, more recent research appears to support its validity.

In a system where more and more of our young people are suffering from heightened levels of anxiety (the reasons for which are highly debatable), examining their daily classroom lives can provide rich data on how, when and why they do and do not learn.

Teaching, neuroscience and the teenage brain.

There is a great deal of debate at the moment about neuroscience and its potential within educational settings. The Association of Teachers and Lecturers (ATL) have even debated the possibility that neuroscience should become part of teacher training, partly inspired by the recent interest teachers have shown towards evidence-based teaching and learning. Some of the most fascinating research to come from neuroscience over the past few years has been from neuroscientists and cognitive psychologists like Sarah-Jayne Blakemore and her colleagues at University College London. Blakemore and her team have spent a great deal of time looking at the way the teenage brain develops in comparison to the brains of younger children and adults. They use a technique known as Magnetic Resonance Imaging (or MRI) in order to examine the inner workings of the living human brain. Before the introduction of MRI, the only way psychologists and neuroscientists could investigate brains without surgery was through post-mortems of the recently deceased, so the main advantage of MRI is that researchers can now study living brains while they are in the process of remembering, deliberating and making decisions. It was generally considered that the crucial period for brain development was the first three years of life and, certainly, there are many major changes taking place during this period, changes that include the growth of specialist cells known as neurons.

Astonishingly, the adult human brain contains about 80 to 100 billion neurons (just as interesting is that the brain at birth contains only slightly fewer), but these neurons are only part of a much bigger story. Even before birth the majority of the critically important aspects of the brain are already in place, having begun to develop during the first weeks of gestation. By the seventh month of gestation pretty much all of the neurons that will make up the mature brain have already been formed. The most significant transformation during the early years of life is not the neurons themselves but rather the wiring of connections between them, known as synapses. Synapses are the means by which neurons communicate with each other, either through electrical impulses or through special chemicals known as neurotransmitters. These neurotransmitters can have a major impact on our behaviour and emotional state; for example, low levels of the neurotransmitter serotonin have been linked to depression, which is why antidepressants known as Selective Serotonin Reuptake Inhibitors (or SSRIs), like Prozac, have proved highly successful in the treatment of depression and related conditions. The neurotransmitter dopamine has been linked to other psychological disorders including schizophrenia; anti-psychotic drugs help to regulate dopamine levels and appear to successfully treat the symptoms related to such conditions. Neurons don’t touch each other, so information (electrical or chemical) is released by one neuron and received by another across a gap known as the synaptic cleft (a process known as synaptic transmission). As we learn new things, be it reading, writing or riding a bike, new connections between neurons are made, and the more often the activity is carried out the stronger the connections become. This is why the more we repeat a procedure the easier it becomes, so that in some cases, such as driving the car to work and back each day, our actions become so automatic that we often forget having carried them out.

This increase in connections during the early years of life is called synaptogenesis and can last for several months, depending on the species of animal. Astonishingly, the number of connections in the young brain is so vast that synaptogenesis is followed by a period in which many unused connections are eliminated through a process known as synaptic pruning, which continues for a number of years. Once the process is complete the density of the connections will have reached adult levels. Studies conducted on monkeys have found that this density declines to adult levels at around three years, the point at which the monkeys reach sexual maturity.

Of course, monkeys aren’t humans and it would be highly erroneous to suggest that the development of a human infant mirrors that of other primates. Because the monkey develops faster, reaching sexual maturity at around three years of age, we must assume that the human infant develops somewhat more slowly. This view is surprisingly recent; prior to this it was assumed that humans, like monkeys, had reached maturity in terms of brain structure in early infancy. Unfortunately this error led to the view that infants reach a critical stage in development, after which they might not be able to learn certain skills vital to human growth, such as language. A more probable situation is that infants pass through a sensitive period during which certain aspects of learning are easier to achieve. Studies of feral children, those children who spend the first few years of life raised in the absence of human contact, have discovered that even if they fail to master language in early infancy, the skill can be acquired later in life – albeit with extreme difficulty. In fact, rather than brain development reaching full term in early childhood, Blakemore has discovered that teenage brains are still developing; it’s just that development is only taking place in certain brain regions. This has actually been known since the 1960s, but it is only now that researchers have access to fMRI scanners that they can support these views with evidence. The human brain matures at different rates; for example, the visual cortex should be in place by about ten months. After about this time synaptic density declines (unused connections are cut away through synaptic pruning), reaching adult levels by about ten years old. However, development of the frontal cortex appears to last well into the teenage years and the pruning process is much slower. In fact, synaptic density there doesn’t peak until about the age of eleven and the pruning process continues into the early twenties. This late stage of brain development may go some way to explaining teen behaviour but, before we get excited, there are a great many other factors to take into consideration.

Essentially, there appear to be two major changes that occur before and after puberty. During this period the actual volume of brain tissue appears to remain stable; however, there is a significant increase in the amount of white matter in the frontal cortex. As already explained, neurons continue to develop and new connections are formed during this period. The neurons themselves are busy building up a layer of fatty tissue called myelin along the axon of the cell. The axon is responsible for carrying electrical impulses away from the cell body of the neuron, down the shaft of the axon towards the terminals, allowing one cell to communicate with another. Myelin acts as an insulator and increases the speed of the electrical transmission between neurons (so it might be related to intelligence – hence the omega-3 hype from a few years back). The fatty tissue of the myelin shows up white under a microscope (hence ‘white matter’), and the increase suggests that the speed at which neurons communicate with each other rises significantly after puberty. The second major change was first identified by Peter Huttenlocher of the University of Chicago. Brain development in children leads to a major increase in connections (synaptogenesis) before puberty, followed by a major decrease in the density of synapses after puberty. This appears to support other studies that have concluded that while unused connections are pruned, those that are used are strengthened. This appears to suggest that teenagers go through a process of brain fine-tuning in the frontal cortex throughout the teenage years.

The frontal cortex (literally the part of the brain at the front of the skull) is the home of what cognitive psychologists and neuroscientists call executive functions. These executive functions are involved in a number of activities, including our ability to anticipate the consequences of our own actions, our capacity to decide between good and bad actions and the ability to suppress socially unacceptable behaviour. It is also concerned with what is known as social cognition – the way in which we co-operate and communicate with others so that we can successfully exist with members of our own species. The frontal cortex also allows us to modify our emotions so that they fit within socially accepted norms. Could this later stage of brain development explain why some teenagers can become so difficult during this period of rapid and complex change? American psychologist Mike Bradley seems to think so. Bradley has even gone so far as to suggest that adolescence is a form of mental illness caused by the immature yet rapidly developing state of the teenage brain. While many would pour scorn on Bradley’s suggestion, it does appear that something is occurring in the teenage brain that compels teenagers to behave in a certain manner, a manner that many adults might view as unacceptable.

So what does all this really mean to parents, teachers and other adults who work with teenagers? The research is all well and good but unless it can help us to help teenagers (or at least begin to appreciate the huge changes taking place within the context of educating children) knowing what is happening in the brains of our teens is of little use. Additionally, many neuroscientists are still unsure of how their discoveries can impact on education and learning. Blakemore’s research would suggest that teaching teenagers is even more complex than we currently believe because of the way the brain is continuing to develop and its impact on executive functioning. This doesn’t mean that we should reject neuroscience (I was completely taken with Blakemore’s research when I first read it a few years ago and became even more so when I attended her talks) but it does suggest caution.

Blakemore would herself admit that there is a great deal of uncertainty about how we can use this research to inform teaching practice – but that doesn’t mean we shouldn’t at least investigate some of the possibilities.

Is Daniel Willingham the new Mind Gym?

“…don’t follow leaders” (Bob Dylan – Subterranean Homesick Blues)

Over the past week or so I’ve been asked several times what I think of Daniel Willingham’s book Why Don’t Students Like School? With some embarrassment I’ve been forced to inform the inquisitors that I hadn’t actually read it. I knew much of its contents, of course, seeing as pretty much every educational blogger in the UK has reviewed, disseminated and debated it over the past year or so. In fact, its influence on teachers in the UK has been nothing short of phenomenal, so over the past few days I’ve been making my way through the book to see what all the fuss is about. Just to note, this is not a review of the book (others have done a much more thorough job of that than I can); it’s really just a few thoughts strung together concerning its success in the UK.

I have to admit that, despite being involved in either studying or teaching psychology for the best part of two decades, I hadn’t even heard of Willingham until a little over a year ago, so when his name kept cropping up on Twitter and on educational blogs I thought I must have been sleepwalking through the last twenty years (my undergraduate degree is in psychology and much of my MEd was related to cognitive impairments in learning).

I found it refreshing that teachers were beginning to take an interest in cognitive psychology and educational research, yet I was slightly baffled as to why Willingham had become the poster boy for this burgeoning grassroots movement. I think I now understand.

Much of Why Don’t Students Like School? would be familiar to most psychology graduates who studied topics such as memory and problem solving as part of their undergraduate degrees. I’ll admit to skipping a great deal of the book as I quickly realised I was returning to Karl Duncker’s candle problem and the Tower of Hanoi. Some of it will also be familiar to anyone with an A-level in psychology, especially the parts specifically related to working memory. My point is that, essentially, there is nothing really ground-breaking here – it’s cognitive psychology, but with an educational twist.

What Willingham has managed, wonderfully, is to take an area of science often shunned by teachers and make it highly accessible – something academics are notoriously bad at. There are plenty of cognitive psychologists; just very few who have managed to capture the imagination of teachers in this way. This is evident in the educational blogs I often read. For example, I rarely see anyone citing Alan Baddeley or Graham Hitch, the two giants of working memory (currently at the University of York – my own part-time stomping ground), or indeed any other influential cognitive psychologists.

Willingham isn’t the only one who has managed to cross the divide. I suspect that one of the great success stories of the next 12 months will be Make It Stick (Brown, Roediger and McDaniel), a book full of evidence and tips on how to get your students to use their memory more effectively. This is perhaps, in part, due to the rush to devour everything cognitive, while other areas of psychology are assumed to have little relevance.

Other, equally relevant, areas have perhaps been neglected. Asbury and Plomin’s G is for Genes caused a blip of controversy (mainly among those who didn’t read it) but appears to have died away quickly without making any major impact. Similarly, the work of Australian educational psychologist Andrew Martin tends to be confined to academia (and Australia), even though its possible benefits for teachers are highly significant (his Building Classroom Success is an excellent read and supported by a host of evidence). There is also some fascinating work coming out of both the Institute for Effective Education and the Psychology in Education Research Centre (both also at York – sorry!).

Cognitive psychology might just be the flavour of the month, or it might entice teachers to stay a little longer – it remains the dominant psychological paradigm, so it should be around for a while. Unfortunately, there are other guests waiting to be entertained and teachers shouldn’t neglect them.

Are we simply replacing Mind/Brain Gym and VAK with Willingham and Goldacre, or are we all part of a new movement still finding its feet? Are we creating new celebrities whose every word we hang on to, or are we looking for a new direction?

Only time will tell.

In which I ponder my life as part-time researcher.

This year I am celebrating ten years as a teacher. To be honest, I never thought I would make it this far and I have lost count of the number of times I have considered packing it all in. I recall the exasperated looks on the faces of family and friends when I informed them that I was going to do a PGCE. “Really!” they exclaimed, “Are you sure?”

I was met with somewhat the same response when, this time last year, I decided to embark on a part-time, self-funded PhD. Some, I believe, thought me insane or at the very least suffering from some sort of psychotic break – but this was something I felt compelled to do and I’m glad that I took the deep and dangerous plunge.

So, here I am, almost at the end of my first year of what could turn out to be a six year obsession. Juggling teaching with research is a tricky task and I often feel that I would be better off doing one or the other rather than both simultaneously. Nevertheless, exposure to the research side of education is beginning to inform my teaching more and more and I am more than ever aware of the role such research could play in teaching generally. I also know a little bit more about the lives of my students because of my chosen area of research. Through tentative exploratory investigations I have gleaned information about how they see themselves as learners, how they feel about their own resilience and what they believe about intelligence (and the possible relationships between these factors). What I have certainly realised is that I want to understand more about their daily struggles; about why some of my sixth formers engage deeply with their studies while others don’t appear to be bothered about anything related to school. I’m also starting to understand a little bit more about the interaction between school and student, the complex connections between school culture and student attitudes.

I’ve also started to look at research methods in a new light and am currently investigating some exciting methods for studying students’ daily lives, such as experience sampling and regular ‘snippets’ of study-related conversation. I intend to blog about some of these soon.

On a more practical level, every day is hard. As a part-time student I have to organise my time carefully. I have supervision meetings, research group seminars and other events to carefully slot into my limited time. I have managed to attend only two events this year – the Institute for Effective Education conference at the University of York (for which I was granted time off from school) and the NTEN ResearchED event, which was held on a Saturday. Both were excellent events – far better than any CPD course I’ve ever attended (and a whole lot cheaper – an added bonus seeing as I have to foot the bill myself). I will, however, miss the British Psychological Society conference this year as well as the ResearchED conference in September, but this is all part of the process whereby I need to select what is necessary and what is feasible. More stressful, perhaps, is the ever-present possibility that I will run out of money before the completion of my research. There are few funding opportunities available for part-time postgraduate students and those that are available remain highly competitive – this is partly why the drop-out rate for part-timers is so high. There is also a kind of self-imposed pressure to publish during the course of the PhD, and many would argue that publication is more important than the final thesis. I’m desperate to publish but there is a need to prioritise, and my teaching always takes precedence.

Would I advise other teachers to embark on a PhD?
It’s a difficult question, one that depends upon individual circumstances. A good supervisor is perhaps the key, and I’m lucky in that respect, but there are more pressing concerns related to time and finances. At some point I will need to approach other schools in order to obtain a more representative sample and this will eat further into my limited time. Family life is also an important factor to bear in mind, and dividing time between study, work and family can become quite a juggling act. If you think you can do it then it can be very rewarding, but remember that you don’t need to do a PhD to do research.

Teachers are ideally placed to engage in all kinds of research and there are plenty of ways to learn the basics of research design and implementation. The rewards are unlikely to be financial – a PhD just makes you an over-qualified teacher – so it’s worth thinking about why you want to do it in the first place. You might be looking for a career change but for most, I suspect, it’s more to do with personal development. The financial commitment is perhaps the biggest concern (most PhDs will set you back around £2,000 a year, and this could be for up to six years) and there is still that nagging feeling that self-funded PhDs are somehow less worthy.

A PhD is challenging at the best of times, but for teachers (and anyone who has to balance work with study) I suspect it’s more so. The benefits can still outweigh the costs, however, both in terms of personal fulfilment and the opportunity to add to the research literature.

True and false might not be as simple as you think.

[Note: This post was partly inspired by a talk given at the NTEN ResearchED event held at Huntington School, York on 3rd May, 2014. Also note that I am not a statistician – which might appear obvious!]

Many of us in the educational community are at last coming around to the realisation that research does have something to offer. We read it all the time on social media and have all witnessed discussions that seem to go on for days about which research is best or how to be critical about things we’ve read. There is an assumption that if research supports our particular view then we have permission to take the high ground and shoot down all those who refuse to accept the ‘evidence’. Of course, we need evidence (that’s the whole point) but we also need to be critical of it.

A great deal of educational research is positivist; like psychology, it often assumes that outcomes can be measured using scientific principles, and anyone who is familiar with academic papers in psychology will have noticed that there are an awful lot of numbers and bizarre equations involved. The scientific method is one of hypothesis testing – to paraphrase Richard Feynman, the first thing you do is make a guess and then you test the guess by conducting an experiment. If your experiment doesn’t support your guess then your guess is wrong.

In psychology (and many social sciences) what we are looking for is statistical significance, the nuts and bolts of which are dependent upon statistical tests. The main criterion we use in order to establish significance is something called a p value (or probability value). Psychologists often set the significance threshold at 5% and represent this using the statement p≤0.05.

What does p≤0.05 actually mean?

This is actually quite straightforward, despite the looks on my students’ faces when presented with it: “Maths! In psychology… nooooo.”

All it means is that, if the independent variable really had no effect and chance alone were at work, we would expect results as extreme as these no more than 5% of the time. The threshold itself is fairly arbitrary, but there is a general consensus that 0.05 is a good place to start. We could set it higher, but that would increase the risk of accepting our hypothesis on the basis of a false positive (a Type 1 error); we could set it lower, but then we face a greater chance of rejecting our hypothesis and accepting the null hypothesis when, in fact, there was a real difference (a Type 2 error). So, sometimes that which is true is actually false and that which is false is actually true (cue Robin Thicke).
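
If that still feels abstract, a quick simulation makes the point. The toy R snippet below (my own illustration, not taken from any of the studies discussed) repeatedly compares two groups drawn from exactly the same population and counts how often the t-test comes out ‘significant’ at the 5% level – in other words, it estimates the Type 1 error rate:

    # Toy simulation: no real difference exists between the two groups,
    # yet roughly 5% of experiments still reach p <= 0.05 by chance alone.
    set.seed(42)
    false_positives <- replicate(10000, {
      a <- rnorm(30)   # group A
      b <- rnorm(30)   # group B, drawn from the same population
      t.test(a, b)$p.value <= 0.05
    })
    mean(false_positives)   # comes out close to 0.05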

P values are a hot topic at the moment, with many suggesting that effect size might be a better measure to use (there are problems here as well). Nevertheless, while p values remain so influential, we need to be mindful that errors do occur. More worrying, perhaps (if less common), is the phenomenon of p-hacking. P-hacking involves manipulating the data until the p value threshold is reached – for example, removing the data points that prevent the result from being significant – in order to create a positive finding. So a researcher might remove all the outliers, sometimes under the ruse that there was something ‘wrong’ with these results. P-hacking (and other such dubious practices) is often uncovered through the inability to replicate the results – so be wary of single studies (especially if they are a few years old) with no recent studies to support them.
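
To see why this matters, here is a rough, purely illustrative R simulation of p-hacking by outlier removal (again my own toy example, not a reconstruction of any real study): if the first test isn’t significant, the most extreme observation is dropped and the test is run again, up to five times. Even though there is no real effect, the proportion of ‘significant’ results climbs well above the nominal 5%:

    # Toy p-hacking simulation: two groups from the same population, with
    # 'outliers' stripped one at a time until the t-test looks significant.
    set.seed(42)
    hack_once <- function(n = 30, max_drops = 5) {
      a <- rnorm(n)
      b <- rnorm(n)
      for (i in 0:max_drops) {
        if (t.test(a, b)$p.value <= 0.05) return(TRUE)  # declare 'significance'
        grand <- mean(c(a, b))                          # find the most extreme point...
        if (max(abs(a - grand)) > max(abs(b - grand))) {
          a <- a[-which.max(abs(a - grand))]            # ...and drop it from group A
        } else {
          b <- b[-which.max(abs(b - grand))]            # ...or from group B
        }
      }
      FALSE
    }
    mean(replicate(5000, hack_once()))   # noticeably higher than 0.05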

So, to claim that true and false (or right and wrong) are absolute in research is perhaps to misunderstand the workings of the scientific method as it applies to real people. Other factors such as bias, demand characteristics and individual differences can blur the lines even further. This is perhaps the reason for the oft-used line ‘research suggests’: there is always the probability (however small) that the results aren’t as statistically significant as we thought.