My NPJ Science of Learning Interview – ‘Educational implications of attention and distraction in teenagers’

The Nature partner journal ‘Science of Learning’ website is another useful addition to the increasing number of resources encouraging a more scientific approach to education and learning.

It’s also just gone up greatly in my estimation (!), as they’ve published an interview with me about my PhD work. Read it here. If you’re a teacher or researcher and any of this sounds interesting to you, please feel free to get in contact.

 


Teach like a chimp! The validity-transportability paradox in teaching

A paradox of attempting to apply ideas from research to the real world is that the simplified environments of scientific experiments allow for the formation of extremely complex explanations, whilst the application of those ideas to the more complicated real world often requires that they be somewhat simplified. The ideal conditions required for creating validity, and those required for creating transportability (the easy transmission of an idea into the real world, to borrow a phrase from Jack Schneider’s 2014 book ‘From the Ivory Tower to the Schoolhouse’), are almost completely opposed. This clearly creates a dilemma: how much erosion of validity do we accept in order to allow a theory to become transportable?

‘The Chimp Paradox’… Paradox

Until recently I was something of a validity purist on this matter. I remember a seminar I attended last year with Vincent Walsh, a neuroscientist at UCL who studies sporting performance and decision-making under pressure and works with various GB sports teams. At one point, mention was made of Steve Peters, the psychiatrist who has also made a name working with some of the biggest names in UK sport such as Chris Hoy, Victoria Pendleton, Steven Gerrard and Ronnie O’Sullivan, as well as running a lucrative consultancy sideline with hundreds of companies. Peters’ work, detailed in his book ‘The Chimp Paradox’, involves dividing the mind into two competing parts: a primitive “Chimp” brain (the limbic area), which deals with emotions, and a more rational “Human” brain (the frontal cortex). In Peters’ formulation the chimp brain works 5 times faster than the human brain. In some of his work with sportspeople a third region is introduced: the “Computer” brain, which is even faster (20 times as fast as the human and 4 times faster than the chimp!).

Peters’ clients are taught to recognise their mental states and to govern nerves, pressures and insecurities according to these three brain labels. Now, from a purely neuroscientific perspective, Peters’ ideas are at best a severe oversimplification, and at worst outright inaccurate. I won’t spend time here covering the reasons for this (though this page is a decent introduction to the difficulties of dividing the brain into regions according to their ‘development’). However (and this was the point made to me when I voiced these concerns), when you have Chris Hoy crediting Peters with his gold medals, you can’t really argue with his results. Sometimes an idea can be a ‘useful simplification’ (or even a ‘useful fiction’, depending on how stringent your criteria are), containing enough accurate information to help people whilst remaining widely transportable. The success of ‘The Chimp Paradox’, then, comes precisely because the complicated science behind Peters’ claims has been simplified enough to have broad accessibility and appeal to people’s everyday lives – ‘The Chimp Paradox’ Paradox, if you will.

Validity vs transportability in teaching

Whilst I’m sure this observation is far from new, it has struck an interesting chord with me recently in thinking about applications of research to teaching. I was reminded of it last week when I inevitably stumbled across yet another ‘Edu-Twitter’ debate about the chosen methods of a particular North-West London school. I know I shouldn’t, but, like a fire in a carpet warehouse, it’s hard not to slow down and watch the carnage unfold. This particular debate centred on the school’s use of certain principles of cognitive psychology (notably interleaving and spacing) as a justification for some of their methods. Some comments on the thread accused the school of oversimplifying complex theories (and implied that this therefore made them worthless). I might previously have agreed with this position, but as we have seen it is clear that some simplified scientific ideas, properly packaged, can be enormously useful to some people. If we are to embed evidence-based practice in school, then the first step is surely to embrace it in any form initially, and work out the finer details from there.

The difficulty is that there are no clear indicators as to where to draw the line between validity and transportability. Indeed, the ‘Goldilocks zone’ may be different for each idea anyway, depending, for example, on the transportability of the original idea and the extent to which it can be simplified whilst still retaining a coherent message. The downside of this process is that more complex theories (which may well resist simplification for very good reasons) will lack transportability, limiting the extent to which they can be widely adopted. As an example from another Twitter feed last week:
[Screenshot of a tweet about Cognitive Load Theory, 30 January 2017]

Whilst the post was widely appreciated in certain circles, I was struck by how Cognitive Load Theory, an idea that is so central to much educational scholarship (and which is potentially an extremely helpful concept for educators), has, in my experience at least, never really caught on in classrooms. I would argue that this might be because its rather nuanced division of cognitive load into three different types is not the sort of thing that is easily transported in 140 characters or casual break-time conversation. The validity/transportability balance for CLT is an interesting story in itself; this essay from Michael Pershan charts Sweller’s attempts to balance the complexity and validity of his theory with its transportability. It is hard to see how CLT could be further simplified without losing its essence, but equally, in its current form, I’m not convinced it’s that transportable.

Teach Like a Chimp

With apologies to Doug Lemov…

So what are the ‘useful simplifications’ in teaching? Which academic ideas can be easily simplified into transportable forms without losing their validity? I might suggest the following:

– Applied memory strategies (e.g. spacing, interleaving, testing etc)
– Elaboration (linking to previous knowledge)
– Encouraging metacognition
I would previously have suggested feedback, but following the EEF review into marking it’s clear that this issue is rather more complicated (or less well researched) than we might imagine.

What other ideas would people suggest?


From the Enlightenment to neurobollocks: The ‘myth of progress’ in teaching

In my last post I looked at recent research which illustrated that brain-training is not as effective as the adverts might make out, as well as reminding us that many of the benefits claimed by brain-training programs are available using existing, often time-honoured and rather mundane methods. This led me to think about why this bias towards the novel exists, to the point where we may systematically ignore solutions that have been effective for years.

The concept of universal education can be traced back to the Enlightenment; indeed, it is one of the most enduring products of the period. In a “post-factual” age when, in the words of Stephen Fry,

the achievements of the enlightenment are questioned, ridiculed, misunderstood and traduced by those who would reverse the progress of mankind

it is notable that the notion of universal education has never been seriously questioned. It was the product of two major Enlightenment advances, one scientific and one philosophical. Firstly, scientific breakthroughs from the likes of Newton, Kepler and Galileo led to an optimistic outlook regarding humans’ ability to comprehend and shape the world around them. Subsequently, John Locke and other figures from the emerging philosophical school of Empiricism began to argue that knowledge could only be gained through the senses; through our interactions with the world and by our subsequent reflections on the impressions that these interactions created. Empiricism led naturally on to ideas of universal education; since we all have pretty similar faculties for the sensation of the world, there seemed no obvious reason why all people should not be able to benefit from educational experiences which had, to that point, only been available to a privileged few. Presumably, then, the more people that were educated, the faster still would be the progress and development of the species. The confluence of these two factors – optimism about our scientific capabilities and an empiricist notion of education for all – created a powerful narrative which persists today: humans are capable of greatness and education is the tool for that greatness to be realised as widely and as effectively as possible. Education as the engine of human progress. So far so good, and I agree…

But this optimism can also have a corollary. It can create a general belief that, the occasional blip notwithstanding, we are on something of an inexorable march of ‘progress’. Indeed the notion of ‘progress’ was an important one for many Enlightenment thinkers, who drew a sharp distinction between their own outlook and that of more ancient voices such as Plato and Aristotle, who saw society as a cycle, with periods of progress and development unavoidably followed by decline and disaster. Enlightenment thinkers, especially those armed with the emerging theories of evolution in the late 18th century and beyond, often presented ‘progress’ as an essential part of human nature, with our increasingly successful adaptation to our surroundings reduced to a simple (and inevitable) biological necessity. This narrative of progress is powerful and seductive, but it is also a potentially dangerous one. Theories regarding the ‘progression’ of the species have been at the heart of some of the worst of subsequent human thought, justifying eugenics and genocide. In a less serious form, however, a blinkered faith in human ‘progress’ can lead to either a casual over-optimism regarding our current actions, or a tendency to embrace the ‘new’ and to reject the status quo. In both cases, this novelty bias can encourage us to change systems without due scrutiny being applied to their newer replacements.

Teaching, as we have said, is an Enlightenment profession. The nature of teaching means that many of the goals of the Enlightenment are also implicit assumptions of the profession: the belief in improvement through information, the conviction that the widest benefits for the world will come from the widest dissemination of knowledge, a passion for the democratisation of learning and so on. It is hard to imagine anyone entering the profession without holding these basic assumptions. In addition, it is not a profession where we can ever conceivably judge that we have done enough, or produced the ‘best possible’ results, so there is always a desire for progress: better exam results, value-added scores, enrolment figures, university entry rates etc etc – something could always be improved. Yet perhaps the same Enlightenment-era zeal which drives us can become something of a double-edged sword, leaving us vulnerable to falling for the ‘myth of progress’. I would argue that, just as it embodies many of the virtues of the Enlightenment, teaching is also prone to demonstrate the occasionally casual over-optimism of the time, and to embrace the novel over the time-honoured too unquestioningly. ‘Change for change’s sake’ is a frequent lament in the classroom in response to yet another management initiative, but teachers also need to critique their own classroom practice in the same spirit. How often do we jump to incorporate trendy new ideas or the latest cultural craze into our lessons (Pokemon Go, Minecraft, iPads etc etc), without really assessing how we are expecting it to make for a more effective learning experience? Another example might be the over-eager and premature adoption of new scientific ideas (gleefully exploited by unscrupulous edu-quacks), which has led to widespread misinformation about the brain and learning amongst teachers (see e.g. here), and to bogus interventions like Brain Gym, the Dore program or inappropriate use of the ‘growth mindset’. We also have explicitly named ‘Progressive’ education movements, which may embody many modern values, some of which have been translated into educational programs without due assessment of the relevance or efficacy of this translation. Changes in conceptions of individual rights and freedoms have metamorphosed into doctrines of student choice or ‘personalised learning’, which in turn have engendered such ineffective educational enterprises as ‘free schools’ or learning styles.

I should be clear here that I don’t think that these problems are unique to teaching as a profession; I am sure that a good deal of this novelty bias is a natural human tendency shared by us all. But I do think that the aims of the profession, along with the inherent difficulty in ever defining or measuring ‘success’, make it especially vulnerable to a headlong search for the next new magic idea. Sometimes, however, what we’ve got already may actually work pretty effectively. Let’s try to remember that for when the next bandwagon rolls into town.

References:

Dekker, S., Lee, N. C., Howard-Jones, P., & Jolles, J. (2012). Neuromyths in Education: Prevalence and Predictors of Misconceptions among Teachers. Frontiers in Psychology, 3, 429. http://doi.org/10.3389/fpsyg.2012.00429

Howard-Jones, P. (2014). Neuroscience and education: myths and messages. Nature Reviews Neuroscience, (October). http://doi.org/10.1038/nrn3817


How effective do teachers find academic research? How easy is it to use? Survey results 2

In the last post I concluded, based on a survey of 223 teachers, that teachers’ positive attitudes towards reading academic research on teaching were not always matched by their behaviour in real life. The remainder of the survey contained two sets of questions which may help us to explain why this disparity exists. One set asked how ‘easy’ teachers had found it to build academic research they had read into their practice. The other asked how ‘effective’ they had found it when they did.

Academic research is not always easily accessible to even the interested outsider. Evidence also suggests that the approaches and engagement of non-specialists with such material are highly variable. In addition, research is often presented in fairly abstract terms, or based on experimental designs which have few obvious connections to a real classroom. It may be, therefore, that despite generally positive attitudes towards the use of research, teachers actually don’t find it very easy to apply what they read to life at the coal face.

A second possibility is that teachers do not find it difficult to apply the findings of research, but that they simply don’t find them very effective. There are many plausible reasons why this might be the case. The research could have come from a different educational setting (a rural, single-sex grammar school can be a very different place to a mixed, urban comprehensive), used slightly different resources, or staff may have had different training. Whilst no finding is a silver bullet and we should be patient when evaluating any new idea or technique, education is a multi-factorial game with a huge number of things that teachers could potentially be working on to improve results. They will, and should, only have patience with any new ideas for so long if they don’t see results (unless there is very strong evidence that results will emerge slowly over time).

I therefore asked teachers how easy they had found it to build research findings into their practice, based on what they had read. A separate question asked them how effective they had found this when they had done so. As with the previous questions, respondents were separately asked about reading both research papers themselves, and research summaries/digests.

EASY to build in

My initial hypothesis when considering these questions was that teachers would find summaries of academic research easier to build into their practice than the research papers themselves (though not necessarily more effective).

Bearing in mind the caveats of the previous post about the very unequal group sizes here, I am again struck by the relatively consistent ratings across sectors. Collapsing the different types of school together and comparing the ratings for research papers with those for summaries, we find a significant difference for ease of use (Research: M = 4.98, SD = 1.76; Summaries: M = 5.60, SD = 1.75; t = 4.251, p < 0.0001), suggesting that, for the majority of respondents, it has been easier to apply summaries of research findings to their practice than it has when reading the research papers themselves. This is not necessarily surprising; the summaries may well have been written with teachers in mind and with a specific focus on translation into the classroom, whereas the research papers may not. Still, it is an important point in terms of suggesting future directions for the evidence-based teaching movement.
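For anyone wanting to run this kind of comparison on their own data, here is a minimal sketch of the analysis described above. The file name, column names and the use of a paired-samples test are my assumptions for illustration (each respondent rated both sources), not a copy of the actual analysis:

```python
# Hypothetical sketch of the paired comparison described above.
# The CSV file and column names are assumptions for illustration only.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # one row per respondent

research = df["ease_research_papers"]      # 1-10 rating: ease of applying research papers
summaries = df["ease_research_summaries"]  # 1-10 rating: ease of applying summaries

# Each respondent rated both sources, so a paired-samples t-test is used here
t_stat, p_value = stats.ttest_rel(summaries, research)

print(f"Research papers: M = {research.mean():.2f}, SD = {research.std():.2f}")
print(f"Summaries:       M = {summaries.mean():.2f}, SD = {summaries.std():.2f}")
print(f"Paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")
```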

EFFECTIVE to build in

Comparing the effectiveness of the two sources of information, there is a marginally significant increase in the ratings for summaries, compared to research papers, but this is not as strong as the previous trend (Research: M = 5.33, SD = 1.89; Summaries: M = 5.60, SD = 1.75; t = 1.974, p = 0.05). Further work would be needed to be more confident that teachers actually did find research summaries more effective (and of course judgements of what is meant by the term ‘effective’ may well vary widely).

Discussion of findings

For me, perhaps the most important result of the survey was the response to the question “Would you be in favour of receiving digests or summaries of educational research, containing evidence-based suggestions for teachers to try?” This received very positive responses across all sectors:

The wording of the question here to some extent presupposed that this was something that the respondent was not receiving already, which may of course not be the case. I assumed, however, that respondents who were already receiving such material would probably give a low score. The fact that the mean scores were so high, and that so many people provided high ratings (two thirds of the sample answering 8 or above), is especially noteworthy.

Demanding a better supply

My main conclusion from this exercise is that there is a real demand amongst teachers for empirical information on their craft, but a shortage of material providing this. The demand seems to be lacking a supply. There is clearly an openness to engage with empirically tested ideas about teaching practice, and an appetite for these to be communicated to teachers. However, for whatever reason, they are not getting through. How can we make sure that fantastic resources such as the EEF are not preaching to the choir, but out there finding new converts? Is this something that could be publicised by organisations with a wide reach such as the TES? An EEF newsletter to all schools (the EBTN already do something similar, but on a smaller scale)? One thing that would guarantee it instant traction would be an OFSTED recommendation. This raises its own questions about whether top-down mandates are the best way to disseminate this kind of information, rather than hoping/assuming that popularity will slowly bubble up from below. It would be interesting to hear the thoughts of other teachers and researchers on these questions (or any of the other aspects of these last two posts).

Further research?

In retrospect, there are a number of questions that I didn’t ask in the survey, which I now wish I had.

  • A question about the most common reasons why teachers might not read academic research even though they might want to.
  • A question such as “How aware are you of the research base behind the strategies you use in the classroom?” or perhaps “How common is it for you to consider a teaching strategy, at least in part, because of research findings?”

Perhaps a lot of the strategies in use in classrooms are evidence-based already. It would have been useful to have a baseline figure for teachers’ awareness of the existing evidence base behind their preferred methods. These are issues that I would like to look at further at some point.

References:

Zeuli, J. S. (1994). How do teachers understand research when they read it? Teaching and Teacher Education, 10(1), 39–55. http://dx.doi.org/10.1016/0742-051X(94)90039-6


What is the relationship between teachers and educational research? Survey results 1

In late October last year a survey by Nick Rose (@Nick_J_Rose) appeared on my Twitter feed, looking at the divisions in educational philosophy and pedagogical opinions that exist amongst teachers. When the results emerged soon after, they revealed a fascinatingly fractured profession, divided along lines of gender, rank and (especially) school type. Dichotomies between ‘progressive’ and ‘traditional’ ideas (false or otherwise) were alive and kicking, especially between primary and secondary schools, raising some extremely important questions about how we manage the transition between these two sectors with sometimes fundamentally opposed approaches to education. If you haven’t seen them, Nick’s four blog posts picking his results apart in forensic detail are highly recommended and can be read here.

Suitably inspired, I decided to do something very similar for an issue that I have been wondering about for a couple of years: to investigate the relationship between teachers and educational research. It is a platitude, but worth saying anyway, that teachers are always alert for ways to improve their practice and provision. The prevalence of ‘neuromyths’ and other pseudoscientific beliefs amongst teachers is often rightly cited as a worrying development, but it does at least illustrate the strong appetite for ‘empirical’ information which could help them to do their job more successfully. But where teachers actually turn for this empirical information is often not clear; the selection and appraisal of academic research, for example, is entirely absent from most teacher training courses. My worry is that this creates a vacuum of demand without supply, and that into this vacuum the opportunistic and lightweight is sucked in before the rigorous and heavy.

My rough working hypotheses at the outset (based on my experience) were:

  1. Teachers would be generally in favour of using academic research to improve practice.
  2. Despite this, they would actually do so less than might be expected given (1).
  3. Teachers would find summaries of academic research easier to build into their practice than the research papers themselves.

The survey went ‘live’ on 11th November, with the last completion on 4th January. The sampling involved appeals on social networks but also (mainly due to my paltry Twitter following) personal emails to contacts in schools. Whether this more direct approach led to a more representative sample of teachers than a Twitter-only survey is an open question. I know that a good range of schools were contacted and distributed the survey on their email lists (or otherwise semi-publicised it, in line with hilariously complicated ‘information management’ policies), but not how many people from each centre completed it. Demographic data was not collected in the survey in the interests of keeping the length down. Regardless of the type of school, it is probably a very unrepresentative teacher who takes time out of their day to fill in a survey, so I can’t claim that the results represent the teaching population at large.

A total of 223 teachers completed the survey, for whose time I am hugely grateful. Perhaps unsurprisingly, given that I was a secondary school teacher, so were the majority of my contacts and so, subsequently, were most of the respondents (Primary = 26, Secondary = 191, Sixth Form/FE college = 2, University/HE = 4).

The limited numbers from sectors other than secondary make it difficult to draw any firm conclusions regarding differences between the groups. This should be borne in mind for some of the graphs below.

Are teachers interested in reading research?

Respondents were asked “Do you think that teachers should have opportunities to read educational research?” and responded using a ten point rating scale where 10 = “yes, regularly” and 1 = “never”.

[Chart: ‘Teachers reading educational research’ – mean ratings by school sector]

The mean ratings for all three sectors were pretty consistent, leading me to collapse the groups when looking at the overall distribution of responses:

[Histogram: distribution of responses to the ‘opportunities to read educational research’ question, all sectors combined]

I was expecting that the overall response to this question would be positive, but perhaps with some polarisation of responses towards the lower end of the scale.

The results here exceeded my expectations, with responses clustered heavily towards the positive end of the scale. Clearly it seems that the majority of respondents feel very positively towards the idea that teachers should have opportunities to read educational research. From this we can draw the perhaps obvious, but useful, initial conclusion that teachers are not, by and large, anti-research.

The wording of this question was carefully chosen. I was worried that simply asking “Do you think that teachers should read educational research?” would lead to responses being confounded by worries about workload and other daily concerns. Asking about “opportunities” to read research allowed the distinction to be drawn between teachers’ theoretical opinions, and their day-to-day practice (which was assessed in the next question).

How often do teachers read educational research?

Q 2. asked “How often do you tend to read a piece of educational research?” (with subsequent clarification making it clear that this was referring to original research articles, rather than summaries or digests) and Q 4. “How often do you tend to read summaries or digests of educational research?” The summaries question was included in recognition of the fact that there are ways to engage with research results other than having to slog through each original article yourself. Websites such as the EEF or EBTN provide accessible research summaries, as well as tips on how best to incorporate them into lessons. I was interested to see if these options were popular alternatives to the ‘real thing’.

[Chart: frequency of reading research papers and research summaries, by school sector]

Again it’s worth pointing out here that the small numbers in the ‘6th form/FE/HE’ group especially mean that we should not draw any major conclusions from differences between the groups. It would not surprise me if the ‘6th form/FE/HE’ group did score more highly in a larger-scale survey here, as their daily practice might actually bring them into contact with research articles more frequently (my own reading of Educational Psychology articles only really started when I had to teach it!). Otherwise, it is interesting to note the increase in the frequency of reading summaries compared to reading the research papers themselves. Collapsing the school types into one group and just comparing responses to the ‘research articles’ question with the ‘summaries’ question, the difference between the two was significant (Research: M = 2.78, SD = 1.25; Summaries: M = 3.45, SD = 1.31; t = 5.552, p < 0.0001), suggesting that teachers read research summaries more often than the papers themselves. The histograms below (with all school types amalgamated) also help to illustrate this difference.

Although the distributions are relatively similar, it seems much more common for teachers to have regular (weekly) contact with research summaries than with the papers themselves. This seems to be where the difference between the groups revealed by the t-test is primarily found. Despite this difference, for both histograms fewer than half the respondents fell into the top two categories of ‘weekly’ or ‘monthly’, meaning that the majority of the sample are accessing research every few months or less.
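As a rough illustration of the tabulation behind that ‘top two categories’ figure, here is a hedged sketch; the category labels, file and column names are my assumptions rather than the survey’s exact wording:

```python
# Hypothetical sketch of tabulating the reading-frequency responses.
# Category labels, file and column names are assumptions for illustration only.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

categories = ["Never", "Yearly or less", "Every few months", "Monthly", "Weekly"]

for column in ["read_research_papers", "read_research_summaries"]:
    counts = df[column].value_counts().reindex(categories, fill_value=0)
    top_two = counts[["Monthly", "Weekly"]].sum()
    print(f"{column}: {top_two / counts.sum():.0%} read monthly or more often")
```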

One final piece of analysis was to see whether it is the same people who give high ratings in both categories:

[Scatter plot: frequency of reading research papers vs frequency of reading summaries] Perhaps some people read lots of both research papers and research summaries, and other people don’t read anything.

Interestingly, although there is a positive correlation between the two questions (correlation coefficient r = .488, r² = .2377), it is not extremely strong, so there is clearly some variation here within the sample, with some people regularly reading one medium but not the other. From this data it’s not clear whether this is due to availability or preference.
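For completeness, a minimal sketch of that correlation check, again with assumed column names and an assumed numeric coding of the frequency categories:

```python
# Hypothetical sketch of the papers-vs-summaries correlation reported above.
# Column names and the numeric coding (e.g. 1 = never ... 5 = weekly) are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")

papers = df["read_research_papers_numeric"]
summaries = df["read_research_summaries_numeric"]

r, p_value = stats.pearsonr(papers, summaries)
print(f"Pearson r = {r:.3f}, r^2 = {r**2:.4f}, p = {p_value:.4f}")
```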

 

So, are teachers interested in reading research?

Yes… in theory!

It seems fairly clear from the data above that teachers are interested in academic research, and alive to the potential for using it in their own practice. However, this positivity towards reading research in principle is not completely reflected in practice, with over half of the sample reading a research paper or summary only every few months or less. These findings do seem to support my initial hypotheses 1 and 2.

So what are the barriers which are preventing good intentions becoming reality? The most obvious one is probably time. It is a very unusual school (though I have heard of them) which provides allocated time for its staff to engage with research on teaching and education. More commonly, it is a ‘desirable extra’ for which no actual provision is made. This is a ridiculous situation, making engagement with empirical data about how to do the job better a somewhat niche and onerous add-on.

However, there are other possibilities which might help to explain the gap between intention and practice, besides school timetabling and priorities. The next post will look at two possible explanations for why this might be, focusing on the content of the academic literature itself.

 

References

Dekker, S., Lee, N. C., Howard-Jones, P., & Jolles, J. (2012). Neuromyths in education: Prevalence and predictors of misconceptions among teachers. Frontiers in Psychology, 3, 429. http://doi.org/10.3389/fpsyg.2012.00429

Teachers and research. Whose responsibility is it?

In 1801, at the age of 28, Thomas Young was appointed a professor of natural philosophy at the Royal Institution in London. Over the next two years, he delivered 91 lectures, in which he covered… everything. Not just everything in physics, his particular area of research for most of his career, but pretty much the entire span of scientific knowledge at the time. Andrew Robinson’s book ‘The Last Man Who Knew Everything’ not only tells the story of a fascinating life, but also invites us to consider the philosophical consequences of the world that has developed since this seminal moment: a world where the only sensible response to the ever-rising tides of information seems to be greater and greater specialisation. The polymath is dead; long live the compressed genius and the authority in their field.

The last teacher who knew everything, of course, came much later; in fact I’ve worked with a number of them who are still going strong. Increasingly, however, another expectation is creeping onto the already polymathic to-do list of the modern teacher. Led by the EEF and organisations such as ResearchED, ‘research literacy’ has become a somewhat unlikely edu buzz-concept. In doing so, these organisations have given me a nagging dilemma which remains unresolved despite three years of having my limited cognitive resources thrown at it: ‘just what should the average teacher’s relationship be with educational research?’

Here, as I see them, are the main points in support of both sides of this argument:

Position A – Teachers should be engaging personally with research

  1. Professional pride. Firstly, it could be argued that being acquainted with the empirically tested findings about the best way to do your job is an absolutely essential part of being a self-respecting professional. It would be mandatory in many jobs, so it’s ridiculous that in teaching it is not only optional but a bit niche. This seems to be the position of some school leaders (see this article by @headguruteacher, for example). I have a lot of sympathy for this view, but see the ‘workload’ and ‘research literacy vs research authority’ sections below for counter-arguments.
  2. If not teachers, then who? If teachers are not going to be responsible for the empirical appraisal of the material and techniques that they use in lessons, then this creates a vacancy for pedlars of quack cures and silver bullets. Even if this gap is filled by well-meaning organisations disseminating evidence-based suggestions to teachers such as the EEF or the EBTN, we still have a ‘who guards the guards?’ situation.
  3. As teachers we need to practice what we preach. We all want to develop ‘critical thinkers’ (whatever the phrase really means). What better way to develop our own critical thinking skills than to go back and check the research base on whatever sparkly new idea was the focus of CPD this term? We will have better lessons AND be better able to model these in-demand skills to students.

Position B – Teachers should not have to read educational research 

  1. Workload. Teacher workload is already at a level that is attracting governmental intervention. Teachers are stressed, overworked and beleaguered by the multifarious demands of the profession as it is. This is not to say that I think that the administrative and box-ticking load of teachers is more important than reading research – I don’t – but it’s a fact of life that many teachers are working in systems where they already have an unmanageable number of competing demands. Given that most of us are not Nicky Morgan or Sir Michael Wilshaw, we can’t reduce these any time soon… so should we really be adding to them? Sometimes our attempts to speed things up, such as providing how-to guides (e.g. this one for literacy teaching), merely add another sizable bundle of straw to the camel’s groaning back.
  2. Jargon and impenetrability. Whilst researching possible PhD topics last year I came across the following title of an Ed Psych paper: ‘A Psychoanalytic Phenomenological Approach to the analysis of Emotionally Intelligent Leadership in Higher Education: A Review of Current Practices’. This is not to cast any aspersions as to the quality of the work contained within (I’ll keep those private), merely to point out that academic research can appear fearsomely complex on first viewing. Once inside, things aren’t always much better. Steven Pinker’s ‘Why Academics Stink at Writing’ suggests that sometimes “academics have no choice but to write badly because the gatekeepers of journals and university presses insist on ponderous language as proof of one’s seriousness.” Too often academic communication seems almost deliberately designed to be accessible to as few people as possible, rather than as many. Teachers have better things to do than to wade through this.
  3. Academics have the responsibility to disseminate their work, without teachers having to root around for it. Something that has really struck me in my two months back in academia so far is the number of people I have talked to who seem to treat questions about the useful applications of their work as vaguely grubby and a bit low-brow. Of course, basic research is necessarily a long way removed from the real world and quite right too; variables must be controlled and relationships established free from the messy causality of the real world. At the same time, however, it is surely right that all research is done with some sort of an eye to how it could be useful beyond the lab. I had just assumed this would be obvious, so it has been a real shock to find that it does not always seem to be the case. Even when research does have clear real world implications, this insularity means that the pressure and expectation to communicate results beyond the academic bubble is not felt strongly enough. This is a travesty, but not one for teachers to solve.
  4. Research literacy or research authority? Cutting through the jargon and simply reading research is not actually enough. The hardest, and slowest, skill to develop in research engagement is the ability to critically appraise what is being read, and to come to informed judgements about its worth. If we want teachers to read research, but neglect to give them time and training to evaluate what they are reading successfully, then we are simply replacing one appeal to authority with another. I have been trying to read and reflect on research for the last ten years and find no shame in saying that it is still something I find very difficult. And I actually want to do it. Is there any value in forcing this on others who are not so inclined? Twenty years ago John Zeuli published this interesting investigation into how teachers read research, finding that “teachers differed substantively in terms of their willingness and/or ability to read and understand research”. Whilst I am at pains here not to be seen as patronising members of a profession I respect enormously, I wonder if a mandatory approach to research is the right use of many teachers’ talents.

I know the ideal answer here. Systemic reforms of teacher training, school priorities, OFSTED criteria and academic communication would probably do as a starting point. In the absence of these, and from the practical standpoint of a classroom teacher who needs answers now, I remain unsure as to whether the regular reading of research really would be an exercise worth the investment in time and effort. To use a helpful distinction from a recent blog by @DrGaryJones, teachers reading research may have merit, but is it worth it? I’d love to hear what other teachers and researchers think about this.

Inspired by Nick Rose’s recent fascinating investigation into the pedagogical opinions of teachers, I’ve prepared a quick survey to gauge teachers’ views on their relationship with research. Please fill it in and spread the word: https://docs.google.com/forms/d/1gN5cSg_FS9JR5KxtyjKZPhr-f9q63blJdwqBAdzmDE4/viewform. If enough people fill it in, I’ll publish the results on here soon.

References:

Zeuli, J. S. (1994). How do teachers understand research when they read it? Teaching and Teacher Education, 10(1), 39–55. http://dx.doi.org/10.1016/0742-051X(94)90039-6

Duke, N.K. & Martin, N.M. (2011). 10 Things Every Literacy Educator Should Know About Research. The Reading Teacher: A Journal of Research-Based Classroom Practice, 65(1), 9–22. doi: 10.1598/RT.65.1.2

Nick Rose’s analysis of his recent survey into pedagogical opinions – https://evidenceintopractice.wordpress.com/2015/10/27/teacher-survey-results-and-analysis-part-1-2/

Evidence-based Teachers Network http://www.ebtn.org.uk/
Education Endowment Foundation – https://educationendowmentfoundation.org.uk/
ResearchED – http://www.workingoutwhatworks.com/


Can Neuroscience inform Education? Maybe not. Is ‘Educational Neuroscience’ a good thing? Definitely.

Teachers deal with brains. They may not like to think about their job in such explicitly reductionist terms, but at the end of the day a teacher’s job is to leave a student with a better brain than they started with. They’re like brain surgeons, but they (usually) get home without blood-spattered shoes. Of course, we can argue all day about what exactly is meant by a ‘better brain’ – therein lies the eternal debate about the point of education – but whatever sort of school you’re in, it’s the same game with a different hat on.

It seems understandable, then, that there has been considerable excitement over the past ten years or so about the possibility that the ever-expanding field of neuroscience may be on the cusp of orchestrating an educational revolution. Neuroscientists find out how the brain works, teachers do things that accommodate this, students learn more efficiently and everyone is happy. It seems so alluringly simple and, cards on the table here as a teacher-turned-neuroscience-researcher, I really really REALLY want this to be the case. I want it to be the case so much I quit my teaching job and went back to university to try and make it the case! It’s just that there may be some crucial, immovable factors which serve to prevent any useful cross-pollination of the disciplines, regardless of the enthusiasm and biased optimism of those within them.

I want to make two basic points in this blog post.

  1. Why there is scepticism about neuroscience’s ability to inform educational practice
  2. Why I don’t think this matters, and why the educational neuroscience movement is still a positive development for education.

Educational neuroscience. A bridge too far?

This topic has already been covered in detail by much more eminent and eloquent voices than mine (I would recommend Daniel Willingham and Dorothy Bishop as particular starting points), so I will only cover these objections in brief.

Some of the levels of explanation separating neuroscience and education

The first main objection is that there are too many levels of explanation between neuroscience and education for it to be useful to teachers (Daniel Willingham would call this the ‘vertical problem’). Neuroscience primarily produces results about brains but, for a teacher, knowing which particular brain area is active is not at all useful to their practice. What are they supposed to do with this insight into their students’ neurological performance? What is needed is a ‘bridge’ between neuroscience and education and, so the argument goes, an effective one already exists – namely cognitive psychology: the scientific study of thoughts and behaviour.

John Bruer has argued persuasively that not only is a direct bridge between neuroscience and education not needed, it could potentially be harmful. Neuroscience might help to tell us why a particular teaching style is effective, but only cognitive psychology (and educational psychology) can find out that something is effective. Teachers don’t need to know why a particular strategy works, just which things work, so neuroscientific information for teachers pointlessly puts the cart before the horse, wasting teacher time and resources.

Is a direct connection between neuroscience and education (bypassing cognitive psychology) desirable, or even possible? With apologies to Microsoft Paint.

A second major bugbear is that many modern research projects that have been bracketed as ‘educational neuroscience’ are not really much more than repackaged testing of cognitive psychology hypotheses, given a glossy finish and the all-important “neuro-“ prefix. Take the recent ‘Education and Neuroscience’ research grants awarded by the Wellcome Trust and the Education Endowment Foundation. Although clearly informed by neuroscience, arguably all of the projects are testing cognitive rather than neuroscientific hypotheses. One example is the Sci-Napse Project, led by Paul Howard-Jones from Bristol University, based on research which has found a stronger neural response to a reward when the reward was uncertain, compared to when it was consistent. Though based on neuroscience research, the outcomes being tested here are very much cognitive and behavioural, such as the effect of the uncertain reward strategy on motivation and attainment. More egregious still for the sceptics is the ‘Spaced Learning’ project, which is testing the educational applications of the ‘Spacing Effect’ (the idea that information is more effectively learned in a number of spaced sessions than in one go). Yet this is a concept which has been known since Hermann Ebbinghaus published his treatise on memory in 1885; a time when the field of psychology, let alone neuroscience, was still barely formed!

Is there any hope?

I have spent a good deal of time in the last few years shuttling back and forth along Bruer’s bridge; so much so that I am sometimes reminded of the time I got lost in Sydney and ended up crossing the harbour four times. Whilst I have a lot of sympathy with the arguments above, I have come to the optimistic conclusion that ‘educational neuroscience’ research holds out the promise of huge benefits for education. The view from the bridge is perhaps not as bleak as it might seem, for the following reasons.

1. Who cares if it’s ‘neuroscience’ or ‘cognitive’? Good education research is good education research… and it’s desperately needed.

Firstly, there is clearly a tension between the two objections mentioned above. If you’re suspicious of anyone attempting to skirt the bridge of cognitive psychology, then you can’t criticise research when it turns out to basically be doing cognitive psychology all along. More broadly though, the quality and utility of a piece of research is entirely independent from the particular nomenclature we designate for it. If these are really cognitive or educational psychology studies, then fine. What is important, as far as I am concerned, is that they are large, well-funded and, in most cases, randomised pieces of educational research, in a field where the standards of research can vary hugely in quality. Any teacher who has attempted to mine the literature for signs of what they should be doing differently will be familiar with the enforced trawl through a sludge of underpowered and often poorly designed research in order to find that nugget of usable science. Teachers are always on the lookout for signs that they are doing the right thing, but all too often in the absence of clear guidelines and suggestions from the academic community, snake oil has oozed into the gap between research and practice. Where overzealous and premature marketing doesn’t penetrate, teaching practices are often passed on through an equally unscientific process of anecdote and personal testimony. Why this gulf exists is perhaps the subject for another blog, but anything credible that attempts to fill it, regardless of its position in the academic taxonomy, is fine by me.

N.B. It is important to make clear that I don’t think that even the most credible research findings are good enough to tell teachers exactly what they should be doing. As Tom Bennett is fond of pointing out, research results should be the start of the conversation, not the end. They just ensure that the conversation is being conducted in the right sort of areas.

2. “Neuro-” things are sexy, and sexy gets funded. If we have to call it “Educational Neuroscience” rather than “Cognitive Psychology”, then this doesn’t leave me feeling too dirty.

Speaks for itself this one really. We may not be totally accurate in our description of spaced learning as a neuroscientific concept (even if there has been a lot of neuroscientific research into the phenomenon), but if the end result provides useful pedagogical applications then you can call it ‘Ken Robinson’s Multiple Intelligences Brain Gym’ as far as I’m concerned. Actually, I’m not sure about that last bit.

Of course, we still need to make sure the funding goes to the right studies. I have read plenty of ‘educational neuroscience’ papers where brain scans seem to have been done as an afterthought and add nothing to the results, but this is more of a general issue with funding bodies understanding what they are funding than a problem for educational neuroscience per se.

3. Neuroscience helps to make our cognitive and educational theories realistic.

Teachers are currently bedevilled by educational schemes and theories which are simply not plausible given what we know about the neuroscience of learning and memory. From Brain Gym and Learning Styles – the time-honoured piñatas of evidence-based education – to more subtle but pervasive ideas such as constructivism, multiple intelligences and even the relative malleability of IQ, teachers attempting to sniff out bullshit find it woven depressingly deep into the fabric of their profession. Even if we have to pass through the bridge of cognitive psychology to get there, educational neuroscience can be hugely beneficial if it serves to point out which educational ideas do not make sense, and contributes to the effort to replace them with ones that do.

4. Give it time.

Educational neuroscience is still a baby. And like every baby we have to be careful not to throw it out with the soiled bathwater. Whilst we may never build a bypass around Bruer’s bridge through cognitive psychology (and perhaps we never should), our bridges can become bigger, hold more traffic, and greatly cut down the journey time from one end to the other. Nothing was more frustrating to me as a teacher than seeing ‘new’ educational ideas arrive which I knew had already been mostly discredited in the academic literature (neuro-linguistic programming would be one such example), yet such is the disconnect that this was not an uncommon occurrence. More investment in the field brings the possibility of a much faster transfer of knowledge from the neuroscience lab, to the educational psychology trial, to the everyday classroom, and this can only be a good thing.

For the time being at least, let’s ignore the semantics of precisely what this field is or isn’t. Instead I think we should celebrate a new age of what I hope will be well-funded, well controlled and well disseminated educational research which should be something we can all recognise as a good thing regardless of our original position. And if you still don’t agree then I’m sorry: I was only trying to build some bridges.
