It is more useful for teachers to see attention as an effect not a cause

We frequently urge our students (and ourselves!) to “pay attention”, but what do we really mean by this phrase? The notion of ‘paying attention to’ an object creates an impression of attention as a resource which we have at our disposal, ready to be deployed (or not) at our command. This is reinforced by the numerous metaphors we have for the concept (a filter, a spotlight, a zoom lens, even a glue). In all of these metaphors attention is cast in the role of a ‘tool’ for us to use. Going a level deeper, such metaphors all take for granted a reified concept of attention, i.e. that attention is a real, measurable ‘thing’. Increasingly in the academic study of attention, however, there is some opposition to these traditional notions of what attention is. These can generally be summarised as an effort to recast attention as an effect rather than a cause (e.g. here, here and here, amongst others).

I think that this seemingly niche academic debate actually has some interesting implications for educators. Thinking of attention as an effect rather than a cause can throw a new light on the understudied problem of attention in schools, and what teachers can do about it.

What’s the problem?

There are two main problems with the metaphors which reify attention as a resource, one logical and one practical. The logical one (which is perhaps of less relevance to educators, but still useful for fans of logical validity) is that the evidence for these models of attention often rests on circular arguments. For example, if we take the metaphor of attention as a spotlight, imagine the following dialogue, taken from this paper by Vince Di Lollo:

Person A: a stimulus flashed at a location just ahead of a moving object is perceived more promptly and more accurately. 

Person B: why is that? 

A: because the attentional spotlight is deployed to that location, and stimuli presented at an attended location are processed more promptly and more accurately. 

B: and how do we know that attention has been deployed to that location? 

A: we know it because stimuli presented at that location are perceived more promptly and more accurately. 

The practical problem is that metaphors such as this place the burden for the control of attention firmly on the student themselves, to deploy as their preferences or abilities allow. Now I am in no way arguing that students are not capable of exercising control over their attention, nor that teachers should be held responsible when a student’s attention wanes; indeed I would strongly repudiate this. I do think, however, that a model which casts attention as a resource of the student is unhelpful to teachers. Teachers looking to improve student performance using this model are left with few options, other than perhaps brain training (so far unimpressive) or vague appeals to a student’s better nature (“direct your attention towards this pleeeease”).

Attention as an effect not a cause

Far more productive for educators would be the discussions arising from seeing attention as an effect, rather than a cause. This reconceptualisation naturally invites the consideration of

  1. which conditions are most likely to engender the effect of focused attention?

and importantly…

  2. which information should we create these conditions for?

Again, this suggestion in no way denies that students are causal agents in their own behaviour, merely that teachers will be more empowered by a focus on the conditions that they can create whereby attention emerges as an effect.

I will look at each of these two questions in turn.

Which conditions are most likely to engender the effect of focused attention?

Decades of careful psychological work in dark laboratories have helped to confirm a lot of common-sense notions about how attention can be captured (e.g. by bright colours, unexpected shapes or other features which stand out, stimuli which move or loom, motivation, meaning, reward and so on). If these features are present in the task, we are generally more able to focus on the task. If they are present in a stimulus which is not part of the task, then we are more likely to be distracted.

So far so good; as teachers we need to try to make our stimuli as salient as possible and reduce other distractions. However, this apparent simplicity leads us on to the second, less commonly considered implication of considering attention as an effect: if we can create the conditions to direct student attention, what information exactly do we want to create these conditions for? In other words (returning to the old metaphor for simplicity's sake), what do we want student attention to be focused on?

Which information should we create these conditions for? Selecting the target of focused attention

Where attention is discussed in education, it tends to be focused on the ideas above, in terms of strategies for capturing attention. Indeed, the goal of my teacher training on this topic was entirely this, the creation of an attentive, engaged class. What exactly they should be engaged by seemed less of a concern. If attention could be attracted by the teacher, then learning was assumed to be an inevitability.

Sadly this position is mistaken. I have written before of the limited capacity 'bottleneck' of attention. This limited capacity means that only a tiny fraction of the information arriving into our perceptual systems will ever be processed to a meaningful degree. Therefore, whilst an attentive class is clearly a step in the right direction, students will only be learning efficiently if we as educators ensure that their attention is focused precisely on the stimuli that we want it to be. Just as becoming distracted by low-level disruption from other students may inhibit learning, so will engaging with superfluous material presented by the teacher. If attention is the result of creating the right conditions, then we need to be very clear about exactly what is worth creating those conditions for.

Take the case of PowerPoint slides. I spent many hours early in my teaching career crafting aesthetically pleasing PowerPoint slides. My backgrounds were salient, full of nice bright colours. Some of the words zoomed in from the side of the screen. Text was usually accompanied by a picture (or even a gif if I was feeling particularly creative), usually humorous and tangentially related to the main information. Looking back, especially through the lens of considering attention as an effect and questioning where I was encouraging that effect to occur, is sobering. My salient features were either surface-level features (the movement of the text rather than its content) or entirely irrelevant to what I wanted the students to know (the background and the pictures). I was actively inviting attention to be directed away from that which I thought was most important. This is why important and potentially impactful strategies such as dual coding, which combines visual and verbal materials (and which has been valuably popularised recently by figures such as Oliver Caviglioli), need to be treated with caution. Bad dual coding is not just ineffective: it leads to split attention or outright distraction.

Deciding on the right targets for our students' attention is a challenge that cuts right across education, from the pedagogical issue of how best to deliver information to the curricular decisions required to identify precisely what it is that we want students to know in the first place. I have been delighted to see an increased focus on curriculum amongst teacher networks as a result (e.g. here, here and many great posts here), though to my knowledge the specific link between the importance of curriculum design and attention has not been explored either in blogs or research.

A new way to view attention in the classroom

Viewing attention as an effect makes us value it (and the contribution that we can make to it) more. It makes us consider more carefully how to attract attention, but crucially also what we want to attract attention to. Capturing attention is not in itself the aim. The goal is to provide the optimal conditions so that attention is captured by the exact stimuli that we have identified as most valuable. I have tried to argue here that this process may be assisted if we define attention less as a cause of student behaviour and more as an effect of the conditions that we put in place. 


The Multidisciplinary Gardner

We all need to be multidisciplinary these days. University students are confidently told by their tutors that the future lies away from narrow specialisation; the true twenty-first century student will be able to traverse the arbitrary boundaries of specific subject categories with ease. This is all fine, but our enthusiasm for the polymathic can, I worry, have some serious dangers in a world where even achieving specialisation in one discipline is so challenging.

The application of psychology and neuroscience findings to education seems in many ways an archetypal example of such a modern multidisciplinary project, one to which increasing numbers of teachers and researchers are contributing. This interest is naturally welcome, given that I am a firm believer in the potential gains for education of taking a more scientifically informed approach to learning. However, the increasing number of academic researchers looking to make connections between their work and educational practice does raise some concerns about what it truly means to be 'multidisciplinary' in such a new and expanding field. Specifically, I would suggest that the following maxim would be a useful one both for researchers considering their own results, and for teachers looking to evaluate the potential contribution of research to their practice:

Multidisciplinarity doesn’t mean simply applying findings from your discipline into another one

Whilst this is an issue that has niggled at me for some time, the straw which broke the camel’s back this week was reading ‘The Gardener and the Carpenter’ by developmental psychologist Alison Gopnik. The book is an attempt to show, using developmental psychology evidence, that the modern fashion for ‘parenting’ (the ‘carpenter’ of the title, in which parents desperately try to shape their child into a particular pre-defined adult form) is misguided. I really wanted to like the book; given that I can’t stand ‘how to’ parenting books and find developmental psychology fascinating, I thought I would be onto a pretty sure thing. And in parts I did enjoy it. The parts describing the developmental psychology research were superb: delightfully, almost whimsically, creative experiments which produce fascinating results. Gopnik is a world-leading expert in the development of young children’s (under 7 or so) reasoning skills, and I read these sections with the confidence and excitement that comes from receiving a guided tour from a true specialist. It would have been a much shorter book had it stopped there, but I would have been a happier reader.

The basic message of the evidence presented is that young children are remarkably adept and efficient at learning from the adults around them, but also from their environment. This combination allows them both to absorb the wisdom and traditions of previous generations and to make new discoveries themselves. Too much of one of these two strands can be detrimental to the other; too much didactic adult instruction can impede natural curiosity and discovery. So far so good, for experiments on children who are mostly of pre-school age or just above.

Unfortunately, Gopnik cannot resist extrapolating the findings of these studies into suggestions for educational policy. Not just primary schools, but even secondary schools, would apparently benefit from a redesign to allow learning to occur in a more observational manner, with greater emphasis on learning by ‘apprenticeship’, where children develop ‘mastery’ from extended interactions with experts. Inquiry and discovery learning should therefore replace a school system which only teaches children ‘to learn how to go to school’ and ‘become experts at test-taking’. Is there evidence to support these conjectures? Does Gopnik have the same level of experience in the field of educational design as in developmental reasoning? Certainly, the continual references to studies dry up entirely in these sections. Instead, statements such as,

“Many of the most effective teachers, even in modern schools, use elements of apprenticeship. Ironically, these teachers are more likely to be found in the ‘extracurricular’ classes than in the required ones. The stern but beloved baseball coach or the demanding but passionate music teacher let children learn this way”

seem to suggest that her evidence is gathered mainly from film plots. In actual fact, such ‘minimal guidance’ instruction has generally been found to be much less effective, unless the students are already knowledgeable about the topic (Kirschner, Sweller, & Clark, 2006).

Gopnik is undoubtedly a multidisciplinary figure (she also writes philosophy papers), but I’m not sure that the educational system is one of those disciplines. As a philosopher, she will presumably be aware of Hume’s distinction that what ‘is’ does not imply what ‘ought’. The fact that children do have a great ability (sometimes better than adults) to learn through their own experimentation and interaction with the world doesn’t imply that they ought always to acquire their knowledge this way. Let’s say, for the sake of argument, that her ‘apprenticeship and inquiry’ model of education was the most effective one available. Even in this case, a range of other factors also govern judgements of ‘value’ in education, such as the time taken, resources, teacher capacity, motivation and many others¹. All these need to be weighed into the mix as well. As Dylan Wiliam has written, “Big effects may not be worth pursuing if they cost too much to secure. And very small effects may be important if they are inexpensive to implement.” Goodness knows how much it would cost in terms of hiring and training new teachers if we were to redesign schools around a small-group apprenticeship model. We’d need to be unbelievably confident that it was the right thing to do. As we saw above, however, there is good reason to think that it is not even the most effective way to learn in many situations.

In terms of crimes against multidisciplinarity, I do not by any means think that Gopnik is alone (or even the worst offender). Indeed I have noted before that I am often uncomfortable with many so-called ‘Educational Neuroscience’ studies being used to make unwarranted and premature applications to the classroom. To me, the role of educational neuroscience is not at all about the creation of new sparkly teaching methods, but too often this seems to be the aim (off the back of a few brain scans). Multidisciplinarity offers potentially huge benefits to learning and education, but in order to reap those rewards, and to avoid cul-de-sacs and wasted effort, we must be as confident as we can be that our ideas will stand up to scrutiny in the new field. We must all ensure that, as researchers, we understand enough about the discipline we are applying into to be able to objectively evaluate our ideas (or to work with others who can). If we don’t, then we aren’t really being multidisciplinary, and we probably aren’t making progress.

References:

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.

Footnotes:

  1. Incidentally, the is-ought distinction is also one that I think is useful for explaining why the issue of the role of genetics in learning (as ably summarised by Annie Brookman-Byrne recently) should not be as controversial as it often is. The (unarguable) fact that genetics does play a role in academic learning does nothing to imply anything normative about how people should be treated as a result of this.

My NPJ Science of Learning Interview – ‘Educational implications of attention and distraction in teenagers’

The Nature partner journal ‘Science of Learning’ website is another useful addition to the increasing number of resources encouraging a more scientific approach to education and learning.

It’s also just gone up greatly in my estimation (!), as they’ve published an interview with me about my PhD work. Read it here. If you’re a teacher or researcher and any of this sounds interesting to you, please feel free to get in contact.

 


Teach like a chimp! The validity-transportability paradox in teaching

A paradox of attempting to apply ideas from research into the real world is that the simplified environments of scientific experiments allow for the formation of extremely complex explanations, whilst the application of those ideas into the more complicated real world often requires that they are somewhat simplified. The ideal conditions required for creating validity, and those required for creating transportability (the easy transmission of an idea into the real world, to borrow a phrase from Jack Schneider’s 2014 book ‘From the Ivory Tower to the Schoolhouse’), are almost completely opposing. This clearly creates a dilemma: how much erosion of validity do we accept in order to allow a theory to become transportable?

‘The Chimp Paradox’… Paradox

Until recently I was something of a validity purist on this matter. I remember a seminar I attended last year with Vincent Walsh, a neuroscientist at UCL who studies sporting performance and decision-making under pressure and works with various GB sports teams. At one point, mention was made of Steve Peters, the psychiatrist who has also made a name working with some of the biggest names in UK sport, such as Chris Hoy, Victoria Pendleton, Steven Gerrard and Ronnie O’Sullivan, as well as running a lucrative consultancy sideline with hundreds of companies. Peters’ work, detailed in his book ‘The Chimp Paradox’, involves dividing the mind into two competing parts: a primitive “Chimp” brain (the limbic area), which deals with emotions, and a more rational “Human” brain (the frontal cortex). In Peters’ formulation the chimp brain works 5 times faster than the human brain. In some of his work with sportspeople a third region is introduced: the “Computer” brain, which is faster still (20 times as fast as the human and 4 times faster than the chimp!).

Peters’ clients are taught to recognise their mental states and to govern nerves, pressures and insecurities according to these three brain labels. Now, from a purely neuroscientific perspective, Peters’ ideas are at best a severe oversimplification, and at worst outright inaccurate. I won’t spend time here covering the reasons for this (though this page is a decent introduction to the difficulties of dividing the brain into regions according to their ‘development’). However (and this was the point made to me when I voiced these concerns), when you have Chris Hoy crediting Peters with his gold medals, you can’t really argue with his results. Sometimes an idea can be a ‘useful simplification’ (or even a ‘useful fiction’, depending on how stringent your criteria are), containing enough accurate information to help people whilst remaining widely transportable. The success of ‘The Chimp Paradox’, then, comes precisely from the fact that the complicated science behind Peters’ claims has been simplified enough to have broad accessibility and appeal to people’s everyday lives – ‘The Chimp Paradox’ Paradox, if you will.

Validity vs transportability in teaching

Whilst I’m sure this observation is far from new, it has struck an interesting chord with me recently in thinking about applications of research to teaching. I was reminded of it last week when I inevitably stumbled across yet another ‘Edu-Twitter’ debate about the chosen methods of a particular North-West London school. I know I shouldn’t, but, like a fire in a carpet warehouse, it’s hard not to slow down and watch the carnage unfold. This particular debate centred on the school’s use of certain principles of cognitive psychology (notably interleaving and spacing) as a justification for some of their methods. Some comments on the thread accused the school of oversimplifying complex theories (and implied that this therefore made them worthless). I might previously have agreed with this position, but as we have seen it is clear that some simplified scientific ideas, properly packaged, can be enormously useful to some people. If we are to embed evidence-based practice in school, then the first step is surely to embrace it in any form initially, and work out the finer details from there.

The difficulty is that there are no clear indicators as to where to draw the line between validity and transportability. Indeed, the ‘Goldilocks zone’ may be different for each idea anyway, depending, for example, on the transportability of the original idea and the extent to which it can be simplified whilst still retaining a coherent message. The downside of this process is that more complex theories (which may well resist simplification for very good reasons) will lack transportability, limiting the extent to which they are able to be widely adopted. As an example from another Twitter feed last week:
[Screenshot of tweet]

Whilst the post was widely appreciated in certain circles, I was struck by how Cognitive Load Theory, an idea that is so central to much educational scholarship (and which is potentially an extremely helpful concept for educators), has, in my experience at least, never really caught on in classrooms. I would argue that this might be because its rather nuanced division of cognitive load into three different types is not the sort of thing that is easily transported in 140 characters or casual break-time conversation. The validity/transportability balance for CLT is an interesting story in itself; this essay from Michael Pershan charts Sweller’s attempts to balance the complexity and validity of his theory with its transportability. It is hard to see how CLT could be further simplified without losing its essence, but equally, in its current form, I’m not convinced it’s that transportable.

Teach Like a Chimp

With apologies to Doug Lemov…

So what are the ‘useful simplifications’ in teaching? Which academic ideas can be easily simplified into transportable forms without losing their validity? I might suggest the following:

– Applied memory strategies (e.g. spacing, interleaving, testing etc)
– Elaboration (linking to previous knowledge)
– Encouraging metacognition
I would previously have suggested feedback, but following the EEF review into marking it’s clear that this issue is rather more complicated (or less well researched) than we might imagine.

What other ideas would people suggest?


From the Enlightenment to neurobollocks: The ‘myth of progress’ in teaching

In my last post I looked at recent research which illustrated that brain-training is not as effective as the adverts might make out, as well as reminding us that many of the benefits claimed by brain-training programs are available using existing, often time-honoured and rather mundane methods. This led me to think about why this bias towards the novel exists, to the point where we may systematically ignore solutions that have been effective for years.

The concept of universal education can be traced back to the Enlightenment; indeed it is one of the most enduring products of the period. In a “post-factual” age when, in the words of Stephen Fry,

the achievements of the enlightenment are questioned, ridiculed, misunderstood and traduced by those who would reverse the progress of mankind

it is notable that the notion of universal education has never been seriously questioned. It was the product of two major Enlightenment advances, one scientific and one philosophical. Firstly, scientific breakthroughs from the likes of Newton, Kepler and Galileo led to an optimistic outlook regarding humans’ ability to comprehend and shape the world around them. Subsequently, John Locke and other figures from the emerging philosophical school of Empiricism began to argue that knowledge could only be gained through the senses; through our interactions with the world and by our subsequent reflections on the impressions that these interactions created. Empiricism led naturally on to ideas of universal education; since we all have pretty similar faculties for the sensation of the world, there seemed no obvious reason why all people should not be able to benefit from educational experiences which had, to that point, only been available to a privileged few. Presumably, then, the more people that were educated, the faster still would be the progress and development of the species. The confluence of these two factors – optimism about our scientific capabilities and an empiricist notion of education for all – created a powerful narrative which persists today: humans are capable of greatness and education is the tool for that greatness to be realised as widely and as effectively as possible. Education as the engine of human progress. So far so good, and I agree…

But this optimism can also have a corollary. It can create a general belief that, the occasional blip notwithstanding, we are on something of an inexorable march of ‘progress’. Indeed the notion of ‘progress’ was an important one for many Enlightenment thinkers, who drew a sharp distinction between their own outlook and that of more ancient voices such as Plato and Aristotle, who saw society as a cycle, with periods of progress and development unavoidably followed by decline and disaster. Enlightenment thinkers, especially those armed with the emerging theories of evolution in the late 18th century and beyond, often presented ‘progress’ as an essential part of human nature, with our increasingly successful adaptation to our surroundings reduced to a simple (and inevitable) biological necessity. This narrative of progress is powerful and seductive, but it is also a potentially dangerous one. Theories regarding the ‘progression’ of the species have been at the heart of some of the worst of subsequent human thought, justifying eugenics and genocide. In a less serious form, however, a blinkered faith in human ‘progress’ can lead either to a casual over-optimism regarding our current actions, or to a tendency to embrace the ‘new’ and to reject the status quo. In both cases, this novelty bias can encourage us to change systems without due scrutiny being applied to their newer replacements.

Teaching, as we have said, is an Enlightenment profession. The nature of teaching means that many of the goals of the Enlightenment are also implicit assumptions of the profession: the belief in improvement through information, the conviction that the widest benefits for the world will come from the widest dissemination of knowledge, a passion for the democratisation of learning and so on. It is hard to imagine anyone entering the profession without holding these basic assumptions. In addition, it is not a profession where we can ever conceivably judge that we have done enough, or produced the ‘best possible’ results, so there is always a desire for progress: better exam results, value-added scores, enrolment figures, university entry rates etc etc – something could always be improved. Yet perhaps the same Enlightenment-era zeal which drives us can become something of a double-edged sword, leaving us vulnerable to falling for the ‘myth of progress’. I would argue that, just as it embodies many of the virtues of the Enlightenment, teaching is also prone to demonstrate the occasionally casual over-optimism of the time, and to embrace the novel over the time-honoured too unquestioningly. ‘Change for change’s sake’ is a frequent lament in the classroom in response to yet another management initiative, but teachers also need to critique their own classroom practice in the same spirit. How often do we jump to incorporate trendy new ideas or the latest cultural craze into our lessons (Pokemon Go, Minecraft, iPads etc etc), without really assessing how we expect them to make for a more effective learning experience? Another example might be the over-eager and premature adoption of new scientific ideas (gleefully exploited by unscrupulous edu-quacks), which has led to widespread misinformation about the brain and learning amongst teachers (see e.g. here), and bogus interventions like Brain Gym, the Dore program or inappropriate use of the ‘growth mindset’. We also have explicitly named ‘Progressive’ education movements, which may embody many modern values, some of which have been translated into educational programs without due assessment of the relevance or efficacy of this translation. Changes in conceptions of individual rights and freedoms have metamorphosed into doctrines of student choice or ‘personalised learning’, which in turn have engendered such ineffective educational enterprises as ‘free-schools’ or learning styles.

I should be clear here that I don’t think that these problems are unique to teaching as a profession; I am sure that a good deal of this novelty bias is a natural human tendency shared by us all. But I do think that the aims of the profession, along with the inherent difficulty in ever defining or measuring ‘success’, make it especially vulnerable to a headlong search for the next new magic idea. Sometimes, however, what we’ve got already may actually work pretty effectively. Let’s try to remember that for when the next bandwagon rolls into town.

References:

Dekker, S., Lee, N. C., Howard-Jones, P., & Jolles, J. (2012). Neuromyths in education: Prevalence and predictors of misconceptions among teachers. Frontiers in Psychology, 3, 429. https://doi.org/10.3389/fpsyg.2012.00429

Howard-Jones, P. (2014). Neuroscience and education: Myths and messages. Nature Reviews Neuroscience, 15, 817–824. https://doi.org/10.1038/nrn3817


How effective do teachers find academic research? How easy is it to use? Survey results 2

In the last post I concluded, based on a survey of 223 teachers, that teachers’ positive attitudes towards reading academic research on teaching were not always matched by their behaviour in real life. The remainder of the survey contained two sets of questions which may help to explain why this disparity exists. One asked how ‘easy’ teachers found it to build academic research they have read into their practice. The other asked how ‘effective’ they have found it when they do.

Academic research is not always easily accessible to even the interested outsider. Evidence also suggests that the approaches and engagement of non-specialists with such material are highly variable (Zeuli, 1994). In addition, research is often presented in fairly abstract terms, or based on experimental designs which have few obvious connections to a real classroom. It may be, therefore, that despite generally positive attitudes towards the use of research, teachers actually don’t find it very easy to apply what they read to life at the coal face.

A second possibility is that teachers do not find it difficult to apply the findings of research, but that they simply don’t find them very effective. There are many plausible reasons why this might be the case. The research could have come from a different educational setting (a rural, single-sex grammar school can be a very different place to a mixed, urban comprehensive), used slightly different resources, or staff may have had different training. Whilst no finding is a silver bullet and we should be patient when evaluating any new idea or technique, education is a multi-factorial game with a huge number of things that teachers could potentially be working on to improve results. They will, and should, only have patience with any new idea for so long if they don’t see results (unless there is very strong evidence that results will emerge slowly over time).

I therefore asked teachers how easy they had found it to build research findings into their practice, based on what they had read. A separate question asked them how effective they had found this when they had done so. As with the previous questions, respondents were separately asked about reading both research papers themselves, and research summaries/digests.

EASY to build in

My initial hypothesis when considering these questions was that teachers would find summaries of academic research easier to build into their practice than the research papers themselves (though not necessarily more effective).

Bearing in mind the caveats of the previous post about the very unequal group sizes here, I am again struck by the relatively consistent ratings across sectors. Collapsing the different types of school together and analysing the difference between research and summaries in each case, we find that for ease of use there is a significant difference between the groups (Research: M = 4.98, SD = 1.76; Summaries: M = 5.60, SD = 1.75; t = 4.251, p < 0.0001), suggesting that for the majority of respondents it has been easier to apply summaries of research findings to their practice than it has been when reading the research papers themselves. This is not necessarily surprising; the summaries may well have been written with teachers in mind and with a specific focus on translation into the classroom, whereas the research papers may not. Still, it is an important point in terms of suggesting future directions for the evidence-based teaching movement.
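For anyone who wants to run this kind of comparison on their own survey data, here is a minimal sketch of the analysis reported above, written in Python. The ratings below are simulated stand-ins (the real responses are not reproduced here), and I am assuming a paired design, since the same respondents rated both sources.

# A minimal sketch of the ease-of-use comparison above, using simulated
# stand-in ratings rather than the real survey responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_respondents = 223

# Hypothetical 1-10 ratings: ease of building research papers vs summaries
# into practice, one pair of ratings per respondent.
ease_papers = np.clip(np.round(rng.normal(5.0, 1.8, n_respondents)), 1, 10)
ease_summaries = np.clip(np.round(rng.normal(5.6, 1.8, n_respondents)), 1, 10)

# Paired t-test, on the assumption that the same respondents answered both questions.
t_stat, p_value = stats.ttest_rel(ease_summaries, ease_papers)

print(f"Papers:    M = {ease_papers.mean():.2f}, SD = {ease_papers.std(ddof=1):.2f}")
print(f"Summaries: M = {ease_summaries.mean():.2f}, SD = {ease_summaries.std(ddof=1):.2f}")
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

(With real data, the two arrays would simply be replaced by the respondents’ actual ratings for each question.)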

EFFECTIVE to build in

Comparing the effectiveness of the two sources of information, there is a marginally significant increase in the ratings for summaries compared to research papers, but this is not as strong as the previous trend (Research: M = 5.33, SD = 1.89; Summaries: M = 5.60, SD = 1.75; t = 1.974, p = 0.05). Further work would be needed to be more confident that teachers actually did find research summaries more effective (and of course judgements of what is meant by the term ‘effective’ may well vary widely).

Discussion of findings

For me, perhaps the most important result of the survey was the response to the question “Would you be in favour of receiving digests or summaries of educational research, containing evidence-based suggestions for teachers to try?” This received very positive responses across all sectors.

The wording of the question here to some extent presupposed that this was something that the respondent was not receiving already, which may of course not be the case. I assumed, however, that if they were already receiving such summaries then they would probably give a low score. The fact that the mean scores were so high, and that so many people provided high ratings (two-thirds of the sample answering 8 or above), is especially noteworthy.

Demanding a better supply

My main conclusion from this exercise is that there is a real demand amongst teachers for empirical information on their craft, but a shortage of material providing this. The demand seems to be lacking a supply. There is clearly an openness to engage with empirically tested ideas about teaching practice, and an appetite for these to be communicated to teachers. However, for whatever reason, they are not getting through. How can we make sure that fantastic resources such as the EEF are not preaching to the choir, but out there finding new converts? Is this something that could be publicised by organisations with a wide reach such as the TES? An EEF newsletter to all schools (the EBTN already do something similar, but on a smaller scale)? One thing that would guarantee it instant traction would be an OFSTED recommendation. This raises its own questions about whether top-down mandates are the best way to disseminate this kind of information, rather than hoping/assuming that popularity will slowly bubble up from below. It would be interesting to hear the thoughts of other teachers and researchers on these questions (or any of the other aspects of these last two posts).

Further research?

In retrospect, there are a number of questions that I didn’t ask in the survey, which I now wish I had.

  • A question about the most common reasons why teachers might not read academic research even though they might want to.
  • A question such as “How aware are you of the research base behind the strategies you use in the classroom?” or perhaps “How common is it for you to consider a teaching strategy, at least in part, because of research findings?”

Perhaps a lot of the strategies in use in classrooms are already evidence-based. It would have been useful to have a baseline figure for teachers’ awareness of the existing evidence base behind their preferred methods. These are issues that I would like to look at further at some point.

References:

Zeuli, J. S. (1994). How do teachers understand research when they read it? Teaching and Teacher Education, 10(1), 39–55. https://doi.org/10.1016/0742-051X(94)90039-6


What is the relationship between teachers and educational research? Survey results 1

In late October last year a survey by Nick Rose (@Nick_J_Rose) appeared on my Twitter feed, looking at the divisions in educational philosophy and pedagogical opinions that exist amongst teachers. When the results emerged soon after, they revealed a fascinatingly fractured profession, divided along lines of gender, rank and (especially) school type. Dichotomies between ‘progressive’ and ‘traditional’ ideas (false or otherwise) were alive and kicking, especially between primary and secondary schools, raising some extremely important questions about how we manage the transition between these two sectors with sometimes fundamentally opposed approaches to education. If you haven’t seen them, Nick’s four blog posts picking his results apart in forensic detail are highly recommended and can be read here.

Suitably inspired, I decided to do something very similar for an issue that I have been wondering about for a couple of years: the relationship between teachers and educational research. It is a platitude, but worth saying anyway, that teachers are always alert for ways to improve their practice and provision. The prevalence of ‘neuromyths’ and other pseudoscientific beliefs amongst teachers is often rightly cited as a worrying development (Dekker et al., 2012), but it does at least illustrate the strong appetite for ‘empirical’ information which could help them to do their job more successfully. But where teachers actually turn for this empirical information is often not clear; the selection and appraisal of academic research, for example, is entirely absent from most teacher training courses. My worry is that this creates a vacuum of demand without supply, and that into this vacuum the opportunistic and lightweight is sucked in before the rigorous and heavy.

My rough working hypotheses at the outset (based on my experience) were:

  1. Teachers would be generally in favour of using academic research to improve practice.
  2. Despite this, they would actually do so less than might be expected given (1).
  3. Teachers would find summaries of academic research easier to build into their practice than the research papers themselves.

The survey went ‘live’ on 11th November, with the last completion on 4th January. The sampling involved appeals on social networks but also (mainly due to my paltry Twitter following) personal emails to contacts in schools. Whether this more direct approach led to a more representative sample of teachers than a Twitter-only survey is an open question. I know that a good range of schools were contacted and distributed the survey on their email lists (or otherwise semi-publicised it, in line with hilariously complicated ‘information management’ policies), but not how many people from each centre completed it. Demographic data was not collected in the survey in the interests of keeping the length down. Regardless of the type of school, it is probably a very unrepresentative teacher who takes time out of their day to fill in a survey, so I can’t claim that the results represent the teaching population at large.

A total of 223 teachers completed the survey, for whose time I am hugely grateful. Perhaps unsurprisingly, given that I was a secondary school teacher, so were the majority of my contacts and so, subsequently, were most of the respondents (Primary = 26, Secondary = 191, Sixth Form/FE college = 2, University/HE = 4).

The limited numbers from sectors other than secondary make it difficult to draw any firm conclusions regarding differences between the groups. This should be borne in mind for some of the graphs below.

Are teachers interested in reading research?

Respondents were asked “Do you think that teachers should have opportunities to read educational research?” and responded using a ten point rating scale where 10 = “yes, regularly” and 1 = “never”.

[Chart: ‘Teachers reading educational research’ (mean ratings by sector)]

The mean ratings for all three sectors were pretty consistent, leading me to collapse the groups when looking at the overall distribution of responses:

[Histogram: distribution of responses on opportunities to read research]

I was expecting that the overall response to this question would be positive, but perhaps with some polarisation of responses towards the lower end of the scale.

The results here exceeded my expectations in terms of how strongly the distribution was skewed towards the top end of the scale. Clearly it seems that the majority of respondents feel very positively towards the idea that teachers should have opportunities to read educational research. From this we can draw the perhaps obvious, but useful, initial conclusion that teachers are not, by and large, anti-research.

The wording of this question was carefully chosen. I was worried that simply asking “Do you think that teachers should read educational research?” would lead to responses being confounded by worries about workload and other daily concerns. Asking about “opportunities” to read research allowed the distinction to be drawn between teachers’ theoretical opinions, and their day-to-day practice (which was assessed in the next question).

How often do teachers read educational research?

Q 2. asked “How often do you tend to read a piece of educational research?” (with subsequent clarification making it clear that this was referring to original research articles, rather than summaries or digests) and Q 4. “How often do you tend to read summaries or digests of educational research?” The summaries question was included in recognition of the fact that there are ways to engage with research results other than having to slog through each original article yourself. Websites such as the EEF or EBTN provide accessible research summaries, as well as tips on how best to incorporate them into lessons. I was interested to see if these options were popular alternatives to the ‘real thing’.

[Chart: how often teachers read research articles and summaries, by sector]

Again it’s worth pointing out here that the small numbers in the ‘6th form/FE/HE’ group especially mean that we should not draw any major conclusions from differences between the groups. It would not surprise me if the ‘6th form/FE/HE’ group did score more highly in a larger-scale survey here, as their daily practice might actually bring them into contact with research articles more frequently (my own reading of Educational Psychology articles only really started when I had to teach the subject!). Otherwise, it is interesting to note the increase in the frequency of reading summaries compared to reading the research papers themselves. Collapsing the school types into one group and just comparing responses to the ‘research articles’ question with the ‘summaries’ question, the difference between the two was significant (Research: M = 2.78, SD = 1.25; Summaries: M = 3.45, SD = 1.31; t = 5.552, p < 0.0001), suggesting that teachers read research summaries more often than the papers themselves. The histograms below (with all school types amalgamated) also help to illustrate this difference.

Although the distributions are relatively similar, it seems much more common for teachers to have regular (weekly) contact with research summaries than with the papers themselves. This seems to be where the difference between the groups revealed by the t-test is primarily found. Despite this difference, for both histograms fewer than half the respondents fell into the top two categories of ‘weekly’ or ‘monthly’, meaning that the majority of the sample are accessing research every few months or less.

One final piece of analysis was to see whether it is the same people who give high ratings in both categories:

Perhaps some people read lots of both research papers and research summaries, and other people read neither.

[Scatter plot: reading frequency for papers vs summaries]

Interestingly, although there is a positive correlation between the two questions (correlation coefficient r = .488, r² = .2377), it is not extremely strong, so there is clearly some variation here within the sample, with some people regularly reading one medium but not the other. From this data it’s not clear whether this is due to availability or preference.
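For completeness, here is a similar minimal sketch of the correlation between the two frequency questions, again in Python and again using simulated stand-in data rather than the real responses.

# A minimal sketch of the papers-vs-summaries correlation above, using
# simulated stand-in frequency codes rather than the real survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_respondents = 223

# Hypothetical frequency codes (higher = reads more often), built so that
# the two measures are related but far from identical.
freq_papers = rng.integers(1, 6, size=n_respondents)
freq_summaries = np.clip(freq_papers + rng.integers(-2, 3, size=n_respondents), 1, 5)

r, p = stats.pearsonr(freq_papers, freq_summaries)
print(f"r = {r:.3f}, r^2 = {r**2:.4f}, p = {p:.4g}")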

 

So, are teachers interested in reading research?

Yes… in theory!

It seems fairly clear from the data above that teachers are interested in academic research, and alive to the potential for using it in their own practice. However, this positivity towards reading research in principle is not completely reflected in practice, with over half of the sample reading a research paper or summary only every few months or less. These findings do seem to support my initial hypotheses 1 and 2.

So what are the barriers which are preventing good intentions becoming reality? The most obvious one is probably time. It is a very unusual school (though I have heard of them) which provides allocated time for its staff to engage with research on teaching and education. More commonly, it is a ‘desirable extra’ for which no actual provision is made. This is a ridiculous situation, making engagement with empirical data about how to do the job better a somewhat niche and onerous add-on.

However, there are other possibilities which might help to explain the gap between intention and practice, besides school timetabling and priorities. The next post will look at two possible explanations for why this might be, focusing on the content of the academic literature itself.

 

References

Dekker, S., Lee, N. C., Howard-Jones, P., & Jolles, J. (2012). Neuromyths in education: Prevalence and predictors of misconceptions among teachers. Frontiers in Psychology, 3, 429. https://doi.org/10.3389/fpsyg.2012.00429