We all know the basic rules for presenting information: not too many words on a slide at once, include pictures, recap your argument from time to time, and so on. I suspect, however, that most of us follow these rules more for stylistic reasons than out of any specific scientific strategy. A very neat study which came out last week, though, illustrates just how much the way information is presented really matters, and how different presentation formats can lead to wildly different outcomes.
Duncan et al. (2017) begin with the claim that much of the power of human cognition (and especially the facet of it known as ‘fluid intelligence’, or FI) rests heavily on the principle of compositionality – the ability to break down complex mental structures into simple parts (and, vice versa, to build them back up again). They set out to test whether this ability determines performance on a commonly administered test of fluid intelligence: matrix reasoning tasks.
In a standard matrix reasoning problem (such as the one given below), to quote Duncan et al.:
the task is to decide which of the four response alternatives at the bottom completes the matrix at the top. To determine the correct solution, it is necessary to take account of three varying stimulus features: whether the top part is outline or black, whether the left part is curved or angled, and whether the right part is straight or bowed. Only by considering all three features can the correct solution be determined, and reflecting the importance of complexity, if the problem has fewer varying features, it becomes progressively easier to solve.
Success in matrix reasoning tasks, then, may be due to the ability to break down the composite images into simpler constituent parts. People who do poorly on matrix reasoning tests such as these seem to struggle to keep track of the multiple composite parts (e.g. the top, left and right sections of the example above, all of which vary independently of one another).
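The segmentation idea can be made concrete with a toy sketch (my own illustration, not code from the paper, with invented feature values and a simple alternation rule standing in for the real stimuli): because the three features vary independently, the ‘separated’ format lets each part be predicted and drawn in turn, while the ‘combined’ format requires assembling all three predictions into one composite answer and matching it against the response alternatives.

```python
# Toy model of a three-feature matrix item. Each feature is assumed to
# alternate between two values across the row (a period-2 pattern), so
# the next value matches the value two positions back.

def predict_feature(values):
    """Predict the next value of one alternating binary feature."""
    return values[-2]

item = {
    "top":   ["outline",  "black", "outline"],   # next -> "black"
    "left":  ["curved",   "angled", "curved"],   # next -> "angled"
    "right": ["straight", "bowed", "straight"],  # next -> "bowed"
}

def solve_separated(item):
    """'Separated' format: each feature has its own grid, so each part
    is predicted and 'drawn' in turn -- no composite answer is ever
    held in working memory."""
    answer = {}
    for feature, values in item.items():
        answer[feature] = predict_feature(values)  # draw this part, move on
    return answer

def solve_combined(item, alternatives):
    """'Combined' format: all three per-feature predictions must be
    assembled into one composite answer and matched against the
    multiple-choice response alternatives."""
    target = {f: predict_feature(v) for f, v in item.items()}
    return next(a for a in alternatives if a == target)
```

In code the two solvers look similarly easy, which is rather the point: the difficulty gap in humans comes not from the rule for any single feature but from having to hold and compose all three at once in the combined format.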
Duncan et al. created 20 reasoning tasks, each with three varying parts (as in the example above). Ten of the problems were presented in a traditional (or what Duncan et al. call the ‘combined’) format, with the three parts varying across three pictures. However, instead of selecting their answer from four alternatives as in the standard version above, participants were required to draw the answer into a box with a common feature already provided (see below; in this case a horizontal line). In the words of Duncan et al.:
By constructing matrix items from multiple parts and allowing answers for each part to be drawn in turn, we removed the requirement to store intermediate results and finally synthesise them into a single answer.
In other words, by drawing rather than selecting the answer, there is no requirement to hold the changes to all three parts concurrently in working memory; each can be analysed and drawn into the answer box in turn. All that is required is the cognitive insight that the problem can be segmented into three parts, and the attentional control to focus on each segmented part in isolation. Despite this apparent simplification of the procedure, however, participants with low fluid intelligence performed poorly on these modified matrices; success rates at solving these problems were closely related to participants’ performance on other measures of fluid intelligence. This is perhaps not surprising, given that matrix problems are themselves used as a test of fluid intelligence, but it does at least show that this modified answer format still provides a valid test of FI. However, Duncan et al.’s clever innovation was to present the other 10 reasoning problems in a different format, one which removed the compositionality demand entirely by providing separate grids for each of the three varying parts of the matrix (see below for the ‘separated’ version of the same task as above).
Importantly, once the segmentation was done for them, all participants, regardless of fluid intelligence scores, achieved close to perfect scores when drawing the answers to these ten tasks. Duncan et al. call the ‘separated’ versions of the tasks “trivially easy” and see them merely as a means to support their conclusion regarding the importance of compositionality for performance on FI measures. Beyond this question about exactly what FI tests measure, however, I think an additional conclusion can be drawn from the paper, one pertaining to education: presentation matters.
Here we have two ways of attempting to elicit an identical response from a participant: the combined and separated formats of the matrix problem. Success on the combined format requires either a sufficiently developed compositional sense to segment the image into its three parts, or perhaps a sufficiently large working memory capacity to hold all three parts in mind concurrently. The separated format imposes neither of these cognitive burdens, and as a result even participants who scored very poorly on the combined format are able to display high levels of accuracy. Whilst Duncan et al. might label the separated format “trivially easy”, it could just as well be labelled “explicitly broken down”. Were the participants to be given a series of separated matrices to do first, along with clear teacher instruction which gradually built up to attempting the combined format, it seems highly probable that even those participants who scored very poorly on the combined puzzles to start with would become more successful¹.
I think this study provides a great illustration of one of the most challenging aspects of teaching – finding a way to present complex new material in a form that initially minimises the cognitive demand on learners – as well as clearly demonstrating the possible gains if a successful strategy is used. It is the sort of judgement that many teachers make implicitly when deciding how to present new material, without always having a clear scientific framework for how to minimise the cognitive demand. Compositionality, alongside the existing principles of Cognitive Load Theory², can help to provide a more nuanced understanding of exactly how teachers might segment and present new information, for the benefit of all.
Tips on the use of CLT in the presentation of information can be found here and here, but those with a more detailed interest may enjoy Clark, Nguyen and Sweller’s book ‘Efficiency in Learning: Evidence-Based Guidelines to Manage Cognitive Load’.
1. Duncan et al. do report that there was no separated-to-combined practice effect (i.e. people didn’t get better at combined puzzles if they were in the group which tried the separated format first, compared with the group which completed the experiment in the reverse order)… but this does not tell us how they would fare if that practice were also paired with explicit teaching.
2. I actually think that compositionality could possibly be subsumed within the existing structure of CLT (it is a form of what CLT would call ‘element interactivity’), but my concern here is less with categorisation than with the general point about the effect of effective presentation.
Duncan, J., Chylinski, D., Mitchell, D. J., & Bhandari, A. (2017). Complexity and compositionality in fluid intelligence. Proceedings of the National Academy of Sciences. https://doi.org/10.1073/pnas.1621147114