A case study of educational innovation?

Design and Assessment of an Online Prelab Model in General Chemistry


Keith S. Taber


Case study is meant to be naturalistic – whereas innovation sounds like an intervention. But interventions can be the focus of naturalistic enquiry.

One of the downsides of having spent years teaching research methods is that one cannot help but notice how so much published research departs from the ideal models one offers to students. (Which might be seen as a polite way of saying authors often seem to get key things wrong.) I used to teach that how one labelled one's research was less important than how well one explained it. That is, different people would have somewhat different takes on what is, or is not, grounded theory, case study or action research, but as long as an author explained what they had done, and could adequately justify why, the choice of label for the methodology was of secondary importance.

A science teacher can appreciate this: a student who tells the teacher they are doing a distillation when they are actually carrying out reflux – but who clearly explains what they are doing and why – will still be understood (even if the error should be pointed out). On the other hand, if a student has the right label but an alternative conception, this is likely to be a more problematic 'bug' in the teaching-learning system. 1

That said, each type of research strategy has its own particular weaknesses and strengths so describing something as an experiment, or a case study, if it did not actually share the essential characteristics of that strategy, can mislead the reader – and sometimes even mislead the authors such that invalid conclusions are drawn.

A 'case study', that really is a case study

I made reference above to action research, grounded theory, and case study – three methodologies which are commonly name-checked in education research. There are a vast number of papers in the literature with one of these terms in the title, and a good many of them do not report work that clearly fits the claimed approach! 2


The case study was published in the Journal for the Research Center for Educational Technology

So, I was pleased to read an interesting example of a 'case study' that I felt really was a case study: 'Design and assessment of an online prelab model in general chemistry: A case study' (Llorens-Molina, 2009). Although I suspect some other authors might have been tempted to describe this research differently.

Is it a bird, is it a plane; no it's…

Llorens-Molina's study included an experimental aspect. A cohort of learners was divided into two groups to allow the researcher to compare two different educational treatments; then, measurements were made to compare outcomes quantitatively. That might sound like an experiment. Moreover, this study reported an attempt to innovate in a teaching situation, which gives the work a flavour of action research. Despite this, I agree with Llorens-Molina that the work is best characterised as a case study.

Read about experiments

Read about action research


A case study focuses on 'one instance' from among many


What is a case study?

A case study is an in-depth examination of one instance: one example – of something for which there are many examples. The focus of a case study might be one learner, one teacher, one group of students working together on a task, one class, one school, one course, one examination paper, one text book, one laboratory session, one lesson, one enrichment programme… So, there is great variety in what kind of entity a case study is a study of, but what case studies have in common is they each focus in detail on that one instance.

Read about case study methodology


Characteristics of case study

Case studies are naturalistic studies, which means they are studies of things as they are, not attempts to change things. The case has to be bounded (a reader of a case study learns what is in the case and what is not) but tends to be embedded in a wider context that impacts upon it. That is, the case is entangled in a context from which it could not easily be extracted and still be the same case. (Imagine moving a teacher with her class from their school to have their lesson in a university where it could be observed by researchers – it would not be 'the same lesson' as would have occurred in situ).

The case study is reported in detail, often in a narrative form (not just statistical summaries) – what is sometimes called 'thick description'. Usually several 'slices' of data are collected – often different kinds of data – and often there is a process of 'triangulation' to check the consistency of the account presented in relation to the different slices of data available. Although case studies can include analysis of quantitative data, they are usually seen as interpretive, as the richness of the data available tends to reflect complexity and invite nuance.



Design and Assessment of an Online Prelab Model in General Chemistry

Llorens-Molina's study explored the use of prelabs that are "used to introduce and contextualize laboratory work in learning chemistry" (p.15), and in particular "an alternative prelab model, which consists of an audiovisual tutorial associated with an online test" (p.15).

An innovation

The research investigated an innovation in teaching practice,

"In our habitual practice, a previous lecture at the beginning of each laboratory session, focused almost exclusively on the operational issues, was used. From our teaching experience, we can state that this sort of introductory activity contributes to a "cookbook" way to carry out the laboratory tasks. Furthermore, the lecture takes up valuable time (about half an hour) of each ordinary two-hour session. Given this set-up, the main goal of this research was to design and assess an alternative prelab model, which was designed to enhance the abilities and skills related to an inquiry-type learning environment. Likewise, it would have to allow us to save a significant amount of time in laboratory sessions due to its online nature….

a prelab activity developed …consists of two parts…a digital video recording about a brief tutorial lecture, supported by a slide presentation…[followed by] an online multiple choice test"

Llorens-Molina, 2009, p.16-17

Not action research?

The reference to shifting "our habitual practice" indicates this study reports practitioner research. Practitioner studies, such as this, that test an innovation are often labelled by authors as 'action research'. (Indeed, sometimes, the fact that research is carried out by practitioners looking to improve their own practice is seen as sufficient for action research: when actually this is a necessary, but not sufficient, condition.)

Genuine action research aims at improving practice, not simply seeing if a specific innovation is working. This means action research has an open-ended design, and is cyclical – with iterations of an innovation tested and the outcomes used as feedback to inform changes in the innovation. (Despite this, a surprising number of published studies labelled as action research lack any cyclic element, simply reporting one iteration of an innovation.) Llorens-Molina's study does not have a cyclic design, so would not be well-characterised as action research.

An experimental design?

Llorens-Molina reports that the study was motivated by three hypotheses (p.16):

  • "Substituting an initial lecture by an online prelab to save time during laboratory sessions will not have negative repercussions in final examination marks.
  • The suggested online prelab model will improve student autonomy and prerequisite knowledge levels during laboratory work. This can be checked by analyzing the types and quantity of SGQ [student generated questions].
  • Student self-perceptions about prelab activities will be more favourable than those of usual lecture methods."

To test these hypotheses the student cohort was divided into two groups, to be split between the customary and innovative approach. This seems very much like an experiment.

It may be useful here to draw a distinction between two levels of research design – methodology (akin to strategy) and techniques (akin to tactics). In research design, a methodology is chosen to meet the overall aims of the study, and then one or more research techniques are selected consistent with that methodology (Taber, 2013). Experimental techniques may be included in a range of methodologies, but experiment as an overall methodology has some specific features.

Read about Research design

In a true experiment there is random assignment to conditions, and often there is an intention to generalise results to a wider population considered to be sampled in the study. Llorens-Molina reports that although inferential statistics were used to test the hypotheses, there was no intention to offer statistical generalisation beyond the case. The cohort of students was not assumed to be a sample representing some wider population (such as, say, undergraduates on chemistry courses in Spain) – and, indeed, clearly such an assumption would not have been justified.

Case study is naturalistic – but an innovation is an intervention in practice…

Case study is said to be naturalistic research – it is a method used to understand and explore things as they are, not to bring about change. Yet, here the focus is an innovation. That seems a contradiction. It would be a contradiction if the study was being carried out by external researchers who had asked the teaching team to change practice for the benefits of their study. However, here it is useful to separate out the two roles of teacher and researcher.

This is a situation that I commonly faced when advising graduates preparing for school teaching who were required to carry out a classroom-based study into an aspect of their school placement practice context as part of their university qualification (the Post-Graduate Certificate in Education, P.G.C.E.). Many of these graduates were unfamiliar with research into social phenomena. Science graduates often brought a model of what worked in the laboratory to their thinking about their projects – and had a tendency to think that transferring the experimental approach to classrooms (where there are usually a large number of potentially relevant variables, many of which cannot be controlled) would be straightforward.

Read 'Why do natural scientists tend to make poor social scientists?'

The Cambridge P.G.C.E. teaching team put into place a range of supports to introduce graduates preparing for teaching to the kinds of education research useful for teachers who want to evaluate and improve their own teaching. This included a book written to introduce classroom-based research that drew heavily on analysis of published studies (Taber, 2007; 2013). Part of our advice was that those new to this kind of enquiry might want to consider action research and case study as suitable options for their small-scale projects.


Useful strategies for the novice practitioner-researcher (Figure: diagram used in working with graduates preparing for teaching, from Taber, 2010)

Simplistically, action research might be considered best suited to a project to test an innovation or address a problem (e.g., evaluating a new teaching resource; responding to behavioural issues), and case study best suited to an exploratory study (e.g., what do Y9 students understand about photosynthesis?; what is the nature of peer dialogue during laboratory working in this class?) However, it was often difficult for the graduates to carry out authentic action research as the constraints of the school-based placements seldom allowed them to test successive iterations of the same intervention until they found something like an optimal specification.

Yet, they often were in a good position to undertake a detailed study of one iteration, collecting a range of different data, and so producing a detailed evaluation. That sounds like a case study.

Case study is supposed to be naturalistic – whereas innovation sounds like an intervention. But some interventions in practice can be considered the focus of naturalistic enquiry. My argument was that when a teacher changes the way they do something to try and solve a problem, or simply to find a better way to work, that is a 'natural' part of professional practice. The teacher-researcher, as researcher, is exploring something the fully professional teacher does as a matter of course – seek to develop practice. After all, our graduates were being asked to undertake research to give them the skills expected to meet professional teaching standards, which

"clearly requires the teacher to have both the procedural knowledge to undertake small-scale classroom enquiry, and 'conceptual frameworks' for thinking about teaching and learning that can provide the basis for evaluating their teaching. In other words, the professional teacher needs both the ability to do her own research and knowledge of what existing research suggests"

Taber, 2013, p.8

So, the research is on something that is naturally occurring in the classroom context, rather than an intervention imported into the context in order to answer an external researcher's questions. A case study of an intervention introduced by practitioners themselves can be naturalistic – even if the person implementing the change is the researcher as well as the teacher.


If a teacher-researcher (qua researcher) wishes to enquire into an innovation introduced by the teacher-researcher (qua teacher) then this can be considered as naturalistic enquiry


The case and the context

In Llorens-Molina's study, the case was a sequence of laboratory activities carried out by a cohort of undergraduates undertaking a course of General and Organic Chemistry as part of an Agricultural Engineering programme. So, the case was bounded (the laboratory part of one taught course) and embedded in a wider context – a degree programme in a specific institution in Spain: the Polytechnic University of Valencia.

The primary purpose of the study was to find out about the specific innovation in the particular course that provided the case. This was then what is known as an intrinsic case study. (When a case is studied primarily as an example of a class of cases, rather than primarily for its own interest, it is called an instrumental case study).

Llorens-Molina recognised that what was found in this specific case, in its particular context, could not be assumed to apply more widely. There can be no statistical generalisation to other courses elsewhere. In case study, the intention is to offer sufficient detail of the case for readers to make judgements of the likely relevance to other contexts of interest (so-called 'reader generalisation').

The published report gives a good deal of information about the course, as well as much information about how data was collected and, equally important, analysed.

Different slices of data

Case study often uses a range of data sources to develop a rounded picture of the case. In this study the identification of three specific hypotheses (less usual in case studies, which often have more open-ended research questions) led to the collection of three different types of data.

  • Students were assessed on each of six laboratory activities. A comparison was made between the prelab condition and the existing approach.
  • Questions asked by students in the laboratories were recorded and analysed to see if the quality/nature of such questions was different in the two conditions. A sophisticated approach was developed to analyse the questions.
  • Students were asked to rate the prelabs through responding to items on a questionnaire.

This approach allowed the author to go beyond simply reporting whether hypotheses were supported by the analysis, to offer a more nuanced discussion around each feature. Such nuance is not only more informative to the reader of a case study, but reflects how the researcher, as practitioner, has an ongoing commitment to further develop practice and not see the study as an end in itself.

Avoiding the 'equivalence' and the 'misuse of control groups' problems

I particularly appreciate a feature of the research design that many educational studies that claim to be experiments could benefit from. To test his hypotheses Llorens-Molina employed two conditions or treatments, the innovation and a comparison condition, and divided the cohort: "A group with 21 students was split into two subgroups, with 10 and 11 in each one, respectively". Llorens-Molina does not suggest this was based on random assignment, which is necessary for a 'true' experiment.

In many such quasi-experiments (where randomisation to condition is not carried out, and is indeed often not possible) the researchers seek to offer evidence of equivalence before the treatments occur. After all, if the two subgroups are different in terms of past subject attainment or motivation or some other relevant factor (or, indeed, if there is no information to allow a judgement regarding whether this is the case or not), no inferences about an intervention can be drawn from any measured differences. (Although that does not always stop researchers from making such claims regardless: e.g., see Lack of control in educational research.)

Another problem is that if learners are participating in research but are assigned to a control or comparison condition then it could be asked if they are just being used as 'data fodder', and whether that is fair to them. This is especially so in those cases (so, not this one) where researchers require that the comparison condition is educationally deficient – many published studies report a control condition where school students have effectively been lectured to, and no discussion work, group work, practical work, digital resources, et cetera, have been allowed, in order to ensure a stark contrast with whatever supposedly innovative pedagogy or resource is being evaluated (Taber, 2019).

These issues are addressed in research designs which have a compensatory structure – in effect the groups switch between being the experimental and comparison condition – as here:

"Both groups carried out the alternative prelab and the previous lecture (traditional practice), alternately. In this way, each subgroup carried out the same number of laboratory activities with either a prelab and previous lecture"

Llorens-Molina, 2009, p.19

This is good practice both from methodological and ethical considerations.


The study used a compensatory design which avoids the need to ensure both groups are equivalent at the start, and does not disadvantage one group. (Figure from Llorens-Molina, 2009, p.22 – published under a creative commons Attribution-NonCommercial-NoDerivs 3.0 United States license allowing redistribution with attribution)
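
To make the alternation concrete, here is a minimal sketch (my own illustration in Python; the subgroup labels are invented, though the six laboratory activities are as reported in the paper) of how a compensatory, crossover-style allocation works – each subgroup meets both conditions equally often.

```python
# A sketch of a compensatory (crossover) allocation: across the six
# laboratory activities, the two subgroups alternate between the online
# prelab and the traditional previous lecture, so each subgroup
# experiences each condition the same number of times.
activities = [f"Lab {i}" for i in range(1, 7)]  # six laboratory activities

for i, lab in enumerate(activities):
    if i % 2 == 0:
        one, two = "online prelab", "previous lecture"
    else:
        one, two = "previous lecture", "online prelab"
    print(f"{lab}: subgroup 1 -> {one:16} subgroup 2 -> {two}")
```

Because each subgroup acts as its own comparison across the sequence, any initial difference between the subgroups matters much less than in a simple two-group design.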

A case of case study

Do I think this is a model case study that perfectly exemplifies all the claimed characteristics of the methodology? No, and very few studies do. Real research projects, often undertaken in complex contexts with limited resources and intractable constraints, seldom fit such ideal models.

However, unlike some studies labelled as case studies, this study has an explicit bounded case and has been carried out in the spirit of case study that highlights and values the intrinsic worth of individual cases. There is a good deal of detail about aspects of the case. It is in essence a case study, and (unlike what sometimes seems to be the case [sic]) not just called a case study for want of a methodological label. Most educational research studies examine one particular case of something – but (and I do not think this is always appreciated) that does not automatically make them case studies. Because it has been both conceptualised and operationalised as a case study, Llorens-Molina's study is a coherent piece of research.

Given how, in these pages, I have often been motivated to call out studies I have read that I consider have major problems – major enough to be sufficient to undermine the argument for the claimed conclusions of the research – I wanted to recognise a piece of research that I felt offered much to admire.


Work cited:

Notes:

1 I am using language here reflecting a perspective on teaching as being based on a model (whether explicit or not) in the teacher's mind of the learners' current knowledge and understanding and how this will respond to teaching. That expects a great deal of the teacher, so there are often bugs in the system (e.g., the teacher over-estimates prior knowledge) that need to be addressed. This is why being a teacher involves being something of a 'learning doctor'.

Read about the learning doctor perspective on teaching


2 I used to teach sessions introducing each of these methodologies when I taught on an Educational Research course. One of the class activities was to examine published papers claiming the focal methodology, asking students to see if studies matched the supposed characteristics of the strategy. This was a course with students undertaking a very diverse range of research projects, and I encouraged them to apply the analysis to papers selected because they were of particular interest and relevance to their own work. Many examples selected by students proved to offer a poor match between claimed methodology and the actual research design of their study!

Lack of control in educational research

Getting that sinking feeling on reading published studies


Keith S. Taber


this is like finding that, after a period of watering plant A, it is taller than plant B – when you did not think to check how tall the two plants were before you started watering plant A

Research on prelabs

I was looking for studies which explored the effectiveness of 'prelabs', activities which students are given before entering the laboratory to make sure they are prepared for practical work, and can therefore use their time effectively in the lab. There is much research suggesting that students often learn little from science practical work, in part because of cognitive overload – that is, learners can be so occupied with dealing with the apparatus and materials they have little capacity left to think about the purpose and significance of the work. 1


Okay, so is THIS the pipette?
(Image by PublicDomainPictures from Pixabay)

Approaching a practical work session having already spent time engaging with its purpose and associated theories/models, and already having become familiar with the processes to be followed, should mean students enter the laboratory much better prepared to use their time efficiently, and much better informed to reflect on the wider theoretical context of the work.

I found a Swedish paper (Winberg & Berg, 2007) reporting a pair of studies that tested this idea by using a simulation as a prelab activity for undergraduates about to engage with an acid-base titration. The researchers tested this innovation by comparisons between students who completed the prelab before the titration, and those who did not.

The work used two basic measures:

  • types (sophistication) of questions asked by students during the lab. session
  • elicitation of knowledge in interviews after the laboratory activity

The authors found some differences (between those who had completed the prelab and those that had not) in the sophistication of the questions students asked, and in the quality of the knowledge elicited. They used inferential statistics to suggest at least some of the differences found were statistically significant. From my reading of the paper, these claims were not justified.

A peer reviewed journal (no, really, this time)

This is a paper in a well respected journal (not one of the predatory journals I have often discussed on this site). The Journal of Research in Science Teaching is published by Wiley (a major respected publisher of academic material) and is the official journal of NARST (which used to stand for the National Association for Research in Science Teaching – where 'national' referred to the USA 2). This is a journal that does take peer review very seriously.

The paper is well-written and well-structured. Winberg and Berg set out a conceptual framework for the research that includes a discussion of previous relevant studies. They adopt a theoretical framework based on Perry's model of intellectual development (Taber, 2020). There is considerable detail of how data was collected and analysed. This account is well-argued. (But, you, dear reader, can surely sense a 'but' coming.)

Experimental research into experimental work?

The authors do not seem to explicitly describe their research as an experiment as such (as opposed to adopting some other kind of research strategy such as survey or case study), but the word 'experiment' and variations of it appear in the paper.

For one thing, the authors refer to students' practical work as being experiments,

"Laboratory exercises, especially in higher education contexts, often involve training in several different manipulative skills as well as a high information flow, such as from manuals, instructors, output from the experimental equipment, and so forth. If students do not have prior experiences that help them to sort out significant information or reduce the cognitive effort required to understand what is happening in the experiment, they tend to rely on working strategies that help them simply to cope with the situation; for example, focusing only on issues that are of immediate importance to obtain data for later analysis and reflective thought…"

Winberg & Berg, 2007

Now, some student practical work is experimental, where a student is actively looking to see what happens when they manipulate some variable to test a hypothesis. This type of practical work is sometimes labelled enquiry (or inquiry in US spelling). But a lot of school and university laboratory work is undertaken to learn techniques, or (probably more often) to support the learning of taught theory – where it is usually important the learners know what is meant to happen before they begin the laboratory activity.

Winberg and Berg refer to the 'laboratory exercise' as 'the experiment' as though any laboratory work counts as an experiment. In Winberg and Berg's research, students were asked about their "own [titration] experiment", despite the prelab material involving a simulation of the titration process, in advance of which "the theoretical concepts, ideas, and procedures addressed in the simulation exercise had been treated mainly quantitatively during the preceding 1-week instructional sequence". So, the laboratory titration exercise does not seem to be an experiment in the scientific sense of the term.

School children commonly describe all practical work in the lab as 'doing experiments'. It cannot help students learn what an experiment really is when the word 'experiment' has two quite distinct meanings in the science classroom:

  • experiment (technical) = an empirical test of a hypothesis involving the careful control of variables, and observation of the effect on a specified (hypothesised) dependent variable of changing the variable specified as the independent variable
  • experiment (casual) = absolutely any practical activity carried out with laboratory equipment

We might describe this second meaning as an alternative conception of 'experiment', a way of understanding that is inconsistent with the scientific meaning. (Just as there are common alternative conceptions of other 'nature of science' concepts such as 'theory').

I would imagine Winberg and Berg were well aware of what an experiment is, although their casual use of language might suggest a lack of rigour in thinking with the term. They refer to having "both control and experiment groups" in their studies, and refer to "the experimental chronology" of their research design. So, they certainly seem to think of their work as a kind of experiment.

Experimental design

In a true experiment, a sample is randomly drawn from a population of interest (say, first year undergraduate chemistry students; or, perhaps, first year undergraduate chemistry students attending Swedish Universities, or… 3) and assigned randomly to the conditions being compared. Providing a genuine form of random assignment is used, then inferential statistical tests can guide on whether any differences found between groups at the end of an experiment should be considered statistically significant. 4

"Statistics can only indicate how likely a measured result would occur by chance (as randomisation of units of analysis to different treatments can only make uneven group composition unlikely, not impossible)…Randomisation cannot ensure equivalence between groups (even if it makes any imbalance just as likely to advantage either condition)"

Taber, 2019, p.73

Inferential statistics can be used to test for statistical significance in experiments – as long as the 'units of analysis' (e.g., students) are randomly assigned to the experimental and control conditions.
(Figure from Taber, 2019)

That is, if there are differences that the stats. tests suggest are very unlikely to have happened by chance, then they are very unlikely to be due to an initial difference between the groups in the two conditions – as long as the groups were the result of random assignment. But that is a very important proviso.

There are two aspects to this need for randomisation:

  • to be able to suggest any differences found reflect the effects of the intervention, then there should be random assignment to the two (or more) conditions
  • to be able to suggest the results reflect what would probably be found in a wider population, the sample should be randomly selected from the population of interest 3

Studies in education seldom meet the requirements for being true experiments
(Figure from Taber, 2019)
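
To make the proviso concrete, here is a small simulation sketch (my own illustration in Python; all numbers are invented) showing what goes wrong: even with no treatment effect at all, a t-test will routinely report 'significant' differences if the groups were formed systematically rather than randomly.

```python
# Simulation: two identical 'treatments' (no real effect), so any
# significant result is a false positive. With random assignment the
# false-positive rate sits near the nominal 5%; with a systematic split
# (e.g. by prior attainment) the test is fooled almost every time.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def false_positive_rate(random_assignment, trials=2000, n=30):
    hits = 0
    for _ in range(trials):
        prior = rng.normal(50, 10, 2 * n)          # prior attainment (a lurking difference)
        outcome = prior + rng.normal(0, 5, 2 * n)  # outcome tracks prior; NO treatment effect
        if random_assignment:
            order = rng.permutation(2 * n)         # random assignment to conditions
        else:
            order = np.argsort(prior)              # systematic split, e.g. by attainment or timetable
        a, b = outcome[order[:n]], outcome[order[n:]]
        if ttest_ind(a, b).pvalue < 0.05:
            hits += 1
    return hits / trials

print("random assignment:    ", false_positive_rate(True))   # close to 0.05, as the test assumes
print("systematic assignment:", false_positive_rate(False))  # far higher: spurious 'significance'
```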

In education, it is not always possible to use random assignment, so true experiments are then not possible. However, so-called 'quasi-experiments' may be possible where differences between the outcomes in different conditions may be understood as informative, as long as there is good reason to believe that even without random assignment, the groups assigned to the different conditions are equivalent.

In this specific research, that would mean having good reason to believe that without the intervention (the prelab):

  • students in both groups would have asked overall equivalent (in terms of the analysis undertaken in this study) questions in the lab.;
  • students in both groups would have been judged as displaying overall equivalent subject knowledge.

Often in research where a true experiment is not possible some kind of pre-testing is used to make a case for equivalence between groups.

Two control groups that were out of control

In Winberg and Berg's research there were two studies where comparisons were made between 'experimental' and 'control' conditions

Study   | Experimental                                                                             | Control
Study 1 | n=78: first-year students, following completion of their first chemistry course in 2001 | n=97: students who had been interviewed by the researchers during the same course in the previous year
Study 2 | n=21 (of 58 in cohort)                                                                   | n=37 (of 58 in same cohort)

In the first study, a comparison was made between the cohort where the innovation was introduced and a cohort from the previous year. All other things being equal, it seems likely these two cohorts were fairly similar. But in education all things are seldom equal, so there is no assurance they were similar enough to be considered equivalent.

In the second study

"Students were divided into treatment (n = 21) and control (n = 37) groups. Distribution of students between the treatment and control groups was not controlled by the researchers".

Winberg & Berg, 2007

So, some factor(s) external to the researchers divided the cohort into two groups – and the reader is told nothing about the basis for this, nor even whether the two groups were assigned to the treatments randomly. 5 The authors report that the cohort "comprised prospective molecular biologists (31%), biologists (51%), geologists (7%), and students who did not follow any specific program (11%)", and so it is possible the division into two unevenly sized groups was based on timetabling constraints, with students attending chemistry lab sessions according to their availability based on specialism. But that is just a guess. (It is usually better when the reader of a research report is not left to speculate about procedures and constraints.)

What is important for a reader to note is that in these studies:

  • the researchers were not able to assign learners to conditions randomly;
  • nor were the researchers able to offer any evidence of equivalence between groups (such as near identical pre-test scores);
  • so, the requirements for inferring significance from statistical tests were not met;
  • so, claims in the paper about finding statistically significant differences between conditions cannot therefore be justified given the research design;
  • and therefore the conclusions presented in the paper are strictly not valid.

If students are not randomly assigned to conditions, then any statistically unlikely difference found at the end of an experiment cannot be assumed to be likely to be due to intervention, rather than some systematic initial difference between the groups.
(Figure adapted from Taber, 2019)


This is a shame, because this is in many ways an interesting paper, and much thought and care seems to have been taken about the collection and analysis of meaningful data. Yet drawing conclusions from statistical tests comparing groups that might never have been similar in the first place is like finding that careful use of a vernier scale shows that, after a period of watering plant A, plant A is taller than plant B – having been very careful to make sure plant A was watered regularly with carefully controlled volumes, while plant B was not watered at all – when you did not think to check how tall the two plants were before you started watering plant A.

In such a scenario we might be tempted to assume plant A has actually become taller because it had been watered; but that is just applying what we had conjectured should be the case, and we would be mistaking our expectations for experimental evidence.

Work cited:

Notes:

1 The part of the brain where we can consciously manipulate ideas is called the working memory (WM). Research suggests that WM has a very limited capacity in the sense that people can only hold in mind a very small number of different things at once. (These 'things' however are somewhat subjective – a complex idea that is treated as a single 'thing' in the WM of an expert can overload a novice.) This limit to WM is considered to be one of the most substantial constraints on effective classroom learning. This is also, then, one of the key research findings informing the design of effective teaching.

Read about working memory

Read about key ideas for teaching in accordance with learning theory

How fat is your memory? – read about a chemical analogy for working memory


2 The organisation has seemingly spotted that the USA is only one part of the world, and now describes itself as a global organisation for improving science education through research.


3 There is no reason why an experiment cannot be carried out on a very specific population, such as first year undergraduate chemistry students attending a specific Swedish University such as, say, Umeå University. However, if researchers intend their study to have results generalisable beyond their specific research contexts (say, to first year undergraduate chemistry students attending any Swedish University) then it is important to have a representative sample of that population.

Read about populations of interest in research

Read about generalisation from research studies


4 It might be assumed that scientists and researchers know what is meant by random, and how to undertake random assignment. Sadly, the literature suggests that in practice the term 'randomly' is sometimes used in research reports to mean something like 'arbitrarily' (Taber, 2013), which falls short of being random.

Read about randomisation in research


5 Arguably, even if the two groups were assigned randomly, there is only one 'unit of analysis' in each condition, as they were assigned as groups. That is, for statistical purposes, the two groups have size n=1 and n=1, which would not allow statistical significance to be found: e.g., see 'Quasi-experiment or crazy experiment?'

Quasi-experiment or crazy experiment?

Trustworthy research findings are conditional on getting a lot of things right


Keith S. Taber


A good many experimental educational research studies that compare treatments across two classes or two schools are subject to potentially confounding variables that invalidate study findings and make any consequent conclusions and recommendations untrustworthy.

I was looking for research into the effectiveness of P-O-E (predict-observe-explain) pedagogy, a teaching technique that is believed to help challenge learners' alternative conceptions and support conceptual change.

Read about the predict-observe-explain approach



One of the papers I came across reported identifying, and then using P-O-E to respond to, students' alternative conceptions. The authors reported that

The pre-test revealed a number of misconceptions held by learners in both groups: learners believed that salts 'disappear' when dissolved in water (37% of the responses in the 80% from the pre-test) and that salt 'melts' when dissolved in water (27% of the responses in the 80% from the pre-test).

Kibirige, Osodo & Tlala, 2014, p.302

The references to "in the 80%" did not seem to be explained anywhere. Perhaps only 80% of students responded to the open-ended questions included as part of the assessment instrument (discussed below), so the authors gave the incidence as a proportion of those responding? Ideally, research reports are explicit about such matters, avoiding the need for readers to speculate.

The authors concluded from their research that

"This study revealed that the use of POE strategy has a positive effect on learners' misconceptions about dissolved salts. As a result of this strategy, learners were able to overcome their initial misconceptions and improved on their performance….The implication of these results is that science educators, curriculum developers, and textbook writers should work together to include elements of POE in the curriculum as a model for conceptual change in teaching science in schools."

Kibirige, Osodo & Tlala, 2014, p.305

This seemed pretty positive. As P-O-E is an approach which is consistent with 'constructivist' thinking that recognises the importance of engaging with learners' existing thinking I am probably biased towards accepting such conclusions. I would expect techniques such as P-O-E, when applied carefully in suitable curriculum contexts, to be effective.

Read about constructivist pedagogy

Yet I also have a background in teaching research methods and in acting as a journal editor and reviewer – so I am not going to trust the conclusion of a research study without having a look at the research design.


All research findings are subject to caveats and provisos: good practice in research writing is for the authors to discuss them – but often they are left unmentioned for readers to spot. (Read about drawing conclusions from studies)


Kibirige and colleagues describe their study as a quasi-experiment.

Experimental research into teaching approaches

If one wants to see if a teaching approach is effective, then it seems obvious that one needs to do an experiment. If we can experimentally compare different teaching approaches we can find out which are more effective.

An experiment allows us to make a fair comparison by 'control of variables'.

Read about experimental research

Put very simply, the approach might be:

  • Identify a representative sample of an identified population
  • Randomly assign learners in the sample to either an experimental condition or a control condition
  • Set up two conditions that are alike in all relevant ways, apart from the independent variable of interest
  • After the treatments, apply a valid instrument to measure learning outcomes
  • Use inferential statistics to see if any difference in outcomes across the two conditions reaches statistical significance
  • If it does, conclude that
    • the effect is likely to be due to the difference in treatments
    • and will apply, on average, to the population that has been sampled

Now, I expect anyone reading this who has worked in schools, and certainly anyone with experience in social research (such as research into teaching and learning), will immediately recognise that in practice it is very difficult to actually set up an experiment into teaching which fits this description.

Nearly always (if indeed not always!) experiments to test teaching approaches fall short of this ideal model to some extent. This does not mean such studies cannot be useful – especially where there are many of them, with compensatory strengths and weaknesses, offering similar findings (Taber, 2019a) – but one needs to ask how closely published studies fit the ideal of a good experiment. Work in high quality journals is often expected to offer readers guidance on this, but readers should check for themselves to see if they find a study convincing.

So, how convincing do I find this study by Kibirige and colleagues?

The sample and the population

If one wishes a study to be informative about a population (say, chemistry teachers in the UK; or 11-12 year-olds in state schools in Western Australia; or pharmacy undergraduates in the EU; or whatever) then it is important to either include the full population in the study (which is usually only feasible when the population is a very limited one, such as graduate students in a single university department) or to ensure the sample is representative.

Read about populations of interest in research

Read about sampling a population

Kibirige and colleagues refer to their participants as a sample

"The sample consisted of 93 Grade 10 Physical Sciences learners from two neighbouring schools (coded as A and B) in a rural setting in Moutse West circuit in Limpopo Province, South Africa. The ages of the learners ranged from 16 to 20 years…The learners were purposively sampled."

Kibirige, Osodo & Tlala, 2014, p.302

Purposive sampling means selecting participants according to some specific criteria, rather than sampling a population randomly. It is not entirely clear precisely what the authors mean by this here – which characteristics they selected for. Also, there is no statement of the population being sampled – so the reader is left to guess what population the sample is a sample of. Perhaps "Grade 10 Physical Sciences" students – but, if so, universally, or in South Africa, or just within Limpopo Province, or indeed just the Moutse West circuit? Strictly the notion of a sample is meaningless without reference to the population being sampled.

A quasi-experiment

A key notion in experimental research is the unit of analysis

"An experiment may, for example, be comparing outcomes between different learners, different classes, different year groups, or different schools…It is important at the outset of an experimental study to clarify what the unit of analysis is, and this should be explicit in research reports so that readers are aware what is being compared."

Taber, 2019a, p.72

In a true experiment the 'units of analysis' (which in different studies may be learners, teachers, classes, schools, exam. papers, lessons, textbook chapters, etc.) are randomly assigned to conditions. Random assignment allows inferential statistics to be used to directly compare measures made in the different conditions to determine whether outcomes are statistically significant. Random assignment is a way of making systematic differences between groups unlikely (and so allows the use of inferential statistics to draw meaningful conclusions).

Random assignment is sometimes possible in educational research, but often researchers are only able to work with existing groupings.

Kibirige, Osodo & Tlala describe their approach as using a quasi-experimental design, as they could not assign learners to groups, but only compare between learners in two schools. This is important, as it means that the 'units of analysis' are not the individual learners, but the groups: in this study one group of students in one school (n=1) is being compared with another group of students in a different school (n=1).

The authors do not make it clear whether they assigned the schools to the two teaching conditions randomly – or whether some other criterion was used. For example, if they chose school A to be the experimental school because they knew the chemistry teacher in the school was highly skilled, always looking to improve her teaching, and open to new approaches; whereas the chemistry teacher in school B had a reputation for wishing to avoid doing more than was needed to be judged competent – that would immediately invalidate the study.

Compensating for not using random assignment

When it is not possible to randomly assign learners to treatments, researchers can (a) use statistics that take into account measurements on each group made before, as well as after, the treatments (that is, a pre-test – post-test design); (b) offer evidence to persuade readers that the groups are equivalent before the experiment. Kibirige, Osodo and Tlala seek to use both of these steps.

Do the groups start as equivalent?

Kibirige, Osodo and Tlala present evidence from the pre-test to suggest that the learners in the two groups are starting at about the same level. In practice, pre-tests seldom lead to identical outcomes for different groups. It is therefore common to use inferential statistics to test for whether there is a statistically significant difference between pre-test scores in the groups. That could be reasonable, if there was an agreed criterion for deciding just how close scores should be to be seen as equivalent. In practice, many researchers only check that the differences do not reach statistical significance at the level of probability <0.05: that is, they look to see if there are strong differences, and, if not, declare this is (or implicitly treat this as) equivalence!

This is clearly an inadequate measure of equivalence as it will only filter out cases where there is a difference so large it is found to be very unlikely to be a chance effect.


If we want to make sure groups start as 'equivalent', we cannot simply look to exclude the most blatant differences. (Original image by mcmurryjulie from Pixabay)

See 'Testing for initial equivalence'
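
For readers who like to see the logic in worked form, below is a minimal sketch (my own illustration in Python, with invented data and an assumed equivalence margin, delta) of a 'two one-sided tests' (TOST) procedure – the kind of analysis that actually asks whether two groups are equivalent, rather than merely failing to find a difference.

```python
# TOST equivalence sketch: a SMALL p-value here supports the claim that
# the true difference in means lies within +/- delta. Contrast this with
# a conventional difference test, where a LARGE p-value is (wrongly)
# read as evidence of equivalence.
import numpy as np
from scipy import stats

def tost_pvalue(a, b, delta):
    """Two one-sided t-tests of H0: |mean(a) - mean(b)| >= delta."""
    n1, n2 = len(a), len(b)
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(np.var(a, ddof=1) / n1 + np.var(b, ddof=1) / n2)
    df = n1 + n2 - 2                           # pooled df (Welch correction omitted for brevity)
    p1 = stats.t.sf((diff + delta) / se, df)   # test against diff <= -delta
    p2 = stats.t.cdf((diff - delta) / se, df)  # test against diff >= +delta
    return max(p1, p2)                         # equivalence shown only if this is small

rng = np.random.default_rng(1)
g1 = rng.normal(50, 15, 10)   # two small groups: low power to detect anything
g2 = rng.normal(58, 15, 11)
print("difference-test p:", stats.ttest_ind(g1, g2).pvalue)  # may well exceed 0.05...
print("TOST p (delta=5): ", tost_pvalue(g1, g2, 5.0))        # ...yet equivalence is not shown either
```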


We can see this in Kibirige and colleagues' study, where the researchers list mean scores and standard deviations for each question on the pre-test. They report that:

"The results (Table 1) reveal that there was no significant difference between the pre-test achievement scores of the CG [control group] and EG [experimental group] for questions (Appendix 2). The p value for these questions was greater than 0.05."

Kibirige, Osodo & Tlala, 2014, p.302

Now this paper is published "licensed under Creative Commons Attribution 3.0 License" which means I am free to copy from it here.



According to the results table, several of the items (1.2, 1.4, 2.6) did lead to statistically significantly different response patterns in the two groups.

Most of these questions (1.1-1.4; 2.1-2.8; discussed below) are objective questions, so although no marking scheme was included in the paper, it seems they were marked as correct or incorrect.

So, let's take as an example question 2.5 where readers are told that there was no statistically significant difference in the responses of the two groups. The mean score in the control group was 0.41, and in the experimental group was 0.27. Now, the paper reports that:

"Forty nine (49) learners (31 males and 18 females) were from school A and acted as the experimental group (EG) whereas the control group (CG) consisted of 44 learners (18 males and 26 females) from school B."

Kibirige, Osodo & Tlala, 2014, p.302

So, according to my maths,


                       | Correct responses | Incorrect responses
School A (49 students) | (0.27 ➾) 13       | 36
School B (44 students) | (0.41 ➾) 18       | 26

"The achievement of the EG and CG from pre-test results were not significantly different which suggest that the two groups had similar understanding of concepts" (p.305).

Pre-test results for an item with no statistically significant difference between groups (offered as evidence of 'similar' levels of initial understanding in the two groups)

While, technically, there may have been no statistically significant difference here, I think inspection is sufficient to suggest this does not mean the two groups were initially equivalent in terms of performance on this item.


Data that is normally distributed falls on a 'bell-shaped' curve

(Image by mcmurryjulie from Pixabay)


Inspection of this graphic also highlights something else. Student's t-test (used by the authors to produce the results in their table 1) is a parametric test. That means it can only be used when the data fit certain criteria: the data sample should be randomly selected (not true here) and normally distributed. A normal distribution means data is distributed in a bell-shaped Gaussian curve (as in the image in the blue circle above). If Kibirige, Osodo & Tlala were applying the t-test to data distributed as in my graphic above (a binary distribution, where answers were either right or wrong), then the test was invalid.
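
As a quick check (my own reanalysis sketch, not anything in the paper), one can put the reconstructed counts for item 2.5 into a test that is appropriate for right/wrong data – a contingency-table test such as Fisher's exact test – rather than a t-test:

```python
# Item 2.5 counts reconstructed above from the reported means
# (0.27 x 49 ~ 13 correct; 0.41 x 44 ~ 18 correct).
from scipy.stats import fisher_exact

#         correct, incorrect
table = [[13, 36],   # School A (EG), 49 students
         [18, 26]]   # School B (CG), 44 students
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")
# Even a non-significant p here would not demonstrate equivalence:
# it merely fails to rule out a difference (about 27% vs 41% correct).
```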

So, to summarise, the authors suggest there "was no significant difference between the pre-test achievement scores of the CG and EG for questions", although sometimes there was (according to their table); and they used the wrong test to check for this; and in any case lack of statistical significance is not a sufficient test for equivalence.

I should note that the journal does claim to use peer review to evaluate submissions to see if they are ready for publication!

Comparing learning gains between the two groups

At one level equivalence might not be so important, as the authors used an ANCOVA (Analysis of Covariance) test, which tests for differences at post-test while taking the pre-test into account. Yet this test also has assumptions that need to be tested for and met, but here they seem to have just been assumed.
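
For concreteness, this is roughly the kind of model involved (a sketch with invented data, not the authors' code; the variable names are my own): post-test score regressed on group membership with pre-test score as a covariate. Note that it still treats each student as an independent unit of analysis – exactly the assumption questioned next.

```python
# ANCOVA-style model via ordinary least squares: the C(group)
# coefficient estimates the 'treatment effect' adjusted for pre-test.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "pre":   [12, 15, 9, 14, 11, 13, 10, 16],
    "post":  [30, 35, 22, 33, 24, 31, 23, 36],
    "group": ["EG", "EG", "EG", "EG", "CG", "CG", "CG", "CG"],
})
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.summary())
```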

However, to return to an even more substantive point I made earlier: as the learners were not randomly assigned to the two different conditions/treatments, what should be compared are the two school-based groups (i.e., the unit of analysis should be the school group), but that (i.e., a sample of 1 class, rather than 40+ learners, in each condition) would not facilitate using inferential statistics to make a comparison. So, although the authors conclude

"that the achievement of the EG [taking n=49] after treatment (mean 34. 07 ± 15. 12 SD) was higher than the CG [taking n =44] (mean 20. 87 ± 12. 31 SD). These means were significantly different"

Kibirige, Osodo & Tlala, 2014, p.303

the statistics are testing the outcomes as if 49 units independently experienced one teaching approach and 44 independently experienced another. Now, I do not claim to be a statistics expert, and I am aware that most researchers only have a limited appreciation of how and why stats. tests work. For most readers, then, a more convincing argument may be made by focussing on the control of variables.

Controlling variables in educational experiments

The ability to control variables is a key feature of laboratory science, and is critical to experimental tests. Control of variables, even identification of relevant variables, is much more challenging outside of a laboratory in social contexts – such as schools.

In the case of Kibirige, Osodo & Tlala's study, we can set out the overall experimental design as follows


Independent variable   | Teaching approach:
                       |   – predict-observe-explain (experimental)
                       |   – lectures (comparison condition)
Dependent variable     | Learning gains
Controlled variable(s) | Anything other than teaching approach which might make a difference to student learning

Variables in Kibirige, Osodo & Tlala's study

The researchers set up the two teaching conditions and measure learning gains, and need to make sure any other factors which might have an effect on learning outcomes – so-called confounding variables – are controlled so that they are the same in both conditions.

Read about confounding variables in research

Of course, we cannot be sure what might act as a confounding variable, so in practice we may miss something which we do not recognise is having an effect. Here are some possibilities based on my own (now dimly recalled) experience of teaching in school.

The room may make a difference. Some rooms are

  • spacious,
  • airy,
  • well illuminated,
  • well equipped,
  • away from noisy distractions
  • arranged so everyone can see the front, and the teacher can easily move around the room

Some rooms have

  • comfortable seating,
  • a well positioned board,
  • good acoustics

Others, not so.

The timetable might make a difference. Anyone who has ever taught the same class of students at different times in the week might (will?) have noticed that a Tuesday morning lesson and a Friday afternoon lesson are not always equally productive.

Class size may make a difference (here 49 versus 44).

Could gender composition make a difference? Perhaps it was just me, but I seem to recall that classes of mainly female adolescents had a different nature than classes of mainly male adolescents. (And perhaps the way I experienced those classes would have been different if I had been a female teacher?) Kibirige, Osodo and Tlala report the sex of the students, but assuming that can be taken as a proxy for gender, the gender ratios were somewhat different in the two classes.


The gender make up of the classes was quite different: might that influence learning?

School differences

A potentially major confounding variable is school. In this study the researchers report that the schools were "neighbouring" and that

Having been drawn from the same geographical set up, the learners were of the same socio-cultural practices.

Kibirige, Osodo & Tlala, 2014, p.302

That clearly makes more sense than choosing two schools from different places with different demographics. But anyone who has worked in schools will know that two neighbouring schools serving much the same community can still be very different. Different ethos, different norms, and often different levels of outcome. Schools A and B may be very similar (but the reader has no way to know), but when comparing between groups in different schools it is clear that school could be a key factor in group outcome.

The teacher effect

Similar points can be made about teachers – they are all different! Does ANY teacher really believe that one can swap one teacher for another without making a difference? Kibirige, Osodo and Tlala do not tell readers anything about the teachers, but as students were taught in their own schools the default assumption must be that they were taught by their assigned class teachers.

Teachers vary in terms of

  • skill,
  • experience,
  • confidence,
  • enthusiasm,
  • subject knowledge,
  • empathy levels,
  • insight into their students,
  • rapport with classes,
  • beliefs about teaching and learning,
  • teaching style,
  • disciplinary approach
  • expectations of students

The same teacher may perform at different levels with different classes (preferring to work with different grade levels, or simply getting on/not getting on with particular classes). Teachers may have uneven performance across topics. Teachers differentially engage with and excel in different teaching approaches. (Even if the same teacher had taught both groups we could not assume they were equally skilful in both teaching conditions.)

The teacher variable is likely to be a major difference between the groups.

Meta-effects

Another confounding factor is the very fact of the research itself. Students may welcome a different approach because it is novel and a change from the usual diet (or alternatively they may be nervous about things being done differently) – but such 'novelty' effects would disappear once the new way of doing things became established as normal. In which case, it would be an effect of the research itself and not of what is being researched.

Perhaps even more powerful are expectancy effects. If researchers expect an innovation to improve matters, then these expectations get communicated to those involved in the research and can themselves have an effect. Expectancy effects are so well demonstrated that in medical research double-blind protocols are used so that neither patients nor the health professionals they directly engage with in the study know who is getting which treatment.

Read about expectancy effects in research

So, we might revise the table above:


Independent variable: Teaching approach –
  • predict-observe-explain (experimental condition)
  • lectures (comparison condition)

Dependent variable: Learning gains

Potentially conflating variables:
  • School effect
  • Teacher effect
  • Class size
  • Gender composition of teaching groups
  • Relative novelty of the two teaching approaches

Variables in Kibirige, Osodo & Tlala's study

Now, of course, these problems are not unique to this particular study. The only way to respond to teacher and school effects of this kind is to do large scale studies, and randomly assign a large enough number of schools and teachers to the different conditions so that it becomes very unlikely there will be systematic differences between treatment groups.
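To make the point concrete, here is a minimal illustrative sketch (my own, not taken from any of the studies discussed): it simulates a two-class comparison, with the class sizes from this study, in which the teaching approach has no effect at all, but each class comes with its own unknown teacher/school effect.

```python
import random
from statistics import mean

random.seed(1)

def simulate_two_class_study(n_a=49, n_b=44):
    """Simulate post-test scores when the teaching approach has NO effect,
    but each class carries its own (unknown) teacher/school effect."""
    effect_a = random.gauss(0, 5)  # hypothetical teacher/school effect, class A
    effect_b = random.gauss(0, 5)  # hypothetical teacher/school effect, class B
    class_a = [random.gauss(50 + effect_a, 10) for _ in range(n_a)]
    class_b = [random.gauss(50 + effect_b, 10) for _ in range(n_b)]
    return mean(class_a) - mean(class_b)

# The apparent 'treatment effect' varies widely across replications,
# even though the true effect is zero in every single run.
gaps = [simulate_two_class_study() for _ in range(1000)]
print(f"Largest spurious between-class gap: {max(abs(g) for g in gaps):.1f} marks")
```

With class-level effects of this (entirely invented) size, differences of several marks between the two conditions arise by chance alone – which is exactly why only designs that randomly assign many teachers and schools to conditions can separate a treatment effect from a teacher effect.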

A good many experimental educational research studies that compare treatments across two classes or two schools are subject to potentially conflating variables that invalidate study findings and make any consequent conclusions and recommendations untrustworthy (Taber, 2019a). Strangely, often this does not seem to preclude publication in research journals. 1

Advice on controls in scientific investigations:

I can probably do no better than to share some advice given to both researchers, and readers of research papers, in an immunology textbook from 1910:

"I cannot impress upon you strongly enough never to operate without the necessary controls. You will thus protect yourself against grave errors and faulty diagnoses, to which even the most competent investigator may be liable if he [or she] fails to carry out adequate controls. This applies above all when you perform independent scientific investigations or seek to assess them. Work done without the controls necessary to eliminate all possible errors, even unlikely ones, permits no scientific conclusions.

I have made it a rule, and would advise you to do the same, to look at the controls listed before you read any new scientific papers… If the controls are inadequate, the value of the work will be very poor, irrespective of its substance, because none of the data, although they may be correct, are necessarily so."

Julius Citron

The comparison condition

It seems clear that in this study there is no strict 'control' of variables, and the 'control' group is better considered just a comparison group. The authors tell us that:

"the control group (CG) taught using traditional methods…

the CG used the traditional lecture method"

Kibirige, Osodo & Tlala, 2014, pp.300, 302

This is not further explained, but if this really was teaching by 'lecturing' then that is not a suitable approach for teaching school age learners.

This raises two issues.

There is a lot of evidence that a range of active learning approaches (discussion work, laboratory work, various kinds of group work) engages and motivates students more than whole lessons spent listening to a teacher. Therefore any approach which basically involves a mixture of students doing things, discussing things, engaging with manipulatives and resources as well as listening to a teacher, tends to be superior to just being lectured. Good science teaching normally involves lessons sequenced into a series of connected episodes involving different types of student activity (Taber, 2019b). Teacher presentations of the target scientific account are very important, but tend to be effective when embedded in a dialogic approach that allows students to explore their own thinking and takes into account their starting points.

So, comparing P-O-E with lectures (if they really were lectures) may not tell researchers much about P-O-E specifically, as a teaching approach. A better test would compare P-O-E with some other approach known to be engaging.

"Many published studies argue that the innovation being tested has the potential to be more effective than current standard teaching practice, and seek to demonstrate this by comparing an innovative treatment with existing practice that is not seen as especially effective. This seems logical where the likely effectiveness of the innovation being tested is genuinely uncertain, and the 'standard' provision is the only available comparison. However, often these studies are carried out in contexts where the advantages of a range of innovative approaches have already been well demonstrated, in which case it would be more informative to test the innovation that is the focus of the study against some other approach already shown to be effective."

Taber, 2019a, p.93

The second issue is more ethical than methodological. Sometimes in published studies (and I am not claiming I know this happened here, as the paper says so little about the comparison condition) researchers seem to deliberately set up a comparison condition they have good reason to expect is not effective: such as asking a teacher to lecture and not include practical work or discussion work or use of digital learning technologies and so forth. Potentially the researchers are asking the teacher of the 'control' group to teach less effectively than normally to bias the experiment towards their preferred outcome (Taber, 2019a).

This is not only a failure to do good science, but also an abuse of those learners being deliberately subjected to poor teaching. Perhaps in this study the class in School B was habitually taught by being lectured at, so the comparison condition was just what would have occurred in the absence of the research, but this is always a worry when studies report comparison conditions that seem to deliberately disadvantage students. (This paper does not seem to report anything about obtaining voluntary informed consent from participants, nor indeed about how access to the schools was negotiated.)

"In most educational research experiments of the type discussed in this article, potential harm is likely to be limited to subjecting students (and teachers) to conditions where teaching may be less effective, and perhaps demotivating…It can also potentially occur in control conditions if students are subjected to teaching inputs of low effectiveness when better alternatives were available. This may be judged only a modest level of harm, but – given that the whole purpose of experiments to test teaching innovations is to facilitate improvements in teaching effectiveness – this possibility should be taken seriously."

Taber, 2019a, p.94

Validity of measurements

Even leaving aside all the concerns expressed above, the results of a study of this kind depend upon valid measurements. Assessment items must test what they claim to test, and their analysis should be subject to quality control (and preferably blind to which condition a script being analysed derives from). Kibirige, Osodo and Tlala append the test they used in the study (Appendix 2, pp.309-310), which is very helpful in allowing readers to judge at least its face validity. Unfortunately, they do not include a mark/analysis scheme to show what they considered responses worthy of credit.

"The [Achievement Test] consisted of three questions. Question one consisted of five statements which learners had to classify as either true or false. Question two consisted of nine [sic, actually eight] multiple questions which were used as a diagnostic tool in the design of the teaching and learning materials in addressing misconceptions based on prior knowledge. Question three had two open-ended questions to reveal learners' views on how salts dissolve in water (Appendix 1 [sic, 2])."

Kibirige, Osodo & Tlala, 2014, p.302

"Question one consisted of five statements which learners had to classify as either true or false."

Question 1 is fairly straightforward.

1.2: Strictly all salts do dissolve in water to some extent. I expect that students were taught that some salts are insoluble. Often in teaching we start with simple dichotomous models (metal-non-metal; ionic-covalent; soluble-insoluble; reversible-irreversible) and then develop these to more continuous accounts that recognise difference of degree. It is possible here then that a student who had learnt that all salts are soluble to some extent might have been disadvantaged by giving the 'wrong' ('True') response…

…although, actually, there is perhaps no excuse for answering 'True' ('All salts can dissolve in water') here as a later question begins "3.2. Some salts does [sic] not dissolve in water. In your own view what happens when a salt do [sic] not dissolve in water".

Despite the test actually telling students the answer to this item, it seems only 55% of the experimental group, and 23% of the control group, obtained the correct answer on the post-test – precisely the same proportions as on the pre-test!



1.4: Seems to be 'False' as the ions exist in the salt and are not formed when it goes into solution. However, I am not sure if that nuance of wording is intended in the question.

Question 2 gets more interesting.


"Question two consisted of nine multiple questions" (seven shown here)

I immediately got stuck on question 2.2 which asked which formula (singular, not 'formula/formulae', note) represented a salt. Surely, they are all salts?

I had the same problem on 2.4 which seemed to offer three salts that could be formed by reacting acid with base. Were students allowed to give multiple responses? Did they have to give all the correct options to score?

Again, 2.5 offered three salts which could all be made by direct reaction of 'some substances'. (As a student I might have answered A assuming the teacher meant to ask about direct combination of the elements?)

At least in 2.6 there only seemed to be two correct responses to choose between.

Any student unsure of the correct answer in 2.7 might have taken guidance from the charges as shown in the equation given in question 2.8 (although indicated as 2.9).

How I wished they had provided the mark scheme.



The final question in this section asked students to select one of three diagrams to show what happens when a 'mixture' of H2O and NaCl in a closed container 'react'. (In chemistry, we do not usually consider salt dissolving as a reaction.)

Diagram B seemed to show ion pairs in solution (but why the different form of representation?). Option C did not look convincing, as the chloride ions had altogether vanished from the scene and sodium seemed to have formed multiple bonds with oxygen and hydrogens.

So, by a process of elimination, the answer is surely A.

  • But components seem to be labelled Na and Cl (not as ions).
  • And the image does not seem to represent a solution as there is much too much space between the species present.
  • And in salt solution there are many water molecules between solvated ions – missing here.
  • And the figure seems to show two water molecules have broken up, not to give hydrogen and hydroxide ions, but lone oxygen (atoms, ions?)
  • And why is the chlorine shown to be so much larger in solution than it was in the salt? (If this is meant to be an atom, it should be smaller than the ion, not larger. The real mystery is why the chloride ions are shown so much smaller than the smaller sodium ions before solvation occurs, when chloride ions have about double the radii of sodium ions.)

So diagram A is incredible, but still not quite as crazy an option as B and C.

This is all despite

"For face validity, three Physical Sciences experts (two Physical Sciences educators and one researcher) examined the instruments with specific reference to Mpofu's (2006) criteria: suitability of the language used to the targeted group; structure and clarity of the questions; and checked if the content was relevant to what would be measured. For reliability, the instruments were piloted over a period of two weeks. Grade 10 learners of a school which was not part of the sample was used. Any questions that were not clear were changed to reduce ambiguity."

Kibirige, Osodo & Tlala, 2014, p.302

One wonders what the less clear, more ambiguous, versions of the test items were.

Reducing 'misconceptions'

The final question was (or, perhaps better, questions were) open-ended.



I assume (again, it would be good for authors of research reports to make such things explicit) these were the questions that led to claims about the identified alternative conceptions at pre-test.

"The pre-test revealed a number of misconceptions held by learners in both groups: learners believed that salts 'disappear' when dissolved in water (37% of the responses in the 80% from the pre-test) and that salt 'melts' when dissolved in water (27% of the responses in the 80% from the pre-test)."

Kibirige, Osodo & Tlala, 2014, p.302

As the first two (sets of) questions only admit objective scoring, it seems that this data can only have come from responses to Q3. This means that the authors cannot be sure how students are using terms. 'Melt' is often used in an everyday, metaphorical, sense of 'melting away'. This use of language should be addressed, but it may not (for at least some of these learners) be a conceptual error as much as poor use of terminology.

To say that salts disappear when they dissolve does not seem to me a misconception: they do. To disappear means to no longer be visible, and that's a fair description of the phenomenon of salt dissolving. The authors may assume that if learners use the term 'disappear' they mean the salt is no longer present, but literally they are only claiming it is not directly visible.

Unfortunately, the authors tell us nothing about how they analysed the data collected from their test, so the reader has no basis for knowing how they interpreted student responses to arrive at their findings. The authors do tell us, however, that:

"the intervention had a positive effect on the understanding of concepts dealing with dissolving of salts. This improved achievement was due to the impact of POE strategy which reduced learners' misconceptions regarding dissolving of salts"

Kibirige, Osodo & Tlala, 2014, p.305

Yet, oddly, they offer no specific basis for this claim – no figures to show the level at which "learners believed that salts 'disappear' when dissolved in water …and that salt 'melts' when dissolved in water" in either group at the post-test.


                               'disappear' misconception        'melt' misconception
pre-test: experimental group   not reported                     not reported
pre-test: comparison group     not reported                     not reported
pre-test: total                (0.37 x 0.8 x 93 =) 24.5 (!?)    (0.27 x 0.8 x 93 =) 20
post-test: experimental group  not reported                     not reported
post-test: comparison group    not reported                     not reported
post-test: total               not reported                     not reported

Data presented about the numbers of learners considered to hold specific misconceptions said to have been 'reduced' in the experimental condition

It seems the journal referees and the editor did not feel that important information was missing here that should have been added before publication.

In conclusion

Experiments require control of variables. Experiments require random assignment to conditions. Quasi-experiments, where random assignment is not possible, are inherently weaker studies than true experiments.

Control of variables in educational contexts is often almost impossible.

Studies that compare different teaching approaches using two different classes each taught by a different teacher (and perhaps not even in the same school) can never be considered fair comparisons able to offer generalisable conclusions about the relative merits of the approaches. Such 'experiments' have no value as research studies. 1

Such 'experiments' are like comparing the solubility of two salts by (a) dropping a solid lump of 10g of one salt into some cold water, and (b) stirring a finely powdered 35g sample of the other salt into hot propanol; and watching to see which seems to dissolve better.

Only large scale studies that encompass a wide range of different teachers/schools/classrooms in each condition are likely to produce results that are generalisable.

The use of inferential statistical tests is only worthwhile when the conditions for those statistical tests are met. Sometimes tests are said to be robust to modest deviations from such requirements as normality. But applying tests to data that do not come close to fitting the conditions of the test is pointless.
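For instance (an illustrative sketch with invented data, not anything from the paper under discussion): before reaching for an independent-samples t-test, one can at least check the normality assumption rather than applying the test blindly.

```python
import random
from scipy import stats

random.seed(3)

# hypothetical post-test scores for two classes (invented data)
class_a = [random.gauss(55, 10) for _ in range(49)]
class_b = [random.gauss(50, 10) for _ in range(44)]

# Shapiro-Wilk tests the null hypothesis that a sample is drawn
# from a normal distribution; a small p-value casts doubt on that.
for label, scores in (("A", class_a), ("B", class_b)):
    stat, p = stats.shapiro(scores)
    print(f"class {label}: Shapiro-Wilk p = {p:.2f}")

# Only if the test's conditions are credible is the t-test worth running.
t, p = stats.ttest_ind(class_a, class_b)
print(f"t = {t:.2f}, p = {p:.3f}")
```

(Even then, of course, a significant t-test cannot rescue a design in which teacher and school are conflated with treatment.)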

Any research is only as trustworthy as the validity of its measurements. If one does not trust the measuring instrument or the analysis of measurement data then one cannot trust the findings and conclusions.


The results of a research study depend on an extended chain of argumentation, where any broken link invalidates the whole chain. (From 'Critical reading of research')

So, although the website for the Mediterranean Journal of Social Sciences claims "All articles submitted …undergo to a rigorous double blinded peer review process", I think the peer reviewers for this article were either very generous, very ignorant, or simply very lazy. That may seem harsh, but peer review is meant to help authors improve submissions till they are worthy of appearing in the literature, and here peer review has failed, and the authors (and readers of the journal) have been let down by the reviewers and the editor who ultimately decided this study was publishable in this form.

If I asked a graduate student (or indeed an undergraduate student) to evaluate this paper, I would expect to see a response something along these sorts of lines:


Applying the 'Critical Reading of Empirical Studies Tool' to 'The effect of predict-observe-explain strategy on learners' misconceptions about dissolved salts'

I still think P-O-E is a very valuable part of the science teacher's repertoire – but this paper cannot contribute anything to support that view.

Work cited:

Note

1 A lot of these invalid experiments get submitted to research journals, scrutinised by editors and journal referees, and then get published without any acknowledgement of how they fall short of meeting the conditions for a valid experiment. (See, for example, the studies discussed in Taber 2019a.) It is as if the mystique of experiment is so great that even studies with invalid conclusions are considered worth publishing as long as the authors did an experiment.

A corny teaching analogy

Pop goes the comparison


Keith S. Taber


The order of corn popping is no more random than the roll of a dice.


I was pleased to read about a 'new' teaching analogy in the latest 'Education in Chemistry' (the Royal Society of Chemistry's education magazine) – well, at least it was new to me. It was an analogy that could be demonstrated easily in the school science lab, and, according to Richard Gill (@RGILL_Teach on Twitter), went down really well with his class.

Teaching analogies

Analogies are used in teaching and in science communication to help 'make the unfamiliar familiar', to show someone that something they do not (yet) know about is actually, in some sense at least, a bit like something they are already familiar with. In an analogy, there is a mapping between some aspect(s) of the structure of the target ideas and the structure of the familiar phenomenon or idea being offered as an analogue. Such teaching analogies can be useful to the extent that someone is indeed highly familiar with the 'analogue' (and more so than with the target knowledge being communicated); that there is a helpful mapping across between the analogue and the target; and that comparison is clearly explained (making clear which features of the analogue are relevant, and how).

Read about analogies in science


The analogy is discussed in the July 2022 edition of Education in Chemistry, and online.

Richard Gill suggests that 'Nuclear decay is a tough concept' to teach and learn, but after making some popcorn he realised that popping corn offered an analogy for radioactive decay that he could demonstrate in the classroom.

Richard Gill describes how

"I tell the students I'm going to heat up the oil; I'm going to give the kernels some energy, making them unstable and they're going to want to pop. I show them under the visualiser, then I ask, 'which kernel will pop first?' We have a little competition. Why do I do this? It links to nuclear decay being random. We know an unstable atom will decay, but we don't know which atom will decay or when it will decay, just like we don't know which kernel will pop when."

Gill, 2022

In the analogy, the corn (maize) kernels represent atoms or nuclei of an unstable isotope, and the popped corn the decay product, daughter atoms or nuclei. 1



Richard Gill homes in on a key feature of radioactive decay which may seem counter-intuitive to learners, but which is actually a pattern found in many different phenomena – exponential decay. The rate of radioactive decay falls (decays, confusingly) over time. Theoretically the [radioactive] decay rate follows a very smooth [exponential] decay curve. Theoretically, because of another key feature of radioactive decay that Gill highlights – its random nature!

It may seem that something which occurs at random will not lead to a regular pattern, but although in radioactivity the behaviour of an individual nucleus (in terms of when it might decay) cannot be predicted, when one deals with vast numbers of them in a macroscopic sample, a clear pattern emerges. Each different type of unstable atom has an associated half-life which tells us when half of a sample will have decayed. These half-lives can vary from fractions of a second to vast numbers of years, but are fixed for a particular nuclide.
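In symbols (the standard textbook result, not something taken from Gill's article), after a time t the number of undecayed nuclei remaining from an initial N₀ is

```latex
N(t) = N_0 \left(\frac{1}{2}\right)^{t/t_{1/2}}
```

so after one half-life half the sample remains, after two half-lives a quarter, and so on.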

Richard Gill notes that he can use the popping corn demonstration as background for teaching about half-life,

I usually follow this lesson with the idea of half-lives. The concept of half-lives now makes sense. Why are there fewer unpopped kernels over time? Because they're popping. Why do radioactive materials become less radioactive over time? Because they're decaying.

Gill, 2022

Perhaps he could even develop his demonstration to model the half-life of decay?

Modelling the popcorn decay curve

The Australian Earth Science Education blog suggests

"Popcorn can be used to model radioactive decay. It is a lot safer than using radioactive isotopes, as well as much tastier"

and offers instructions for a practical activity with a bag of corn and a microwave to collect data to plot a decay curve (see https://ausearthed.blogspot.com/2020/04/radioactive-popcorn.html). Although this seems a good idea, I suspect this specific activity (which involves repeatedly taking the popping corn in and out of the oven) might be too convoluted for learners just being introduced to the topic, but could be suitable for more advanced learners.

However, The Association of American State Geologists suggests an alternative approach that could be used in a class context where different groups of students put bags of popcorn into the microwave for different lengths of time to allow the plotting of a decay curve by collating class results (https://www.earthsciweek.org/classroom-activities/dating-popcorn).

Another variant is offered by the University of South Florida's 'Spreadsheets Across the Curriculum' (SSAC) project. SSAC developed an activity ("Radioactive Decay and Popping Popcorn – Understanding the Rate Law") that uses (yes, you guessed) a spreadsheet to model the decay of corn popping, as a way of teaching about radioactive decay!

This is more likely to give a good decay curve, but one cannot help feeling it loses some of the attraction of Richard Gill's approach with the smell, sound and 'jumping' of actual corn being heated! One might also wonder if there is any inherent pedagogic advantage to simulating popping corn as a model for radioactive decay – rather than just using the spreadsheet to model radioactive decay directly?
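For readers who would rather see the logic than a spreadsheet, here is a minimal Python sketch of the same idea (my own illustration, not the SSAC activity itself): each unpopped kernel gets the same fixed probability of popping in every time interval, and the number of survivors falls away exponentially.

```python
import random

random.seed(42)

def popcorn_decay(kernels=500, p_pop=0.1, intervals=30):
    """Each surviving kernel pops with fixed probability p_pop per interval -
    the analogue of a constant decay probability per nucleus per unit time."""
    survivors = kernels
    history = [survivors]
    for _ in range(intervals):
        # count how many of the current survivors pop in this interval
        popped = sum(1 for _ in range(survivors) if random.random() < p_pop)
        survivors -= popped
        history.append(survivors)
    return history

for t, n in enumerate(popcorn_decay()):
    print(f"interval {t:2d}: unpopped kernels = {n}")
```

Plotting the counts against time gives the familiar decay curve; with p_pop = 0.1 the 'half-life' works out at ln 2 / ln(1/0.9) ≈ 6.6 intervals.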

Feedback cycles

The reason the popping corn seems to show the same kind of decay as radioactivity, is because it can be represented with the same kind of feedback cycle.

This pattern is characteristic of simple systems where

  • a change is brought about by a driver
  • that change diminishes the driver

In radioactive decay, the level of activity is directly proportional to the number of unstable nuclei present (i.e., the number of nuclei that can potentially decay), but the very process of decay reduces this number (and so reduces the rate of decay).

So,

  • when there are many unstable nuclei
  • there will be much decay
  • quickly reducing the number of unstable nuclei
    • so reducing the rate of decay
    • so reducing the rate at which unstable nuclei decay
      • so reducing the rate at which decay is reducing

and so forth.
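That verbal cascade is simply the negative feedback relation written out in words; formally (standard physics, not from the source):

```latex
\frac{\mathrm{d}N}{\mathrm{d}t} = -\lambda N
\quad\Longrightarrow\quad
N(t) = N_0\, e^{-\lambda t}
```

The rate of change of N depends on N itself, and acts to reduce it – the defining signature of this kind of feedback cycle.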


Exponential decay is a characteristic of systems with a simple negative feedback cycle
(source: ASCEND project)

Recognising this general pattern was the focus of an 'enrichment' activity designed for upper secondary learners in the Gatsby SEP-supported ASCEND project, which presented learners with information about the feedback cycle in radioactive decay; and then had them set up and observe some quite different phenomena (Taber, 2011):

  • capacitor discharge
  • levelling of connected uneven water columns
  • hot water cooling

In each case the change driven by some 'driver' reduced the driver itself (so a temperature difference leads to heat transfer which reduces the temperature difference…).
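All three phenomena can be written in the same mathematical form as radioactive decay – these are standard first-approximation results rather than anything taken from the ASCEND materials:

```latex
\frac{\mathrm{d}Q}{\mathrm{d}t} = -\frac{Q}{RC}
\;\;\text{(capacitor discharge)},
\qquad
\frac{\mathrm{d}(\Delta h)}{\mathrm{d}t} = -k\,\Delta h
\;\;\text{(levelling water columns)},
\qquad
\frac{\mathrm{d}(\Delta T)}{\mathrm{d}t} = -k\,\Delta T
\;\;\text{(cooling)}
```

In each case the quantity that drives the change (charge, height difference, temperature difference) is also the quantity being depleted, so each decays exponentially.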

Read about the classroom activity

In Richard Gill's activity the driver is the availability of intact corn kernels being heated such that water vapour is building up inside the kernel – something which is reduced by the consequent popping of those kernels.


A negative feedback cycle

Mapping the analogy

A key feature of an analogy is that it can be understood as a kind of mapping between two conceptual structures. The popcorn-making demonstration seems a very simple analogue, but mapping out the analogy might be useful (at least for the teacher) to clarify it. Below I present a representation of a mapping between popping corn and radioactive decay, suggesting which aspects of the analogue (the popping corn) map onto the target scientific concept.


Mapping an analogy between making pop-corn and radioactive decay

In this mapping I have used colour to highlight differences between the two (conceptual) structures. Perhaps the most significant difference is represented by the blue (target concept) versus red (analogue) features.


Most analogies only map to a limited extent

There will be aspects of an analogue that do not map onto anything on the target, and sometimes there will be an important feature of the target which has no analogous feature in the analogue. There is always the possibility that irrelevant features of an analogue will be mapped across by learners.

As one example, the comparison of the atom with a tiny solar system was once an image often used as a teaching analogy, yet it seems learners often have limited understandings of both analogue and target, and may be transferring across inappropriately – such as assuming the electrons are bound to the atom by gravity (Taber, 2013a). Where students have an alternative conception of the analogue (the sun attracts the earth, but not vice versa) they will often assume the same pattern in the target (the nucleus is not attracted to the electrons).

Does this matter? Well, yes and no. A teaching analogy is used to introduce a technical scientific concept by making it seem familiar. This is a starting point to be built upon (so, Richard Gill tells us that he will build upon the diminishing activity of cooking corn in his popcorn demonstration to introduce the idea of half-life), so it does not matter if students do not fully understand everything immediately. (Indeed, it is naive to assume most learners could acquire a new complex set of ideas all at once: learning is incremental – see Key ideas for constructivist teaching).

Analogies can act as 'scaffolds' to help learners venture out from their existing continents of knowledge towards new territory. Once this 'anchor' in learners' experience is established one can, so to speak, disembark from the scaffolding raft onto the more solid ground of the shore.

Read about scaffolding learning

However, it is important to be careful to make sure

  • (a) learners appreciate the limitations of models (such as analogies) – that they are thinking and learning tools, and not absolute accounts of the natural world; and that
  • (b) the teacher helps dismantle the 'scaffolding' once it is not needed, so that it is not retained as part of the learner's 'scientific' account.

Weak anthropomorphism

An example of that might be Gill's use of anthropomorphism.

…unstable atoms/nuclei need to become stable…

…I'm going to give the kernels some energy, making them unstable and they're going to want to pop…

Anthropomorphism

This type of language is often used to offer narratives that are more readily appreciated by learners (making the unfamiliar familiar, again) but students can come to use such language habitually, and it may come to stand in place of a more scientific account (Taber & Watts, 1996). So, 'weak' anthropomorphism used to help introduce something abstract and counter-intuitive is useful, but 'strong' anthropomorphism that comes to be adopted as a scientific explanation (e.g., nuclei decay because they want to be stable) is best avoided by seeking to move beyond the figurative language as soon as students are ready.

Read about anthropomorphism

The 'negative' analogy

The mapping diagram above highlights several potential teaching points that might be considered (perhaps not to be introduced immediately, but when the new concepts are later reinforced and developed).

Where does the energy come from?

One key difference between the two systems is that radioactive decay is (we think) completely spontaneous, whereas the corn only pops because we cook it (Gill used a Bunsen burner) and left to its own devices remains as unpopped kernels.

Related to this, the source of energy for popping corn is the applied heat, whereas unstable nuclei are already in a state of high energy and so have an 'internal' source for their activity. This is a key difference that will likely be obvious to some, but certainly not all, learners in most classes.

When is random, random?

A more subtle point relates to the 'random' nature of the two events. I suggest subtle, because there are many published reports written by researchers in science education which suggest even supposed experts can have pretty shaky ideas of what counts as random (Taber, 2013b).

Read 'Nothing random about a proper scientific evaluation?'

Read about the randomisation criterion

As far as scientists understand, the decay of one unstable nucleus in a sample of radioactive material (rather than another) is a random process. It is not just that we are not yet able to predict when a particular nucleus will decay – according to current scientific accounts it is not possible to predict in principle. This is an idea that even Einstein found difficult to accept.

That is not true with the corn. Presumably there are subtle differences between kernels – some have slightly more water content, or slightly weaker casings. Perhaps more significantly, some are heated more than others due to their position in the pan and the position of the heat source, or due to differential exposure to the cooking oil… In principle it would be possible to measure relevant variables and model the set-up to make good predictions. (In principle, even if in practice a very complex task.) The order of corn popping is no more random than…say…the roll of a dice. That is, physics tells us it follows natural laws, even if we are not in a position to fully model the phenomenon.

(We might suggest that a student who considered the corn popping as a random event because she saw apparently identical kernels all being heated in the same pan at the same time is simply missing certain 'hidden variables'. Einstein wondered if there were also 'hidden variables' that science had not yet uncovered which could explain random events such as why one nucleus rather than another decays at a particular moment.)

On the recoil

Perhaps a more significant difference is what is observed. The corn are observed 'jumping' (more anthropomorphic language?). Physics tells us that momentum must always be conserved, and the kernels act like tiny jet-propelled rockets. That is, as steam is released when the kernel bursts, the rest of the kernel 'jumps' in the opposite direction. (That is, by Newton's third law, there is a reaction force to the force pushing the steam out of the kernel. Momentum is a vector, so it is possible for a stationary object to break up into several moving parts with conservation of momentum.)

Something similar happens in radioactive decay. The emitted radiation carries away momentum, and the remaining 'daughter' nucleus recoils – although if the material is in the solid state this effect is dissipated by being spread across the lattice. So, the radioactivity which is detected is not analogous to the jumping corn, but to the steam it has released.
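In symbols, for a parent nucleus at rest that emits, say, an alpha particle (a standard physics result, not something from the article): total momentum is zero before and after, so the daughter recoils with momentum equal and opposite to that carried away,

```latex
0 = m_{\alpha}\vec{v}_{\alpha} + m_{D}\vec{v}_{D}
\quad\Rightarrow\quad
m_{\alpha} v_{\alpha} = m_{D} v_{D}
```

which is why the lighter emitted particle moves off much faster than the recoiling daughter nucleus.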

Is this important? That likely depends upon the level being taught. If the topic is being introduced to 14-16 year-olds, perhaps not. If the analogy is being explored with post-compulsory students doing an elective course, then maybe. (If not in chemistry; then certainly in physics, where learners are expected to apply the principle of conservation of momentum across various scenarios.)

Will this be on the exam?

When I drafted this, I suspected most readers might find my caveats above about the limitations of the analogy a bit pernickety (the kind of things an academic who's been out of the school classroom too long and forgotten the realities of working with pupils might dream up), but then I found what claims to be an Edexcel GCE Physics paper from 2012 (paper reference 6PH05/01) online. In this paper, one question begins:

"In a demonstration to her class, a teacher pours popcorn kernels onto a hot surface and waits for them to pop…".

Much to my delight, I found the first part of this question asked learners:

"How realistic is this demonstration as an analogy to radioactive decay?

Consider aspects of the demonstration that are similar to radioactive decay and aspects that are different"

Examination paper asking physics students to identify positive and negative aspects of the analogy.

Classes of radioactivity

One further difference did occur to me that may be important. At some level this analogy works for radioactivity regardless of what is being emitted from an excited nucleus. However, the analogy seems clearer for the emission of an alpha particle, or a beta particle, or a neutron, than in the case of gamma radiation.

Although in gamma decay an excited nucleus relaxes to a lower energy state emitting a photon, it may not be as obvious to learners that the nucleus has changed (arguably, it has not 'substantially' changed as there is no change of substance) – as it has the same mass number and charge as before. This may be a point to be raised if moving on later to discuss different classes of radioactivity.

Or, perhaps, with gamma decay one can use a different version of the analogy?

Another corny analogy

Although I do not think I had ever come across this analogy before reading the Education in Chemistry piece (perhaps because I do not make myself popcorn), Richard Gill does not seem to be the only person to have noticed this comparison. (They say 'great minds think alike' – and not just physicist Henri Poincaré thinking like Kryten from 'Red Dwarf'.) When I looked around the world-wide web I found there were two different approaches to using corn kernels to model radioactivity.

Some people use a demonstration similar to Mr Gill's. 2 However, there was also a different approach to using the corn. There were variations on this 3, but the gist was that

  • one starts with a large number of kernels
  • they are agitated (e.g., shaken in a box with different cells, poured onto the bench…)
  • then inspected to see which are pointing in some arbitrary direction designated as representing decay
  • the 'decayed' kernels are removed and counted
  • the rest of the sample is agitated again
  • etc.
Choose a direction to represent decay, and remove the aligned kernels as the 'activity' in that interval.
(Original image by Susie from Pixabay)

This lacks the excitement of popping corn, but could be a better model for gamma decay where the daughter nucleus is at a different energy after decay, but is otherwise unchanged.
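A short Python sketch (my own illustration; the published class activities cited in note 3 use real boxes of 100 kernels) shows the arithmetic of this version: if a kernel 'decays' whenever it points towards one of the four sides, a quarter are removed on each shake on average, giving a 'half-life' of about 2.4 shakes.

```python
import random

random.seed(7)

def shake_and_remove(kernels=100, p_decay=0.25, max_shakes=12):
    """Each shake, every remaining kernel 'decays' (is removed) with
    probability p_decay, e.g. pointing towards one side of a four-sided box."""
    remaining = kernels
    counts = [remaining]
    for _ in range(max_shakes):
        remaining = sum(1 for _ in range(remaining)
                        if random.random() >= p_decay)  # keep only the survivors
        counts.append(remaining)
    return counts

print(shake_and_remove())  # counts halve roughly every ln2/ln(4/3) ≈ 2.4 shakes
```

Counting two sides (p = 1/2) or three sides (p = 3/4) as 'decayed', as in the worksheet variations described in note 3, models nuclides with progressively shorter half-lives, which is what makes the comparative graphs instructive.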

Perhaps this version of the analogy could be improved by using a tray with an array of small dips (like tiny spot tiles) just the right size to stand corn kernels in the depressions with their points upwards. Then, after a very gentle tap on the bench next to the tile, those which have 'relaxed' from the higher energy state (i.e., fallen onto their sides) would be considered decayed. This would more directly model the change in potential energy and also avoid the need to keep removing kernels from the context (just as daughter atoms usually remain in a sample of radioactive material), as further gentle taps are unlikely to excite them back to the higher energy state. 4

Or, dear reader, perhaps I've just been thinking about this analogy for just a little too long now.


Sources:

Notes

1 Referring to the nuclei before and after radioactive decay as 'parents' and 'daughters' seems metaphorical, but this use has become so well established (in effect, these are now technical terms) that these descriptors are now what are known (metaphorically!) as 'dead metaphors'.

Read about metaphors in science


2 Here are some examples I found:

Jennifer Wenner, University of Wisconsin-Oshkosh uses the demonstration in undergraduate geosciences:

"I usually perform it after I have introduced radioactive decay and talked about how it works. It only takes a few minutes and I usually talk while I am waiting for the "decay" to happen 'Using Popcorn to Simulate Radioactive Decay'"

https://serc.carleton.edu/quantskills/activities/popcorn.html

The Institute of Physics (IoP) include this activity as part of their 'Modelling decay in the laboratory Classroom Activity for 14-16' but suggest the pan lid is kept on as a safety measure. (Any teacher planning on carrying out any activity in the lab should undertake a risk assessment first.)

I note the IoP also suggests care in using the term 'random':

Teacher: While we were listening to that there didn't seem to be any fixed pattern to the popping. Is there a word that we could use to describe that?

Lydia: Random?

Teacher: Excellent. But the word random has a very special meaning in physics. It isn't like how we think of things in everyday life. When do you use the word random in everyday life?

Lydia: Like if it's unpredictable? Or has no pattern?

https://spark.iop.org/modelling-decay-laboratory

Kieran Maher and 'Wikibooks contributors' suggest readers of their 'Basic Physics of Nuclear Medicine' could "think about putting some [oil] in a pot, adding the corn, heating the pot…" and indeed their readers "might also like to try this out while considering the situation", but warn readers not to "push this popcorn analogy too far" (pp.20-21).


3 Here are some examples I found:

Florida High School teacher Marc Mayntz offers teachers' notes and student instructions for his 'Nuclear Popcorn' activity, where students are told to "Carefully 'spill' the kernels onto the table".

Chelsea Davis (a student?) reports her results in 'Half Life Popcorn Lab' from an approach where kernels are shaken in a Petri dish.

Redwood High School's worksheet for 'Radioactive Decay and Half Life Simulation' has students work with 100 kernels in a box with its sides labelled 1-4 (kernels that have the small end pointed toward side 1 after "a sharp, single shake (up and down, not side to side)" are considered decayed). Students are told at the start to "Count the popcorn kernels to be certain there are exactly 100 kernels in your box".

This activity is repeated but with (i) kernels pointing to either side 1 or 2; and in a further run (ii) any of sides 1, 2, or 3; being considered decayed. This allows a graph to be drawn comparing all three sets of results.

The same approach is used in the Utah Education network's 'Radioactive Decay' activity, which specifies the use of a shoe box.

A site called 'Chegg' specified "a square box is filled with 100 popcorn kernels" and asked "What alteration in the experimental design would dramatically change the results? Why?" But, sadly, I needed to subscribe to see the answer.

The 'Lesson Planet' site offers 'Nuclear Popcorn' where "Using popcorn kernels spread over a tabletop, participants pick up all of those that point toward the back of the room, that is, those that represent decayed atoms".

'Anonymous' was set a version of this activity, but could not "seem to figure it out". 'Jiskha Homework Help' (tag line: "Ask questions and get helpful responses") helpfully responded,

"You ought to have a better number than 'two units of shake time…'

Read off the graph, not the data table."

(For some reason this brought to mind my sixth form mathematics teacher imploring us in desperation to "look at the ruddy diagram!")


4 Consider the challenge of developing this model to simulate nuclear magnetic resonance or laser excitation!


POEsing assessment questions…

…but not fattening the cow


Keith S. Taber


A well-known Palestinian proverb reminds us that we do not fatten the cow simply by repeatedly weighing it. But, sadly, teachers and others working in education commonly get so fixated on assessment that it seems to become an end in itself.


Images by Clker-Free-Vector-Images, OpenClipart-Vectors and Deedster from Pixabay

A research study using P-O-E

I was reading a report of a study that adopted the predict-observe-explain, P-O-E, technique as a means to elicit "high school students' conceptions about acids and bases" (Kala, Yaman & Ayas, 2013, p.555). As the name suggests, P-O-E asks learners to make a prediction before observing some phenomenon, and then to explain their observations (something that can be especially valuable when the predictions are based on strongly held intuitions which are contrary to what actually happens).

Read about Predict-Observe-Explain


The article on the publisher website

Kala and colleagues begin the introduction to their paper by stating that

"In any teaching or learning approach enlightened by constructivism, it is important to infer the students' ideas of what is already known"

Kala, Yaman & Ayas, 2013, p.555

Constructivism?

Constructivism is a perspective on learning that is informed by research into how people learn and a great many studies into student thinking and learning in science. A key point is how a learner's current knowledge and understanding influences how they make sense of teaching and what they go on to learn. Research shows it is very common for students to have 'alternative conceptions' of science topics, and often these conceptions either survive teaching or distort how it is understood.

The key point is that teachers who teach the science without regard to student thinking will often find that students retain their alternative ways of thinking, so constructivist teaching is teaching that takes into account and responds to the ideas about science topics that students bring to class.

Read about constructivism

Read about constructivist pedagogy

Assessment: summative, formative and diagnostic

If teachers are to take into account, engage with, and try to reshape, learners' ideas about science topics, then they need to know what those ideas are. Now there is a vast literature reporting alternative conceptions in a wide range of science topics, spread across thousands of research reports – but no teacher could possibly find time to study them all. There are books which discuss many examples and highlight some of the most common alternative conceptions (including one of my own, Taber, 2014).



However, in any class studying some particular topic there will nearly always be a spread of different alternative conceptions across the students – including some so idiosyncratic that they have never been reported in any literature. So, although reading about common misconceptions is certainly useful to prime teachers for what to look out for, teachers need to undertake diagnostic assessment to find out about the thinking of their own particular students.

There are many resources available to support teachers in diagnostic assessment, and some activities (such as using concept cartoons) that are especially useful at revealing student thinking.

Read about diagnostic assessment

Diagnostic assessment, assessment to inform teaching, is carried out at the start of a topic, before the teaching, to allow teachers to judge the learners' starting points and any alternative conceptions ('misconceptions') they may have. It can therefore be considered aligned to formative assessment ('assessment for learning') which is carried out as part of the learning process, rather than summative assessment (assessment of learning) which is used after studying to check, score, grade and certify learning.

P-O-E as a learning activity…

P-O-E can best support learning in topics where it is known learners tend to have strongly held, but unhelpful, intuitions. The predict stage elicits students' expectations – which, when contrary to the scientific account, can be confounded by the observe step. The 'cognitive conflict' generated by seeing something unexpected (made more salient by having been asked to make a formal prediction) is thought to help students concentrate on the actual phenomenon, and to provide 'epistemic relevance' (Taber, 2015).

Epistemic relevance refers to the idea that students are learning about things they are actually curious about, whereas for many students following a conventional science course must be experienced as being presented with the answers to a seemingly never-ending series of questions that had never occurred to them in the first place.

Read about the Predict-Observe-Explain technique

Students are asked to provide an explanation for what they have observed which requires deeper engagement than just recording an observation. Developing explanations is a core scientific practice (and one which is needed before another core scientific practice – testing explanations – is possible).

Read about teaching about scientific explanations

To be most effective, P-O-E is carried out in small groups, as this encourages the sharing, challenging and justifying of ideas: the kind of dialogic activity thought to be powerful in supporting learners in developing their thinking, as well as practising their skills in scientific argumentation. As part of dialogic teaching such an open forum for learners' ideas is not an end in itself, but a preparatory stage for the teacher to marshal the different contributions and develop a convincing argument for how the best account of the phenomenon is the scientific account reflected in the curriculum.

Constructivist teaching is informed by learners' ideas, and therefore relies on their elicitation, but that elicitation is never the end in itself but is a precursor to a customised presentation of the canonical account.

Read about dialogic teaching and learning

…and as a diagnostic activity

Group work also has another function – if the activity is intended to support diagnostic assessment, then the teacher can move around the room listening in to the various discussions and so collecting valuable information on what students think and understand. When assessment is intended to inform teaching it does not need to be about students completing tests and teachers marking them – a key principle of formative assessment is that it occurs as a natural part of the teaching process. It can be based on productive learning activities, and does not need marks or grades – indeed as the point is to help students move on in their thinking, any kind of formal grading whilst learning is in progress would be inappropriate as well as a misuse of teacher time.

Probing students' understandings about acid-base chemistry

The constructivist model of learning applies to us all: students, teachers, professors, researchers. Given what I have written above about P-O-E, about diagnostic assessment, and dialogic approaches to learning, I approached Kala and colleagues' paper with expectations about how they would have carried out their project.

These authors do report that they were able to diagnose aspects of student thinking about acids and bases, and found some learning difficulties and alternative conceptions,

"it was observed that eight of the 27 students had the idea that the "pH of strong acids is the lowest every time," while two of the 27 students had the idea that "strong acids have a high pH." Furthermore, four of the 27 students wrote the idea that the "substance is strong to the extent to which it is burning," while one of the 27 students mentioned the idea that "different acids which have equal concentration have equal pH."

Kala, Yaman & Ayas, 2013, pp.562-3

The key feature seems to be that, as reported in previous research, students conflate acid concentration and acid strength (when it is possible to have a high concentration solution of a weak acid or a very dilute solution of a strong acid).

Yet some aspects of this study seemed out of alignment with the use of P-O-E.

The best research style?

One feature was the adoption of a positivistic approach to the analysis,

Although there has been no reported analyzing procedure for the POE, in this study, a different [sic] analyzing approach was offered taking into account students' level of understanding… Data gathered from the written responses to the POE tasks were analyzed and divided into six groups. In this context, while students' prediction were divided into two categories as being correct or wrong, reasons for predictions were divided into three categories as being correct, partially correct, or wrong.

Kala, Yaman & Ayas, 2013, pp.560


Group   Prediction   Reasons
1       correct      correct
2       correct      partially correct
3       correct      wrong
4       wrong        correct
5       wrong        partially correct
6       wrong        wrong

"the written responses to the POE tasks were analyzed and divided into six groups"

There is nothing inherently wrong with doing this, but it aligns the research with an approach that seems at odds with the thinking behind constructivist studies that are intended to interpret a learner's thinking in its own terms, rather than simply compare it with some standard. (I have explored this issue in some detail in a comparison of two research studies into students' conceptions of forces – see Taber, 2013, pp.58-66.)

In terms of research methodology we might say it seem to be conceptualised within the 'wrong' paradigm for this kind of work. It seems positivist (assuming data can be unambiguously fitted into clear categories), nomothetic (tied to 'norms' and canonical answers) and confirmatory (testing thinking as matching model responses or not), rather than interpretivist (seeking to understand student thinking in its own terms rather than just classifying it as right or wrong), idiographic (acknowledging that every learner's thinking is to some extent unique to them) and discovery (exploring nuances and sophistication, rather than simply deciding if something is acceptable or not).

Read about paradigms in educational research

The approach used seemed more suitable for investigating something in the science laboratory than the complex, interactive, contextualised, and ongoing life of classroom teaching. Kala and colleagues describe their methodology as case study,

"The present study used a case study because it enables the giving of permission to make a searching investigation of an event, a fact, a situation, and an individual or a group…"

Kala, Yaman & Ayas, 2013, pp.558

A case study?

Case study is a naturalistic methodology (rather than involving an intervention, such as an experiment), and is idiographic, reflecting the value of studying the individual case. The case is one from among many instances of its kind (one lesson, one school, one examination paper, etc.), and is considered as a somewhat self-contained entity, yet one that is embedded in a context in which it is to some extent entangled (for example, what happens in a particular lesson is inevitably somewhat influenced by

  • the earlier sequence of lessons that teacher taught that class {the history of that teacher with that class},
  • the lessons the teacher and students came from immediately before this focal lesson,
  • the school in which it takes place,
  • the curriculum set out to be followed…)

Although a lesson can be understood as a bounded case (taking place in a particular room over a particular period of time involving a specified group of people) it cannot be isolated from the embedding context.

Read about case study methodology


Case study – study of one instance from among many


As case study is idiographic, and does not attempt to offer direct generalisation to other situations beyond that case, a case study should be reported with 'thick description' so a reader has a good mental image of the case (and can think about what makes it special – and so what makes it similar to, or different from, other instances the reader may be interested in). But that is lacking in Kala and colleagues' study, as they only tell readers,

"The sample in the present study consisted of 27 high school students who were enrolled in the science and mathematics track in an Anatolian high school in Trabzon, Turkey. The selected sample first studied the acid and base subject in the middle school (grades 6 – 8) in the eighth year. Later, the acid and base topic was studied in high school. The present study was implemented, based on the sample that completed the normal instruction on the acid and base topic."

Kala, Yaman & Ayas, 2013, pp.558-559

The reference to a sample can be understood as something of a 'reveal' of their natural sympathies – 'sample' is the language of positivist studies that assume a suitably chosen sample reflects a wider population of interest. In case study, a single case is selected and described rather than a population sampled. A reader is left rather to guess what population is being sampled here, and indeed precisely what the 'case' is.

Clearly, Kala and colleagues elicited some useful information that could inform teaching, but I sensed that their approach would not have made optimal use of a learning activity (P-O-E) that can give insight into the richness, and, sometimes, subtlety of different students' ideas.

Individual work

Even more surprising was the researchers' choice to ask students to work individually without group discussion.

"The treatment was carried out individually with the sample by using worksheets."

Kala, Yaman & Ayas, 2013, p.559

This is a choice which would surely have compromised the potential of the teaching approach to allow learners to explore, and reveal, their thinking?

I wondered why the researchers had made this choice. As they were undertaking research, perhaps they thought it was a better way to collect data that they could readily analyse – but that seems to be choosing limited data that can be easily characterised over the richer data that engagement in dialogue would surely reveal?

Assessment habits

All became clear near the end of the study when, in the final paragraph, the reader is told,

"In the present study, the data collection instruments were used as an assessment method because the study was done at the end of the instruction/ [sic] on the acid and base topics."

Kala, Yaman & Ayas, 2013, p.571

So, it appears that the P-O-E activity, which is an effective way of generating the kind of rich but complex data that helps a teacher hone their teaching for a particular group, was being adopted, instead, as a means of summative assessment. This is presumably why the analysis focused on the degree of match to the canonical science, rather than interpreting the different ways of thinking in the class. Again presumably, this is why the highly valuable group aspect of the approach was dropped in favour of individual working – summative assessment needs not only to grade against norms, but to do so on the basis of each individual's unaided work.

An activity which offers great potential for formative assessment (as it is a learning activity as well as a way of exploring student thinking); and that offers an authentic reflection of scientific practice (where ideas are presented, challenged, justified, and developed in response to criticism); and that is generally enjoyed by students because it is interactive and the predictions are 'low stakes' making for a fun learning session, was here re-purposed to be a means of assessing individual students once their study of a topic was completed.

Kala and colleagues certainly did identify some learning difficulties and alternative conceptions this way, and this allowed them to evaluate student learning. But I cannot help thinking an opportunity was lost here to explore how P-O-E can be used in a formative assessment mode to inform teaching:

  • diagnostic assessment as formative assessment can inform more effective teaching
  • diagnostic assessment as summative assessment only shows where teaching has failed

Yes, I agree that "in any teaching or learning approach enlightened by constructivism, it is important to infer the students' ideas of what is already known", but the point of that is to inform the teaching and so support student learning. What were Kala and colleagues going to do with their inferences about students' ideas when they used the technique as "an assessment method … at the end of the instruction"?

As the Palestinian adage goes, you do not fatten up the cow by weighing it, just as you do not facilitate learning simply by testing students. To mix my farmyard allusions, this seems to be a study of closing the barn door after the horse has already bolted.



Neuroadaptation gremlins on the see-saw in your brain

The brain's reward pathway is like a teeter-totter because…


Keith S. Taber


in your brain there is a teeter-totter like in a kids' playground…these neuroadaptation gremlins hopping on the pain side of the balance…the gremlins hop off …But if we …accumulate so many gremlins on the pain side of our balance …we've crossed over into the disease of addiction…we are craving because …it's the gremlins jumping up and down

Dr Anna Lembke talking on 'All in the mind'

Is there a see-saw in your brain?
(Original images by mohamed Hassan and OpenClipart-Vectors from Pixabay)

Dr Anna Lembke, Professor of Psychiatry at Stanford University, explained how addiction relates to dopamine and the brain's reward pathway with an analogy of a see-saw. She was talking to Sana Qadar for an episode of the ABC programme 'All in the Mind' called 'How dopamine drives our addictions'.

Analogies are used in teaching and in science communication to help 'make the unfamiliar familiar', to show someone that something they do not (yet) know about is actually, in some sense at least, a bit like something they are already familiar with. In an analogy, there is a mapping between some aspect(s) of the structure of the target ideas and the structure of the familiar phenomenon or idea being offered as an analogue. Such teaching analogies can be useful to the extent that someone is indeed highly familiar with the 'analogue' (and more so than with the target knowledge being communicated); that there is a helpful mapping across between the analogue and the target; and that comparison is clearly explained (making clear which features of the analogue are relevant, and how).

Read about science analogies

A fried brain

During the programme the interviewer (Qadar) uses a metaphor for how addiction influences brain chemistry:

Sana Qadar: "The problem is when we are becoming addicted to something, our brain's ability to naturally produce dopamine gets fried."

Anna Lembke: "So essentially what happens in the brain as we tip toward the compulsive cycle of overconsumption or addiction is that we start to down-regulate our own dopamine production and dopamine transmission in order to compensate for the ways that we are bombarding our brain's reward pathway with too much dopamine through ingestion of these incredibly potent and rewarding substances and behaviours."

What does Sana Qadar mean by 'fried'? Presumably not destroyed, as the subsequent interview suggests that recovery is possible (see below) – although the brain's ability to naturally produce dopamine would surely not recover from actual frying. So, perhaps, fried means disturbed, or damaged? Given the following dialogue it might mean thrown out of balance.

Perhaps Qadar was thinking of the brain as circuitry, as the term is commonly applied to damaged circuits (I think the term derives from damage caused by overheating, as can happen when there is a 'short' for example, which permanently stops the 'fried' components functioning). So, perhaps for Qadar this is a dead metaphor – a term which started as a metaphor but which, with habitual use, has come to be treated as having literal meaning – at least in relation to electrical circuits and, by analogy, brain circuitry?


A fried brain?
(Images by OpenClipart-Vectors and Roger YI from Pixabay)

Balancing those gremlins

What I found especially interesting is the way Dr Lembke made extensive use of an analogy in her explanation, much in the way a teacher might keep referring back to the same metaphor or analogy or model when introducing an abstract topic.

Sana Qadar tells listeners that "to explain how this process unfolds in the brain's reward pathway, Dr Lembke uses the analogy of a teeter-totter or seesaw":

"Because pleasure and pain are processed in the same part of the brain and work like opposite sides of the balance, it means for every pleasure there is a cost and that cost is pain.

So, if you imagine that in your brain there is a teeter-totter like in a kids' playground, that teeter-totter will tip to one side when we experience pleasure, and the opposite side when we experience pain.

But no sooner has that balance tipped to the side of pleasure, for example when I eat a piece of chocolate, then my brain will work very hard to restore a level balance or what neuroscientists call homeostasis. And it does that not just by bringing the balance level again but first by tipping it an equal and opposite amount to the side of pain, that is the after-effect, the come-down. I imagine that as these neuroadaptation gremlins hopping on the pain side of the balance to bring it level again.

Now, if we wait long enough, the gremlins hop off and homeostasis is restored as we go back to our baseline tonic level of dopamine firing. But if we continue to ingest addictive substances or behaviours over very long periods of time, we essentially accumulate so many gremlins on the pain side of our balance that we are in a chronic dopamine deficit state, and that is essentially where we get when we've crossed over into the disease of addiction."

Dr Anna Lembke talking on 'All in the mind'

As part of the programme of treatment Dr Lembke has developed for those suffering from addictions, she often recommends a period of complete abstinence – asking her clients to abstain for at least 30 days:

Because 30 days is about the minimum amount of time it takes for the brain to restore baseline dopamine firing. Another way of saying this is 30 days is about the minimum amount of time it takes for the gremlins to hop off the pain side of the balance so that homeostasis or balance can be restored.

… I think in many people it is possible with abstinence to reset the reward pathway, the brain has an enormous amount of plasticity.

Dr Anna Lembke talking on 'All in the mind'

Abstinence is obviously not easy when the person is constantly faced with relevant triggers, as

"…what happens when we are triggered is that we release a little bit of dopamine in the reward pathway. … But if we wait long enough, those gremlins will hop off the pain side of the balance, and balance is restored….we are craving because we are in a dopamine deficit state, it's the gremlins jumping up and down on the pain side of the balance. But if we can just wait a few more moments, they will get off, homeostasis will be restored and that feeling will pass."

"…It's a fine line between pleasure and pain
You've done it once you can do it again
Whatever you've done don't try to explain
It's a fine, fine line between pleasure and pain.."

From the lyrics of the song 'Pleasure and Pain' (covered by Manfred Mann's Earth Band), by Holly Knight & Michael Donald Chapman

It's a fine line between pleasure and pain

Sana Qadar suggests that certain kinds of pain can actually be good for us. From a biological perspective this is clearly so, as pain provides signals to motivate us to change our behaviour (move away from the fire, put down the very heavy object), but that is not what she is referring to. Rather, that "Dr Lembke says it has to do with the fact that pleasure and pain are processed in the same part of the brain":

Well, just like when we press on the pleasure side of the balance the gremlins hop on the pain side and ultimately shift our hedonic setpoint or our joy setpoint to the side of pain, it's also true that when we press on the pain side of the balance, so we intentionally invite psychologically or physically painful experiences into our lives, that those neuroadaptation gremlins will then hop on the pleasure side of the balance and we will start to up-regulate our own endogenous dopamine, not as the primary response to the stimulus but as the after-response

…I'm absolutely not talking about extreme forms of pain like cutting [which is] not a healthy way to get dopamine

…[For example] Michael was somebody who was addicted to cocaine and alcohol, and got into recovery and immediately experienced the dopamine deficit state, those gremlins on the pain side of the balance, he was anxious, he was irritable, and he also felt very numb, kind of an absence of emotions, which was really scary for him.

And he serendipitously discovered in early recovery that if he took a very cold shower, that created for him the same kinds of feelings, in a muted way, that he used to get from drugs, so he got into this practice of every morning taking a cold shower, and it worked great for him."

Dr Anna Lembke talking on 'All in the mind'

Gremlins, indeed?

Of course there is no see-saw in the brain, but a see-saw is a familiar everyday object that people understand can be balanced – or not; and that, if more children (of similar size) load up one side than the other, it will be out of balance, and will only level up once the loads are balanced.

Strictly, there are some complications here with the analogue. If the children are at different distances from the fulcrum, that will change their turning effect (so two children could be balanced by one of similar mass, depending on where they are positioned). Similarly, when the moments are balanced, the see-saw will not necessarily be level, as 'balance' means no overall turning effect. So, if the see-saw was already at an angle to the horizontal, loading it up in a balanced way should not shift it back to being level.
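To make the turning effects explicit (a standard school-physics gloss, my addition rather than anything from the programme): for a rigid see-saw pivoted at its centre, a child of mass $m$ sitting a distance $d$ from the fulcrum contributes a moment of magnitude $mgd$, and the see-saw is in rotational equilibrium when the moments on the two sides are equal:

$m_1 g d_1 = m_2 g d_2$

So, for example, a 40 kg child sitting 1.5 m from the pivot balances a 30 kg child sitting 2 m from the pivot, since 40 × 1.5 = 30 × 2.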

Perhaps there is something comparable in the reward system to whereabouts children sit on the see-saw (perhaps some synapses are more sensitive to the effects of dopamine than others?), but this would be over-complicating an analogy that is intended to offer a link to a simple everyday phenomenon.

Are gremlins like children – do they come in different sizes? Perhaps it seems a little childish even to talk of such things in the brain. But there was once a strong (if discouraged these days 1) tradition of considering a homunculus, a little observer, inside the brain as if in a control room. Moreover, if the lauded physicist James Clerk Maxwell could invoke his famous demon to explain aspects of thermodynamics, we should not censure Lembke's metaphorical gremlins.

If this comparison was being used as a teaching analogy in a formal course, then we might want a more careful setting out of the positive and negative aspects of the analogy (those things that do, and do not, map across from the see-saw to the reward system). But Dr Lembke is not trying to teach her clients to pass tests about brain science, but rather give them a way of thinking about their problems that can help them plan and change behaviour – that is, a useful and straightforward model they can apply in overcoming their addictions.


An episode of the radio programme/podcast 'All in the mind'

To find out more

Prof. Lembke was talking about a very important topic and here I have only abstracted particular comments to illustrate her use of the analogy. For a fuller account of the topic, and in particular Prof. Lembke's clinical work to help people struggling with addiction, please refer to the full interview.



Note

1 The term is still in use, but in a somewhat different sense:

"in neuroanatomy, the cortical homunculus represents either the motor or the sensory distribution along the cerebral cortex of the brain. The motor homunculus is a topographic representation of the body parts and its correspondents along the precentral gyrus of the frontal lobe. While the sensory homunculus is a topographic representation of the body parts along the postcentral gyrus of the parietal lobe."

Nguyen and Duong, 2021

So, nowadays we each have two 'little men' in our brains.

COVID is like photosynthesis because…

An analogy based on a science concept


Keith S. Taber


Photosynthesis illuminating a plant?
(Image by OpenClipart-Vectors from Pixabay)

Analogies, metaphors and similes are used in communication to help make the unfamiliar familiar by suggesting that some novel idea or phenomenon being introduced is in some ways like something the reader/listener is already familiar with. Analogies, metaphors and similes are commonly used in science teaching, and also in science writing and journalism.

An analogy maps out similarities in structure between two phenomena or concepts. This example, from a radio programme, compared the COVID pandemic with photosynthesis.

Read about science analogies

Photosynthesis and the pandemic

Professor Will Davies of Goldsmiths, University of London suggested that:

"So, what we were particularly aiming to do, was to understand the collision between a range of different political economic factors of a pre-2020 world, and how they were sort of reassembled and deployed to cope with something which was without question unprecedented.

We used this metaphor of photosynthesis because if you think about photosynthesis in relation to plants, the sun both lights things up but at the same time it feeds them and helps them to grow, and I think one of the things the pandemic has done for social scientists is to serve both as a kind of illumination of things that previously maybe critical political economists and heterodox scholars were pointing to but now became very visible to the mainstream media and to mainstream politics. But at the same time it also accentuated and deepened some of those tendencies such as our reliance on various digital platforms, certain gender dynamics of work in the household, these sort of things that became acute and undeniable and potentially politicised over the course of 2020, 2021."

Prof. Will Davies, talking on 'Thinking Allowed' 1

Will Davies, Professor in Political Economy at Goldsmiths, University of London, was talking to sociologist Prof. Laurie Taylor, who presents the BBC programme 'Thinking Allowed', as part of an episode called 'Covid and change'.

A scientific idea used as analogue

Prof. Davies refers to using "this metaphor of photosynthesis". However he goes on to suggest how the two things he is comparing are structurally similar – the pandemic has shone a light on social issues at the same time as providing the conditions for them to become more extreme, akin to how light both illuminates plants and changes them. A metaphor is an implicit comparison where the reader/listener is left to interpret the comparison, but a metaphor or simile that is explicitly developed to explain the comparison can become an analogy.

Read about science metaphors

Often science concepts are introduced by analogy to more familiar everyday ideas, objects or events. Here, however, a scientific concept, photosynthesis, is used as the analogue – the source used to explain something novel. Prof. Davies assumes listeners will be familiar enough with this science concept for it to be helpful in introducing his research.

Mischaracterising photosynthesis?

A science teacher might not like the notion that the sun feeds plants – indeed if a student suggested this in a science class it would likely be judged as an alternative conception. In photosynthesis, carbon dioxide (from the atmosphere) and water (usually absorbed from the soil) provide the starting materials, and the glucose that is produced (along with oxygen) enables other processes – such as growth which relies on other substances also being absorbed from the soil. (So-called 'plant foods', which would be better characterised as plant nutritional supplements, contain sources of elements such as nitrogen, phosphorus and potassium). Light is necessary for photosynthesis, but the sunlight is not best considered 'food'.

One might also argue that Prof. Davies has misidentified the source for his analogy, and perhaps he should rather have suggested sunlight as the source metaphor for his comparison as sunlight both illuminates plants and enables them to grow. Photosynthesis takes place inside chloroplasts within a plant's tissues, and does not illuminate the plant. However, Prof. Davies' expertise is in political economy, not natural science, and it was good to see a social scientist looking to use a scientific idea to explain his research.


Baking fresh electrons for the science doughnut

Faster-than-light electrons race from a sitting start and are baked to give off light brighter than millions of suns that can be used to image tiny massage balls: A case of science communication


Keith S. Taber

(The pedantic science teacher)


Ockham's razor

Ockham's razor (also known as Occam's razor) is a principle that is sometimes applied as a heuristic in science, suggesting that explanations should not be unnecessarily complicated. Faced with a straightforward explanation, and an alternative convoluted explanation, then all other things being equal we should prefer the former – not simply accept it, but treat it as the preferred hypothesis to test out first.

Ockham's Razor is also an ABC radio show offering "a soap box for all things scientific, with short talks about research, industry and policy from people with something thoughtful to say about science". The show used to offer recorded essays (akin to the format of BBC's A Point of View), but now tends to record short live talks.

I've just listened to an episode called The 'science donut' – in fact, I listened several times, as I thought it was fascinating: in a few minutes there was much to attend to.


The 'Science Donut': a recent episode of Ockham's Razor

I approached the episode as someone with an interest in science, of course, but also as an educator with an ear to the ways in which we communicate science in teaching. Teachers do not simply present sequences of information about science, but engage pedagogy (i.e., strategies and techniques to support learning). Other science communicators (whether journalists, or scientists themselves directly addressing the public) use many of the same techniques. Teaching conceptual material (such as science principles, theories, models…) can be seen as making the unfamiliar familiar, and the constructivist perspective on how learning occurs suggests this is supported by showing the learner how that which is currently still unfamiliar, is in some way like something familiar, something they already have some knowledge/experience of.

Science communicators may not be trained as teachers, so may sometimes be using these techniques in a less considered or even less deliberate manner. That is, people use analogy, metaphor, simile, and so forth, as a normal part of everyday talk to such an extent that these tropes may be generated automatically, in effect, implicitly. When we are regularly talking about an area of expertise we almost do not have to think through what we are going to say. 1

Science communicators also often have much less information about their audience than teachers: a radio programme/podcast, for example, can be accessed by people of a wide range of background knowledge and levels of formal qualifications.

One thing teachers often learn to do very early in their careers is to slow down the rate of introducing new information, and focus instead on a limited number of key points they most want to get across. Sometimes science in the media is very dense in the frequency of information presented or the background knowledge being drawn upon. (See, for example, 'Genes on steroids? The high density of science communication'.)

A beamline scientist

Dr Emily Finch, who gave this particular radio talk, is a beamline scientist at the Australian Synchrotron. Her talk began by recalling how her family visited the Synchrotron facility on an open day, and how she later went on to work there.

She then gave an outline of the functioning of the synchrotron and some examples of its applications. Along the way there were analogies, metaphors, anthropomorphism, and dubiously fast electrons.

The creation of the god particle

To introduce the work of the particle accelerator, Dr Finch reminded her audience of the research to detect the Higgs boson.

"Do you remember about 10 years ago scientists were trying to make the Higgs boson particle? I see some nods. They sometimes call it the God particle and they had a theory it existed, but they had not been able to prove it yet. So, they decided to smash together two beams of protons to try to make it using the CERN large hadron collider in Switzerland…You might remember that they did make a Higgs boson particle".

This is a very brief summary of a major research project that involved hundreds of scientists and engineers from a great many countries working over years. But this abbreviation is understandable as this was not Dr Finch's focus, but rather an attempt to link her actual focus, the Australian Synchrotron, to something most people will already know something about.

However, aspects of this summary account may have potential to encourage the development of, or reinforce an existing, common alternative conception shared by many learners. This is regarding the status of theories.

In science, theories are 'consistent, comprehensive, coherent and extensively evidenced explanations of aspects of the natural world', yet students often understand theories to be nothing more than just ideas, hunches, guesses – conjectures at best (Taber, Billingsley, Riga & Newdick, 2015). In a very naive take on the nature of science, a scientist comes up with an idea ('theory') which is tested, and is either 'proved' or rejected.

This simplistic take is wrong in two regards – something does not become an established scientific theory until it is supported by a good deal of evidence; and scientific ideas are not simply proved or disproved by testing, but rather become better supported or less credible in the light of the interpretation of data. Strictly, scientific ideas are never finally proved to become certain knowledge, but rather remain as theories. 2

In everyday discourse, people will say 'I have a theory' to mean no more than 'I have a suggestion'.

A pedantic scientist or science teacher might be tempted to respond:

"no you don't, not yet,"

This is sometimes not the impression given by media accounts – presumably because headlines such as 'research leads to scientist becoming slightly more confident in theory' do not have the same impact as 'cure found', 'discovery made', or 'theory proved'.

Read about scientific certainty in the media

The message that could be taken away here is that scientists had the idea that Higgs boson existed, but they had not been able to prove it till they were able to make one. But the CERN scientists did not have a Higgs boson to show the press, only the data from highly engineered detectors, analysed through highly complex modelling. Yet that analysis suggested they had recorded signals that closely matched what they expected to see when a short lived Higgs decayed allowing them to conclude that it was very likely one had been formed in the experiment. The theory motivating their experiment was strongly supported – but not 'proved' in an absolute sense.

The doughnut

Dr Finch explained that

"we do have one of these particle accelerators here in Australia, and it's called the Australian Synchrotron, or as it is affectionately known the science donut

…our synchrotron is a little different from the large hadron collider in a couple of main ways. So, first, we just have the one beam instead of two. And second, our beam is made of electrons instead of protons. You remember electrons, right, they are those tiny little negatively charged particles and they sit in the shells around the atom, the centre of the atom."

Dr Emily Finch talking on Ockham's Razor

One expects that members of the audience would be able to respond to this description and (due to previous exposure to such representations) picture images of atoms with electrons in shells. 'Shells' is of course a kind of metaphor here, even if one which with continual use has become a so-called 'dead metaphor'. Metaphor is a common technique used by teachers and other communicators to help make the unfamiliar familiar. In some simplistic models of atomic structure, electrons are considered to be arranged in shells (the K shell, the L shell, etc.), and a simple notation for electronic configuration based on these shells is still often used (e.g., Na as 2.8.1).

Read about science metaphors

However, this common way of talking about shells has the potential to mislead learners. Students can, and sometimes do, develop the alternative conception that atoms have actual physical shells of some kind, into which the electrons are located. The shells scientists refer to are abstractions, but may be misinterpreted as material entities, as actual shells. The use of anthropomorphic language, that is that the electrons "sit in the shells", whilst helping to make the abstract ideas familiar and so perhaps comfortable, can reinforce this. After all, it is difficult to sit in empty space without support.

The subatomic grand prix?

Dr Finch offers her audience an analogy for the synchrotron: the electrons "are zipping around. I like to think of it kind of like a racetrack." Analogy is another common technique used by teachers and other communicators to help make the unfamiliar familiar.

Read about science analogies

Dr Finch refers to the popularity of the Australian Formula 1 (F1) Grand Prix that takes place in Melbourne, and points out

"Now what these race enthusiasts don't know is that just a bit further out of the city we have a race track that is operating six days a week that is arguably far more impressive.

That's right, it is the science donut. The difference is that instead of having F1s doing about 300 km an hour, we have electrons zipping around at the speed of light. That's about 300 thousand km per second.

Dr Emily Finch talking on Ockham's Razor

There is an interesting slippage – perhaps a deliberate rhetorical flourish – from the synchrotron being "kind of like a racetrack" (a simile) to being "a race track" (a metaphor). Although racing electrons lacks a key attraction of an F1 race (different drivers of various nationalities driving different cars built by competing teams presented in different livery – whereas who cares which of myriad indistinguishable electrons would win a race?) that does not undermine the impact of the mental imagery encouraged by this analogy.

This can be understood as an analogy rather than just a simile or metaphor as Dr Finch maps out the comparison:


target concept | analogue
a synchrotron | a racetrack
operates six days a week | [Many in the audience would have known that the Melbourne Grand Prix takes place on a 'street circuit' that is only set up for racing one weekend each year.]
racing electrons | racing 'F1s' (i.e., Grand Prix cars)
at the speed of light | at about 300 km an hour

An analogy between the Australian Synchrotron and the Melbourne Grand Prix circuit

So, here is an attempt to show how science has something just like the popular race track, but perhaps even more impressive – generating speeds orders of magnitude greater than even Lewis Hamilton could drive.

They seem to like their F1 comparisons at the Australian Synchrotron. I found another ABC programme ('The Science Show') where Nobel Laureate "Brian Schmidt explains, the synchrotron is not being used to its best capability",

"the analogy here is that we invested in a $200 million Ferrari and decided that we wouldn't take it out of first gear and do anything other than drive it around the block. So it seems a little bit of a waste"

Brian Schmidt (Professor of Astronomy, and Vice Chancellor, at Australian National University)

A Ferrari being taken for a spin around the block in Melbourne (Image by Lee Chandler from Pixabay)

How fast?

But did Dr Finch suggest there that the electrons were travelling at the speed of light? Surely not? Was that a slip of the tongue?

"So, we bake our electrons fresh in-house using an electron gun. So, this works like an old cathode ray tube that we used to have in old TVs. So, we have this bit of tungsten metal and we heat it up and when it gets red hot it shoots out electrons into a vacuum. We then speed up the electrons, and once they leave the electron gun they are already travelling at about half the speed of light. We then speed them up even more, and after twelve metres, they are already going at the speed of light….

And it is at this speed that we shoot them off into a big ring called the booster ring, where we boost their energy. Once their energy is high enough we shoot them out again into another outer ring called the storage ring."

Dr Emily Finch talking on Ockham's Razor

So, no, the claim is that the electrons are accelerated to the speed of light within twelve metres, and then have their energy boosted even more.

But this is contrary to current physics. According to the currently accepted theories, and specifically the special theory of relativity, only entities which have zero rest mass, such as photons, can move at the speed of light.

Electrons have a tiny mass by everyday standards (about 0.000 000 000 000 000 000 000 000 001 g), but they are still 'massive' particles (i.e., particles with mass) and it would take infinite energy to accelerate a single tiny electron to the speed of light. So, given our current best understanding, this claim cannot be right.
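The textbook relation behind this (standard special relativity, added here for readers who want the detail) is that the kinetic energy of a particle of rest mass $m$ moving at speed $v$ is

$E_k = (\gamma - 1)mc^2$, where $\gamma = 1/\sqrt{1 - v^2/c^2}$

As $v$ approaches $c$ the denominator approaches zero, so $\gamma$ – and with it the energy required – grows without limit. Even at the 99.99998% of $c$ later quoted in the programme's correction (see note 3), $\gamma$ is already about 1600: each electron carries roughly 1600 times its rest energy, yet reaching $c$ itself would still take infinitely more.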

I looked to see what was reported on the website of the synchrotron itself.

The electron beam travels just under the speed of light – about 299,792 kilometres a second.

https://www.ansto.gov.au/research/facilities/australian-synchrotron/overview

Strictly, the electrons do not travel at the speed of light, but at very nearly the speed of light.

The speed of light in a vacuum is 299 792 458 m s⁻¹ (exactly – the metre is now defined so as to fix this value), but often in science we are working to limited precision, so this may be rounded to 2.998 × 10⁸ m s⁻¹ for many purposes. Indeed, sometimes 3 × 10⁸ m s⁻¹ is good enough for so-called 'back of the envelope' calculations. So, in a sense, Dr Finch was making a similar approximation.

But this is one approximation that a science teacher might want to avoid: 'electrons travelling at the speed of light' may be approximately correct, but it is also thought to be physically impossible. That is, although the difference in magnitude between

  • (i) the maximum electron speeds achieved in the synchrotron, and
  • (ii) the speed of light,

might be a tiny proportional difference – conceptually the distinction is massive in terms of modern physics. (I imagine Dr Finch is aware of all this, but perhaps her background in geology does not make this seem as important as it might appear to a physics teacher.)

Dr Finch does not explicitly say that the electrons ever go faster than the speed of light (unlike the defence lawyer in a murder trial who claimed nervous impulses travel faster than the speed of light) but I wonder how typical school age learners would interpret "they are already going at the speed of light….And it is at this speed that we shoot them off into a big ring called the booster ring, where we boost their energy". I assume that refers to maintaining their high speeds to compensate for energy transfers from the beam: but only because I think Dr Finch cannot mean accelerating them beyond the speed of light. 3

The big doughnut

After the reference to how "we bake our electrons fresh in-house", Dr Finch explains

"And so it is these two rings, these inner and outer rings, that give the synchrotron its nickname, the science donut. Just like two rings of delicious baked electron goodness…

So, just to give you an idea of scale here, this outer ring, the storage ring, is about forty one metres across, so it's a big donut."

Dr Emily Finch talking on Ockham's Razor
A big doughnut? The Australian Synchrotron (Source: Australia's Nuclear Science and Technology Organisation)

So, there is something of an extended metaphor here. The doughnut is so-called because of its shape, but this doughnut (a bakery product) is used to 'bake' electrons.

If audience members were to actively reflect on and seek to analyse this metaphor then they might notice an incongruity, perhaps a mixed metaphor, as the synchrotron seems to shift from being that which is baked (a doughnut) to that doing the baking (baking the electrons). Perhaps the electrons are the dough, but, if so, they need to go into the oven.

But, of course, humans implicitly process language in real time, and poetic language tends to be understood intuitively without needing reflection. So, a trope such as this may 'work' to get across the flavour (sorry) of an idea, even if under close analysis (by our pedantic science teacher again) the metaphor appears only half-baked.

Perverting the electrons

Dr Finch continued

"Now the electrons like to travel in straight lines, so to get them to go round the rings we have to bend them using magnets. So, we defect the electrons around the corners [sic] using electromagnetic fields from the magnets, and once we do this the electrons give off a light, called synchrotron light…

Dr Emily Finch talking on Ockham's Razor

Now electrons are not sentient and do not have preferences in the way that someone might prefer to go on a family trip to the local synchrotron rather than a Formula 1 race. Electrons do not like to go in straight lines. They fit with Newton's first law – the law of inertia. An electron that is moving ('travelling') will move ('travel') in a straight line unless there is a net force to pervert it. 4

If we describe this as electrons 'liking' to travel in straight lines it would be just as true to say electrons 'like' to travel at a constant speed. Language that assigns human feelings and motives and thoughts to inanimate objects is described as anthropomorphic. Anthropomorphism is a common way of making the unfamiliar familiar, and it is often used in relation to molecules, electrons, atoms and so forth. Sadly, when learners pick up this kind of language, they do not always appreciate that it is just meant metaphorically!

Read about anthropomorphism

The brilliant light

Dr Finch tells her audience that

"This synchrotron light is brighter than a million suns, and we capture it using special equipment that comes off that storage ring.

And this equipment will focus and tune and shape that beam of synchrotron light so we can shoot it at samples like a LASER."

Dr Emily Finch talking on Ockham's Razor

Whether the radiation is 'captured' is a moot point, as it no longer exists once it has been detected. But what caught my attention here was the claim that the synchrotron radiation was brighter than a million suns. Not because I necessarily thought this bold claim was 'wrong', but rather I did not understand what it meant.

The statement seems sensible at first hearing, and clearly it means qualitatively that the radiation is very intense. But what did the quantitative comparison actually mean? I turned again to the synchrotron webpage. I did not find an answer there, but on the site of a UK accelerator I found

"These fast-moving electrons produce very bright light, called synchrotron light. This very intense light, predominantly in the X-ray region, is millions of times brighter than light produced from conventional sources and 10 billion times brighter than the sun."

https://www.diamond.ac.uk/Home/About/FAQs/About-Synchrotrons.html#

Sunlight spreads out and its intensity drops according to an inverse square law. Move twice as far away from a sun, and the radiation intensity drops to a quarter of what it was when you were closer. Move to ten times as far away from the sun than before, and the intensity is 1% of what it was up close.
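In symbols (my addition – the standard inverse square law): a source radiating power $P$ equally in all directions produces, at distance $r$, an intensity

$I(r) = \dfrac{P}{4\pi r^2}$

so doubling $r$ divides $I$ by four, and increasing $r$ tenfold divides $I$ by a hundred – the 1% figure above. A well-collimated beam, by contrast, keeps nearly all its power within a fixed cross-section, so its intensity barely falls with distance.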

The synchrotron 'light' is being shaped into a beam "like a LASER". A LASER produces a highly collimated beam – that is, the light does not (significantly) spread out. This is why football hooligans choose LASER pointers rather than conventional torches to intimidate players from a safe distance in the crowd.

Comparing light with like

This is why I do not understand how the comparison works, as the brightness of a sun depends on how close you are to it – a point previously discussed here in relation to NASA's Parker solar probe (NASA puts its hand in the oven). If I look out at the night sky on a clear moonlit night then surely I am exposed to light from more "than a million suns", but most of them are so far away I cannot even make them out. Indeed there are faint 'nebulae' I can hardly perceive that are actually galaxies shining with the brightness of billions of suns. 5 If that is the comparison, then I am not especially impressed by something being "brighter than a million suns".


How bright is the sun? It depends which planet you are observing from. (Images by AD_Images and Gerd Altmann from Pixabay)


We are told not to look directly at the sun as it can damage our eyes. But a hypothetical resident of Neptune or Uranus could presumably safely stare at the sun (just as we can safely stare at stars much brighter than our sun, because they are so far away). So we need to ask: "brighter than a million suns", as observed from how far away?


How bright is the sun? That depends on viewing conditions
(Image by UteHeineSch from Pixabay)

Even if referring to our Sun as seen from the earth, the brightness varies according to its apparent altitude in the sky. So, "brighter than a million suns" needs to be specified further – as perhaps "more than a million times brighter than the sun as seen at midday from the equator on a cloudless day"? Of course, again, only the pedantic science teacher is thinking about this: everyone knows well enough what being brighter than a million suns implies. It is pretty intense radiation.

Applying the technology

Dr Finch went on to discuss a couple of applications of the synchrotron. One related to identifying pigments in art masterpieces. The other was quite timely in that it related to investigating the infectious agent in COVID.

"Now by now you have probably seen an image of the COVID virus – it looks like a ball with some spikes on it. Actually it kind of looks like those massage balls that your physio makes you buy when you turn thirty and need to to ease all your physical ailments that you suddenly have."

Dr Emily Finch talking on Ockham's Razor

Coronavirus particles and massage balls…or is it…
(Images by Ulrike Leone and Daniel Roberts from Pixabay)

Again there is an attempt to make the unfamiliar familiar. These microscopic virus particles are a bit like something familiar from everyday life. Such comparisons are useful where the everyday object is already familiar.

By now I've seen plenty of images of the coronavirus responsible for COVID, although I do not have a physiotherapist (perhaps this is a cultural difference – Australians being so sporty?) So, I found myself using this comparison in reverse – imagining that the "massage balls that your physio makes you buy" must be like larger versions of coronavirus particles. Having looked up what these massage balls (a.k.a. hedgehog balls it seems) look like, I can appreciate the similarity. Whether the manufacturers of massage balls will appreciate their products being compared to enormous coronavirus particles is, perhaps, another matter.


Work cited:
  • Taber, K. S., Billingsley, B., Riga, F., & Newdick, H. (2015). English secondary students' thinking about the status of scientific theories: consistent, comprehensive, coherent and extensively evidenced explanations of aspects of the natural world – or just 'an idea someone has'. The Curriculum Journal, 1-34. doi: 10.1080/09585176.2015.1043926

Notes:

1 At least, depending on how we understand 'thinking'. Clearly there are cognitive processes at work even when we continue a conversation 'on auto pilot' (to employ a metaphor) whilst consciously focusing on something else. Only a tiny amount of our cognitive processing (thinking?) occurs within consciousness, where we reflect and deliberate (i.e., explicit thinking?). We might label the rest as 'implicit thinking', but this processing varies greatly in its closeness to deliberation – and some aspects (for example, word recognition when listening to speech; identifying the face of someone we see) might seem not to deserve the label 'thinking'?


2 Of course the evidence for some ideas becomes so overwhelming that, in practice, we treat some theories as certain knowledge; but in principle they remain provisional knowledge. And the history of science tells us that sometimes even the most well-established ideas (e.g., Newtonian physics as an absolutely precise description of dynamics; mass and energy as distinct and discrete) may need revision in time.


3 Since I began drafting this article, the webpage for the podcast has been updated with a correction: "in this talk Dr Finch says electrons in the synchrotron are accelerated to the speed of light. They actually go just under that speed – 99.99998% of it to be exact."


4 Perversion in the sense of the distortion of an original course


5 The term nebulae is today reserved for clouds of dust and gas seen in the night sky in different parts of our galaxy. Nebulae are less distinct than stars. Many of what were originally identified as nebulae are now considered to be other galaxies immense distances away from our own.

The earth's one long-term objective

Scientist reveals what the earth has been trying to do

Keith S. Taber

Seismology – the study of the earth letting off steam? (Image by ELG21 from Pixabay)

"the earth has one objective, it has had one objective for four and half billion years, and that's…"

In our time

'In Our Time' is an often fascinating radio programme (and podcast) where Melvyn Bragg gets three scholars from a field to explain some topic to a general audience.


The programme covers various aspects of culture.

BBC 'In our time'

I am not sure if the reason that I sometimes find the science episodes a little less erudite than those in the other categories is:

  • a) Melvyn is more of an arts person, so operates at a different level in different topics;
  • b) I am more of a science person, so more likely to be impressed by learning new things in non-science topics, and to spot simplifications, over-generalisations, and so forth, in science topics;
  • c) a focus in recent years on the importance of the public understanding of science and science communication means that scientists may (often, not always) be better prepared and skilled at pitching difficult topics for a general audience;
  • d) topics from subjects like history and literature are easier to talk about to a general audience than many science topics, which are often highly conceptual and technical.

Anyway, today I did learn something from the episode on seismology ("Melvyn Bragg and guests discuss how the study of earthquakes helps reveal Earth's secrets [sic]"). I was told what the earth had been up to for the last four and a half billion years…

Seismology: Where does this energy come from?

Quite early in the discussion Melvyn (sorry, The Lord Bragg CH – but he is so familiar from his broadcasts over the years that he seems like an old friend) interjected when Dr James Hammond (Reader in Geophysics at Birkbeck, University of London) was talking about forces involved in plate tectonics to ask "Where does this energy come from?". To this, Dr Hammond replied,

"The whole thing that drives the whole caboose?

It comes from plate tectonics. So, essentially the earth has one objective, it has had one objective for four and half billion years, and that's to cool down. We're [on] a big lump of rock floating in space, and it's got all this primordial energy, so we are going right back here, there's all this primordial energy from the material coming together, and it's trying to cool down."

Dr James Hammond talking on 'In Our Time' 1

My immediate response, was that this was teleology – seeing purpose in nature. But actually, this might be better described as anthropomorphism. This explanation presents the earth as being the kind of agent that has an objective, and which can act in the world to work towards goals. That is, like a human:

  • The earth has an objective.
  • The earth tries to achieve its objective.

Read about teleology

Read about anthropomorphism

A flawed scientific account?

Of course, in scientific terms, the earth has no such objective, and it is not trying to do anything as it is inanimate. Basic thermodynamics suggests that an object (e.g., the earth) that is hotter than its surroundings will cool down as it will radiate heat faster than it absorbs it. 2 (Of course, the sun is hotter than the earth – but that's a rather minority component of the earth's surroundings, even if in some ways a very significant one.) Hot objects tend to cool down, unless they have an active mechanism to maintain their temperature above their ambient backgrounds (such as 'warm-blooded' creatures). 3

So, in scientific terms, this explanation might be seen as flawed – indeed as reflecting an alternative conception of similar kind as when students explain evolutionary adaptations in terms of organisms trying to meet some need (e.g., The brain thinks: grow more fur), or explain chemical processes in terms of atoms seeking to meet a need by filling their electron shells (e.g., Chlorine atoms share electrons to fill in their shells).

Does Dr Hammond really believe this account?

Does Dr Hammond really think the earth has an objective that it actively seeks to meet? I very much doubt it. This was clearly rhetorical language, adopting tropes seen as appropriate to meet the needs of the context (a general audience, a radio programme with no visuals to support explanations). In particular, he was in full flow when he was suddenly interrupted by Melvyn, a bit like the annoying child who interrupts the teacher's carefully prepared presentation by asking 'but why's that?' about something it had been assumed all present would take for granted.

Imagine the biology teacher trying to discuss cellular metabolism when young Melvyn asks 'but where did the sugar come from?'; or the chemistry teacher discussing the mechanism of a substitution reaction when young Melvyn asks why we are assuming tetrahedral geometry around the carbon centre of interest; or young Melvyn interrupting a physics teacher's careful exposition of why pV = 1/3 nmc² by asking how the gas molecules came to be moving in the first place.

Of course, part of Melvyn's job in chairing the programme IS to act as the child who does not understand something being taken for granted and not explained, so vicariously supporting the listener without specialist background in that week's topic.

Effective communication versus accurate communication?

Science teachers and communicators sometimes have to use ploys to 'make the unfamiliar familiar'. One common ploy is to employ an anthropomorphic narrative, as people readily relate to the human experience of having goals and acting to meet needs and desires. Locating difficult ideas within such a 'story' framework is known to often make such ideas more accessible. Does this gain balance the potential to mislead people into thinking they have been given a scientific account? In general, such ploys are perhaps best used only as introductions to a difficult topic, introductions which are then quickly followed up by more technical accounts that better match the scientific narrative (Taber & Watts, 1996).

Clearly, that is more feasible when the teacher or communicator has the opportunity for a more extensive engagement with an audience, so that understanding can be built up and developed over time. I imagine Dr Hammond was briefed that he had just a few minutes to get across his specific points in this phase of the programme, only to then find he was interrupted and asked to address additional background material.

As a scientist, the notion of the earth spending billions of years trying to cool down grates as it reflects pre-scientific thinking about nature and acts as a pseudo-explanation (something which has the form of an explanation, but little substance).

Read about pseudo-explanations

As cooling is a very familiar everyday phenomenon, I wondered if a basic response that would avoid anthropomorphism might have served, e.g.,

When the earth formed, it was very much hotter than today, and, as it was hotter than its surroundings, it has been slowly cooling ever since by radiating energy into space. Material inside the earth may be hot enough to be liquid, or – where solid – be plastic enough to be deformed. The surface is now much cooler than it was, but inside the earth it is still very hot, and radioactive processes continue to heat materials inside the earth. We can understand seismic events as driven by the ways heat is being transferred from deep inside the earth.

However, precisely because I am a scientist, I am less well-placed to know how effective this might have been for listeners without a strong science background – who may well have warmed [sic] to the earth striving to cool.

Dr Hammond had to react instantly (as a school teacher often has to) and make a quick call based on his best understanding of the likely audience. That is one of the differences between teaching (or being interviewed by Melvyn) and simply giving a prepared lecture.

Work cited:

Taber, K. S., & Watts, M. (1996). The secret life of the chemical bond: students' anthropomorphic and animistic references to bonding. International Journal of Science Education, 18(5), 557-568.

Notes

1 Speech often naturally has repetitions, and markers of emphasis, and hesitations that seem perfectly natural when heard, but which do not match written language conventions. I have slightly tidied what I transcribed from:

"The whole thing that drives the whole caboose? It comes from plate tectonics, right. So, essentially the earth, right, has one objective, it has had one objective for four and half billion years, and that's to cool down. Right, we're a big lump of rock floating in space, and it's got all this primordial energy, so we are going right back here, there's all this primordial energy from, from the the material coming together,4 and it's trying to cool down."

2 In simple terms, the hotter an object is, the greater the rate at which it radiates.

The hotter the environment is, the more intense the radiation incident on the object and the more energy it will absorb.

Ultimately, in an undisturbed, closed system everything will reach thermal equilibrium (the same temperature). Our object still radiates energy, but at the same rate as it absorbs it from the environment so there is no net heat flow.
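For readers who want this note quantified (my addition – the standard Stefan-Boltzmann relation, not part of the original post): a body of surface area $A$, emissivity $\varepsilon$ and absolute temperature $T$, in surroundings at temperature $T_{\text{env}}$, has a net radiative output

$P_{\text{net}} = \varepsilon \sigma A \,(T^4 - T_{\text{env}}^4)$

which is positive (net cooling) when $T > T_{\text{env}}$, and zero at thermal equilibrium, just as described above.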

3 Historically, the earth's cooling was an issue of some scientific controversy, after Lord Kelvin (William Thomson) calculated that, if the earth was cooling at the rate his models suggested for a body of its mass, then it was cooling much too rapidly for the kinds of timescales that were thought to be needed for life to have evolved on earth.

4 This is referring to the idea that the earth was formed by the coming together of material (e.g., space debris from a supernova) by its mutual gravitational attraction. Before this happens the material can be considered to be in a state of high gravitational potential energy. As the material is accelerated together it acquires kinetic energy (as the potential energy reduces), and then when the material collides inelastically it forms a large mass of material with high internal energy (relating to the kinetic and potential energy of the molecules and ions at the submicroscopic level) reflected in a high temperature.
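To put a rough figure on that 'primordial energy' (my addition – a standard order-of-magnitude estimate, not something from the programme): the gravitational potential energy released in assembling a uniform sphere of mass $M$ and radius $R$ from widely dispersed material is about

$\Delta E \approx \dfrac{3}{5}\dfrac{GM^2}{R}$

which for the earth ($M \approx 6 \times 10^{24}$ kg, $R \approx 6.4 \times 10^6$ m) comes to of the order of $2 \times 10^{32}$ J – energy that ended up as the internal energy of the newly formed planet, which it has been radiating away ever since.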

A discriminatory scientific analogy

Animals and plants as different kinds of engines

Keith S. Taber

Specimens of two different types of natural 'engines'.
Portrait of Sir Kenelm Digby, 1603-65, by Anthony van Dyck (from Wikimedia Commons, the free media repository)

In this post I discuss a historical scientific analogy used to discuss the distinction between animals and plants. The analogy was used in a book which is said to be the first major work of philosophy published in the English language, written by one of the founders of The Royal Society of London for Improving Natural Knowledge ('The Royal Society'), Sir Kenelm Digby.

Why take interest in an out-of-date analogy?

It is quite easy to criticise some of the ideas of early modern scientists in the light of current scientific knowledge. Digby had some ideas which seem quite bizarre to today's reader, but perhaps some of today's canonical scientific ideas, and especially more speculative theories being actively proposed, may seem equally ill-informed in a few centuries time!

There is a value in considering historical scientific ideas, in part because they help us understand a little about the path that scientists took towards current scientific thinking. This might be valuable in avoiding the 'rhetoric of conclusions', where well-accepted ideas become so familiar that we come to take them for granted, and fail to appreciate the ways in which such ideas often came to be accepted in the face of competing notions and mixed experimental evidence.

For the science educator there are added benefits. It reminds us that highly intelligent and well motivated scholars, without the value of the body of scientific discourse and evidence available today, might sensibly come up with ideas that seem today ill-conceived, sometimes convoluted, and perhaps even foolish. That is useful to bear in mind when our students fail to immediately understand the science they are taught and present with alternative conceptions that may seem illogical or fantastic to the teacher. Insight into the thought of others can help us consider how to shift their thinking and so can make us better teachers.

Read about historical scientific conceptions

Analogies as tools for communicating science

Analogies are used in teaching and in science communication to help 'make the unfamiliar familiar', to show someone that something they do not (yet) know about is actually, in some sense at least, a bit like something they are already familiar with. In an analogy, there is a mapping between some aspect(s) of the structure of the target ideas and the structure of the familiar phenomenon or idea being offered as an analogue. Such teaching analogies can be useful to the extent that someone is indeed highly familiar with the 'analogue' (and more so than with the target knowledge being communicated); that there is a helpful mapping across between the analogue and the target; and that comparison is clearly explained (making clear which features of the analogue are relevant, and how).

Read about scientific analogies

Nature made engines

Digby presents his analogy for considering the difference between plants and animals in his treatise on bodies, the first part of his comprehensive text known as his 'Two Treatises', completed in 1644, in which he sets out something of a system of the world.1 Although, to a modern scientific mind, many of Digby's ideas seem odd, and his complex schemes sometimes feel rather forced, he shared the modern scientific commitment that natural phenomena should be explained in terms of natural causes and mechanisms. (That is certainly not to suggest he was an atheist – he was a committed Roman Catholic – but he assumed that nature had been set up to work without 'occult' influences.)

Before introducing an analogy between types of living things and types of engines, Digby had already prepared his readers by using the term 'engine' metaphorically to refer to living things. He did this after making a distinction between matter dug out of the ground as a single material, and other specimens which, although highly compacted into single bodies of material, clearly comprised "differing parts" that did not work together to carry out any function, and seemed to have come together by "chance and by accident". In such cases, the separate parts could be "severed from [one] another" without destroying any underlying harmonious whole – unlike in living things, where removed parts tend to stop functioning. He contrasted these accidental complexes with,

"other bodies in which this manifest and notable difference of parts, carries with it such subordination of one of them unto another, as we cannot doubt but that nature made such engines (if so I may call them) by design; and intended that this variety should be in one thing; whole unity and being what it is, should depend of the harmony of the several differing parts, and should be destroyed by their separation".

Digby emphasising the non-accidental structure of living things (language slightly tidied for a modern reader).

Digby was writing long before Charles Darwin's work, and accepted the then widely shared idea that there was design in nature. Today this would be seen as teleological, and not appropriate in a scientific account. A teleological account can be circular (tautological) if the end result of some process is explained as due to that process having a purpose. [Consider the usefulness as an 'explanation' of 'organisms tend to become more complex over time as nature strives for complexity'. 2]

Read about teleology

Scientists today are expected to offer accounts which do not presuppose endpoints. That does not mean that a scientist cannot believe there is purpose in the world, or even that the universe was created by a purposeful God – simply that scientific accounts cannot 'cheat' by using arguments that something happens because God wished it, or nature was working towards it. That is, it should not make any difference whether a scientist believes God is the ultimate cause of some phenomenon (through creating the world, and setting up the laws of nature), as science is concerned with the natural 'mechanisms' and causes of events.

Read about science and religion

Two types of engines

In the part of his treatise on bodies that concerns living things, Digby gives an account of two 'engines' he had seen many years before when he was travelling in Spain. This was prior to the invention of the modern steam engine, and these engines were driven by water (as in water mills). 3

Digby introduces two machines which he considers illustrate "the natures of these two kinds of bodies [i.e., plants and animals]"

He gives a detailed account of one of the engines, explaining that the mechanism has one basic function – to supply water to an elevated place above a river.

His other engine example (apparently recalled in less detail – he acknowledges having a "confused and cloudy remembrance") was installed in a mint, where it had a number of different functions, including:

  • producing metal of the correct thickness for coinage
  • stamping the metal with the coinage markings
  • cutting the coins from the metal
  • transferring the completed coins into the supply room.

These days we might see it as a kind of conveyor belt moving materials through several specialist processes.

Different classes of engine

Digby seems to have considered this a superior sort of engine to the single-function example.

For Digby, the first type of engine is like a plant,

"Thus then; all sorts of plants, both great and small, may be compared to our first engine of the waterwork at Toledo, for in them all the motion we can discern, is of one part transmitting unto the next to it, the juice which it received from that immediately before it…"

Digby comparing a plant to a single function machine

The comments here about juice may seem a bit obscure; Digby has an extended explanation (over several pages) of how the growth and structure of a plant are based on a single kind of vascular tissue and a one-way transport of liquid. 4 Liquid rises up through the plant just as it was raised up by the mechanism at Toledo.

The multi-function 'engine' (perhaps ironically better considered in today's terms as an industrial plant!) is however more like an animal,

"But sensible living creatures, we may fitly compare to the second machine of the mint at Segovia. For in them, though every part and member be as it were a complete thing of itself, yet every one requires to be directed and put on in its motion by another; and they must all of them (though of very different natures and kinds of motion) conspire together to effect any thing that may be for the use and service of the whole. And thus we find in them perfectly the nature of a mover and a moveable; each of them moving differently from one another, and framing to themselves their own motions, in such sort as is more agreeable to their nature, when that part which sets them on work hath stirred them up.

And now because these parts (the movers and the moved) are parts of one whole; we call the entire thing automaton or…a living creature".

Digby comparing animals to more complex machines (language slightly tidied for a modern reader)

So plants were to animals as a single purpose mechanism was to a complex production line.

Animals as super-plants

Digby thought animals and plants shared the key characteristics of generation (we would say reproduction), nutrition, and augmentation (i.e., growth), as well as suffering sickness, decay and death. But Digby did not just think animals were different from plants, but that they were a superior kind.

He explains this both in terms of the animal having functions that he did not believe applied to plants,

And thus you see this plant [sic] has the virtue both of sense or feeling; that is, of being moved and affected by external objects lightly striking upon it; as also of moving itself, to or from such an object; according as nature shall have ordained.

but he also related this to animals being more complex. Whereas the plant was based on a vascular system involving only one fluid, this super-plant-like entity had three. In summary,

this plant [sic, the animal] is a sensitive creature, composed of three sources, the heart, the brain, and the liver: whose are the arteries, the nerves, and the veins; which are filled with vital spirits, with animal spirits, and with blood: and by these the animal is heated, nourished, and made partaker of sense and motion.

A historical analogy to explain the superiority of animals to plants

[The account here does not seem entirely consistent with other parts of the book, especially if the reader is supposed to associate a different fluid with each of the three systems. Later in the treatise, Digby refers to Harvey's work on the circulation of the blood (including to the liver), with blood leaving the heart through arteries, and veins returning blood to the heart. His discussion of sensory nerves suggests they contain 'vital spirits'.]

Some comments on Digby's analogy

Although some of this detail seems bizarre by today's standards, Digby was discussing ideas about the body that were fairly widely accepted. As suggested above, we should not criticise those living in previous times for not sharing current understandings (just as we have to hope that future generations are kind to our reasonable mistakes). There are, however, two features of this use of analogy I thought worth commenting on from a modern point of view.

The logic of making the unfamiliar familiar

If such analogies are to be used in teaching and science communication, then they are a tactic we can use to 'make the unfamiliar familiar', that is to help others understand what are sometimes difficult (e.g., abstract, counter-intuitive) ideas by pointing out they are somewhat like something the person is already familiar with and feels comfortable that they understand.

Read about teaching as 'making the unfamiliar familiar'

In a teaching context, or when a scientist is being interviewed by a journalist, it is usually important that the analogue is chosen so it is already familiar to the audience. Otherwise either the analogy does not help explain anything, or time has to be spent first explaining the analogy, before it can be employed.

In that sense, then, we might question Digby's example as not being ideal. He has to exemplify the two types of machine he is setting up as the analogue before he can draw the analogy. Yet this is not a major problem here, for two reasons.

Firstly, a book affords a generosity to an author that may not be available to a teacher or a scientist talking to a journalist or public audience. Reading a book (unlike a magazine, say) is a commitment to engagement in depth and over time, and a reader who is still with Digby by his Chapter 23 has probably decided that continued engagement is worth the effort.

Secondly, although most of his readers will not be familiar with the specific 'engines' he discusses from his Spanish travels, they will likely be familiar enough with water mills and other machines and devices to readily appreciate the distinction he makes through those examples. The abstract distinction between two classes of 'engine' is therefore clear enough, and can then be used as an analogy for the difference between plants and animals.

A biased account

However, today we would not consider this analogy to be applicable, even in general terms, leaving aside the now discredited details of plant and animal anatomy and physiology. An assumption behind the comparison is that animals are superior to plants.

In part, this is explained in terms of plants' apparent lack of sensitivity (later, 'irritability' would be added as a characteristic of living things, shared by plants) and their lack of ability to get around, and so not being able to cross the room to pick up some object. In part, this may be seen as an anthropocentric notion: as humans who move around and can handle objects, it clearly seems to us, with our embodied experience of being in the world, that a form of life that does not do this (n.b., does not NEED to do this) is inferior. This is a bit like the argument that bacteria are primitive forms of life as they have evolved so little (a simplification, of course) over billions of years: which can alternatively be understood as showing how remarkably adapted they already were, to be able to successfully occupy so many niches on earth without changing their basic form.

There is also a level of ignorance about plants. Digby saw the plant as having a mechanism that moved moisture from the soil through the plant, but had no awareness of the phloem (only named in the nineteenth century), which means that transport in a plant is not all in one direction. He also did not seem to appreciate the complexity of seasonal changes in plants, which are much more involved than a mechanism carrying out a linear function (like lifting water to a privileged person who lives above a river). He saw much of the variation in plant structures as passive responses to external agents. His ideas about human physiology are also flawed by today's standards, of course.

Moreover, in Digby's scheme (from simple minerals dug from the ground, to accidentally compacted complex materials, to plants and then animals) there is a clear sense of that long-standing notion of hierarchy within nature.

The great chain of being

That is, the great chain of being, which is a system for setting out the world as a kind of ladder of superior and inferior forms. Ontology is sometimes described as the study of being, and typologies of different classes of entities are sometimes referred to as ontologies. The great chain of being can be understood as a kind of ontology distinguishing the different types of things that exist – and ranking them.

Read about ontology

In this scheme (or rather schemes, as various versions with different levels of detail and specificity had been produced – for example discriminating the different classes of angels) minerals come below plants, which come below animals. To some extent Digby's analogy may reflect his own observations of animals and plants leading him to think animals were collectively and necessarily more complex than plants. However, ideas about the great chain of being were part of common metaphysical assumptions about the world. That is, most people took it for granted that there was such hierarchy in nature, and therefore they were likely to interpret what they observed in those terms.

Digby presented the increase in complexity in moving from plant to animal as a similar kind of step-up to that in moving from inorganic material to plants,

But a sensitive creature, being compared to a plant, [is] as a plant is to a mixed [inorganic] body; you cannot but conceive that he must be compounded as it were of many plants, in like sort as a plant is of many mixed bodies.

Digby, then, was surely building his scheme upon his prior metaphysical commitments. Or, as we might say these days, his observations of the world were 'theory-laden'. So, Digby was not only offering an analogy to help discriminate between animals and plants, but was discriminating against plants in assuming they were inherently inferior to animals. I think that is a bias that is still common today.

Work cited:
  • Digby, K. (1644/1645). Two Treatises: In the one of which, the nature of bodies; In the other, the nature of mans soule, is looked into: in ways of the discovery of the immortality of reasonable soules. London: John Williams.
  • Digby, K. (1644/2013). Two Treatises: Of Bodies and of Man's Soul (P. S. MacDonald Ed.): The Gresham Press.
  • Taber, K. S. & Watts, M. (2000) Learners' explanations for chemical phenomena, Chemistry Education: Research and Practice in Europe, 1 (3), pp.329-353. [Free access]
Notes:

1 This is a fascinating book with many interesting examples of analogies, similes, metaphor, personification and the like, and an interesting early attempt to unify forces (here, gravity and magnetism). (I expect to write more about this over time.) The version I am reading is a 2013 edition (Digby, 1644/2013) which has been edited to offer consistent spellings (as that was not something many authors or publishers concerned themselves with at the time). The illustrations, however, are from a facsimile of an original publication (Digby, 1644/1645: which is now out of copyright so can be freely reproduced).

2 Such explanations may be considered as a class of 'pseudo-explanations': that give the semblance of explanation without actually explaining very much (Taber & Watts, 2000).

3 The aeolipile (e.g., Hero's engine) was a kind of steam engine – but was little more than a novelty: water was boiled in a vessel that was free to rotate and had suitably directed outlets, so the escaping steam caused it to spin. However, the only 'useful' work done was in turning the engine itself.

4 This relates to his broader theory of matter which still invokes the medieval notion of the four elements, but is also an atomic theory involving tiny particles that can pass into apparently solid materials due to pores and channels much too small to be visible.

How is a well-planned curriculum like a protein?

Because it has different levels of structure providing functionality

Keith S. Taber

I have been working on a book about pedagogy, and was writing something about sequencing teaching. I was setting out how well-planned teaching has a structure that has several levels of complexity – and I thought a useful analogy here (as the book is primarily aimed at chemistry educators) might be protein structure.

Proteins can have very complex structures. (Image by WikimediaImages from Pixabay)

Proteins are usually considered to have at least three, and often four, levels of structure. Protein structure is not just of intellectual interest, but has critical functional importance. It is the shape, or conformation, of the protein molecule which allows it to carry out its function. Now, I should be careful here, as I am well aware (and have discussed on the site) how the language we often use when discussing organisms can seem teleological.

Read about teleology

We analyse biological structures and processes, and when considering the component parts can see them as having some function in relation to that overall structure or process. That can give the impression of purpose – as though someone designed the shape of the protein with a particular function in mind – and so of teleological thinking: seeing nature as having a purpose. The scientific understanding is that proteins, with their complex shapes that are just right for their observed functions, have been subject to natural selection over a very long period – evolving along with the structures and processes they are part of.

The importance of protein shape

The shape of a protein can allow it to act as a catalyst enabling, say, a polysaccharide to break down into simple sugars at body temperature, and at a rate that can support an organism's metabolism (when the rate without the enzyme would only give negligible amounts of product). The shape of a protein, as in haemoglobin, may allow a complex to exist which either binds with oxygen or releases it depending on the local conditions in different parts of the body. And so forth.
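To give a sense of scale (a back-of-the-envelope illustration with an arbitrary figure, not data for any particular enzyme): by the Arrhenius relation, and assuming the pre-exponential factor is unchanged, a catalyst that lowers the activation energy by 40 kJ/mol at body temperature (310 K) accelerates a reaction by a factor of millions:

$$\frac{k_{\mathrm{cat}}}{k_{\mathrm{uncat}}} = e^{\Delta E_a/RT} = e^{40\,000/(8.314\times310)} \approx e^{15.5} \approx 5\times10^{6}$$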

Now, chemically, proteins take the form of polyamides – substances that can be understood to have a molecular structure of connected amide units (above left, source: Wikipedia) in a long chain that results from polymerising amino acid units (amino acid structure shown above right, source: Wikipedia). An amino acid molecule has two functional groups – an amine group (-NH2), which allows the compound to react with carboxylic acids (including other amino acids), and a carboxylic acid group (-COOH), which allows the compound to react with amines (including other amino acids). So, amino acids can polymerise, as each amino acid molecule has two sites that can be loci for the reaction.

Molecular structure of a compound formed by four amino acids – the peptide linkage (highlighted orange) is formed from part (-CO-) of the acid group (-COOH, as outlined in red) of one amino acid molecule with part (-NH-) of the amine group (-NH2, as outlined in cyan) of another amino acid molecule (which may be of the same or a different amino acid). In proteins the chains are much longer. Original image from Wikipedia
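The condensation arithmetic implied here is simple enough to sketch in code (a minimal illustration: the masses are approximate average molecular masses, and the function name and choice of residues are mine, not taken from the figure):

```python
# A sketch of the condensation arithmetic: each peptide link forms with
# the loss of one water molecule, so a peptide's mass is the sum of the
# free amino acid masses minus one water per link.

AMINO_ACID_MASS = {  # approximate average molecular masses, g/mol
    "glycine": 75.07,
    "alanine": 89.09,
    "serine": 105.09,
}
WATER_MASS = 18.02  # g/mol

def peptide_mass(residues):
    """Mass of a linear peptide built from the named amino acids."""
    links = len(residues) - 1  # n residues are joined by n-1 peptide links
    return sum(AMINO_ACID_MASS[r] for r in residues) - links * WATER_MASS

# A compound formed from four amino acids (three peptide links), as in the figure:
print(peptide_mass(["glycine", "alanine", "serine", "glycine"]))  # ≈ 290.26 g/mol
```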

Special examples of polyamides

So, proteins are polyamides. But this does not mean that polyamides are proteins – in the same way that chemistry Nobel prize winners are scientists, but not all scientists are Nobel laureates. So, being a polyamide is a necessary, but not a sufficient, condition for being a protein. For example, nylons are also polyamides, but are not proteins. 1

Proteins tend to be very complex polyamides, which are built up from a number of different amino acids (of which 20 are found in proteins). Each amino acid has a different molecular structure – there is the common feature which allows the peptide linkages to form, but each amino acid also has a different side chain or 'residue' as part of its molecule. But just being a large, complex polypeptide built from a selection of those 20 amino acids does not necessarily lead to a protein found in living things. The key point about the protein is that its very specific shape allows it to have the function it does. Indeed there are many billions of polyamide structures of similar complexity to naturally found proteins which could exist (and perhaps do somewhere), but which have no role in living organisms (on this planet at least!)
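Indeed, 'many billions' is an understatement, as a quick count shows (a trivial calculation; the chain lengths here are chosen purely for illustration):

```python
# Number of distinct linear sequences built from the 20 proteinogenic
# amino acids, as a function of chain length. Even modest chains vastly
# outnumber the proteins actually found in organisms.
for length in (10, 50, 100):
    print(f"chain of {length} residues: 20**{length} = {20**length:.3e} sequences")
```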

A simple teaching analogy often used to explain enzyme specificity is that of a lock and key. Whilst somewhat simplistic, if we consider that the protein has to have just the right shape to 'fit' the 'substrate' molecule then it is clear that the precise shape is important. A key that opens a door lock has to be precisely shaped. (The situation with an enzyme is actually more demanding, as the molecule can change its shape according to whether a substrate is bound – so it needs to be the right shape to bind to the substrate molecule, and then the right shape to release the product molecule.)

So a functioning protein molecule has a very specific shape, indeed sometimes a specific profile of shapes as it interacts with other molecules, and this can be understood to arise from several levels of structure.

Four levels of structure

The primary structure is the sequence of amino acid residues along the polypeptide skeleton.

The amino acid sequence in polypeptide chains in human insulin (with the amino acids represented by conventional three letter abbreviations) – image from Saylor Academy, 2012 open access text: The Basics of General, Organic, and Biological Chemistry

The chain is not simply linear, or a zigzag shape (as we might commonly represent an organic molecule based on a chain of carbon atoms). Rather, the interactions between the peptide units cause the chain to form a more complex three-dimensional structure, such as a helix. This is the secondary structure.

Protein chains tend to form into shapes such as helices (this example: crystal structure of the DNA-binding protein Sso10a from Sulfolobus solfataricus; from the Protein Data Bank, PDB DOI: 10.2210/pdb4AYA/pdb)

Because the secondary structure allows the amino acid residues on different parts of the chain to be close together, interactions – forms of bonding – form between different points on the chain (as shown in the representation of the insulin structure above). This depends on the amino acid sequence, as the different residues have different sizes, shapes and functional groups – so interactions will occur between particular residue pairs. This adds another level of structure.

A coiled cable can take on various overall shapes (Image by Brett Hondow from Pixabay)

Imagine taking a coiled cable (somewhat like the helical secondary structure), such as is used for some headphones, and folding this into a more complex shape. This is the tertiary structure, and gives the protein its unique shape, which in turn makes it suitable to act as an enzyme or hormone or whatever.

Proteins may be even more complex, as they may comprise complexes of several chains, closely bound together by weak chemical bonds. Haemoglobin, for example, has four such subunits arranged in a quaternary structure.

A representation of the structure of a haemoglobin protein – with the four interlinked chains shown in different colours (structure determination of haemoglobin from donkey (Equus asinus) at 3.0 Angstrom resolution, from the Protein Data Bank: PDB DOI: 10.2210/pdb1S0H/pdb)

But what has this got to do with sequencing curriculum?

When planning teaching, such as when developing a course or writing a 'scheme of work', one has to consider how to sequence the introduction of course material as well as learning activities. This can be understood to have different levels in terms of the considerations we might take into account.

A well-designed curriculum sequence has several levels of structure (ordering, building, cross-linking) affording more effective teaching

Primary structure and conceptual analysis

A fundamental question (once we have decided what falls within the scope of the course, and selected the subject matter) is how to order the introduction of topics and concepts. There is usually some flexibility here, but as some concepts are best understood in terms of other more fundamental ideas, there are more and less logical ways to go about this. 'Conceptual analysis' is the technique which is used to break down the conceptual structure of material to see what prerequisite learning is necessary before discussing new material.

For example, if we wish to teach for understanding then it probably does not make sense to introduce double bonds before the concept of covalent bonds, or neutralisation before teaching something about acids, or d-level splitting before introducing ideas about atomic orbitals, or the rate determining step of a reaction before teaching about reaction rate. In biology, it would not make sense to teach about mitochondria before the concept of cells had been introduced. In physics, one would not seek to teach about conservation of momentum, before having introduced the concept of momentum. The reader can probably think of many more examples. The sequence of quanta of subject matter in the curriculum sequence can be considered a first level of curriculum structure.
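This kind of prerequisite ordering can even be treated formally: if the conceptual analysis is recorded as a mapping from each concept to its prerequisites, a topological sort yields a teaching order in which nothing is introduced before the ideas it depends on. A minimal sketch (the particular prerequisite map below is just an illustration drawn from the examples above, not a worked-out scheme of work):

```python
# Conceptual analysis as a dependency graph: each concept maps to the set
# of concepts that should be taught first. A topological sort then gives
# a logically valid teaching sequence (one of possibly many).
from graphlib import TopologicalSorter

prerequisites = {
    "covalent bonds": set(),
    "double bonds": {"covalent bonds"},
    "acids": set(),
    "neutralisation": {"acids"},
    "atomic orbitals": set(),
    "d-level splitting": {"atomic orbitals"},
    "reaction rate": set(),
    "rate determining step": {"reaction rate"},
}

teaching_order = tuple(TopologicalSorter(prerequisites).static_order())
print(teaching_order)  # prerequisites appear before the topics that build on them
```

Of course, many valid orders usually exist; conceptual analysis narrows the options, and pedagogical judgement chooses among them.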

Secondary structure and the spiral curriculum

We also revisit topics periodically at different levels of treatment. We introduce topics at an introductory level – and later offer more sophisticated accounts (atomic structure, acidity, oxidation…). We distinguish metals from non-metals, and later introduce electronegativity. We distinguish ionic and covalent bonds, and later introduce degrees of bond polarity. In recent years this has been reflected in work on developing model 'learning progressions' that support students in developing more sophisticated scientific thinking over several grade levels.

From Taber, 2021

This builds upon the well-established idea of a 'spiral curriculum' (Bruner, 1960), where the learner revisits topics at increasing levels of sophistication over their student career. So, here is a level of structure beyond the linear progression of topics covered in different sessions, encompassing revisiting the same topic at different turns of the 'spiral' (perhaps like the alpha helices formed in many proteins).

This already suggests there will be linkages across the 'chain' of teaching units (whether seen as lectures/lessons or lesson episodes), as references are made back to earlier teaching in order to draw upon more fundamental ideas in building up more complex ones, and to build on simplified accounts in developing more nuanced and sophisticated accounts.

Tertiary structure – drip feeding to reinforce learning

The skilled teacher will also be making other links that are not strictly essential, but are useful [indeed, unless the students have exemplary study skills, they usually ARE essential!].

To support students in consolidating learning (something that is usually essential if we want them to remember the material and be able to apply it months later), the teacher will 'drip feed' reinforcement of prior learning by looking for opportunities to revise key points from earlier teaching.

We have defined what we mean by 'compound' or 'oxidising agent' or 'polymer', so now we spot opportunities to reinforce this whenever it seems sensible to do so in teaching other material. We have taught students to calculate molecular mass, or assign oxidation states, or recognise a Lewis acid – so we look for opportunities to ask students to rehearse and apply this knowledge in contexts that arise in later teaching. At the end of a previous lesson everyone seemed to understand the difference between respiration and breathing – but it is sensible to find opportunities for them to rehearse the distinction. 2

There is then a level of structure due to linkages back and forth between the components of the teaching sequence.

So, where the 'primary structure' is necessary to build up knowledge in a logical way so that the teaching scheme provides a coherent learning experience (teaching makes sense at the time), and the secondary structure allows progression toward more sophisticated accounts and models as students develop, the 'tertiary structure' offers reinforcement of learning to ensure the course functions as an effective long-term learning experience (that what was taught is not just understood at the time, but is retained, readily brought to mind in relevant contexts, and can be applied, over the longer term).

Quaternary structure – locating the course in the wider curriculum experience

What about quaternary structure? Well, commonly a student is not just attending one class or lecture course. Their curriculum consists of several different strands of teaching experiences. At upper secondary school level, for example, the learner may attend chemistry classes interspersed with physics classes, biology classes and mathematics classes. Their experience of the curriculum encompasses these different strands. Likely, there are both salient and less obvious potential linkages between these different courses. Conservation of energy from physics applies in chemistry and biology. Enzymes are catalysts, so the characteristics of catalysts apply to them. The nature of hydrogen bonds may be taught in chemistry – and applied in biology. In that case, it would be useful for the learners if that concept was taught in the chemistry class before it was needed in biology.

And just as there may be aspects of logical sequencing of ideas across the strands to be considered, there may be other potential links where the teacher in one subject can draw upon, exemplify, or provide opportunities to review, what has been taught in the other.

Level of structure | Feature of sequencing
primary structure | logical sequencing of concepts to identify, and later build on, prerequisites
secondary structure | spiral curriculum to build up sophistication of understanding
tertiary structure | cross-linking between lessons along a strand to reinforce learning by finding opportunities to revisit, review, and apply prior learning
quaternary structure | cross-links between courses to build up integrated (inter-*)disciplinary knowledge

Levels of structure in a well-designed curriculum

(* in a degree course this may be coordinating different lecture courses within a discipline; in a school context this may be relating different curriculum subjects)

Afterword

How seriously do I intend this comparison? Of course this is just an analogy. It is easy to see that it does not hold up to detailed analysis – there are more ways in which curriculum structure is unlike protein structure than ways in which it is alike, and the kinds of units and links being discussed in the two cases are of very different natures.

Is there any value in such a comparison if the analogy is somewhat shallow? Well, devices such as analogies operate as thinking tools. Most commonly we use teaching analogies to help 'make the unfamiliar familiar' by showing how something unfamiliar is somewhat like something familiar. This can be a useful first stage in helping someone understand some new phenomenon or concept.

In teaching science we commonly make analogies with everyday phenomena to help introduce abstract science concepts. Here I am using a scientific concept (protein structure) as the analogue for the target idea about sequencing teaching.

Read about scientific analogies

My motivation here was to prompt teachers (and others who might read the book when it is finished) who are already familiar with general ideas about curriculum and schemes of work to think about a parallel (albeit, perhaps a somewhat forced one?) with something rather different but likely already very familiar – protein structure. Chemists and science teachers are likely to already appreciate the different levels of structure in proteins, and how the different aspects of the nature of polypeptide chains and the links formed between amino acid residues inform the overall shape, and therefore the functionality, of the structure.

Perhaps this thinking tool will entice readers to think about how conceptual links within and between courses of study can support the functionality of teaching? Perhaps they will dismiss the comparison, pointing out various ways in which the level of structure in a well-planned curriculum are quite different from the levels of structure in a protein. Of course, if they can do that insightfully, I might suspect that this 'teaching analogy' will have done its job.

Work cited:
  • Bruner, J. S. (1960). The Process of Education. Cambridge, MA: Harvard University Press.
Notes:

1 Sometimes the term polyamide is reserved for synthetic compounds and contrasted with polypeptides as natural products.

2 This can be useful even when students 'seem' to have grasped key ideas. When they remember that 'everything is made of atoms' we may not appreciate they think that implies chemical bonds contain atoms. When they seem to have understood that cellular metabolism depends upon respiration, we may not appreciate they think that this does not apply to plants when the sun is shining.

How fat is your memory?

A chemical analogy for working memory

Keith S. Taber

This posting has nothing to do with the chemical composition of your brain (where lipids do play an important part), nor with diet – such as the claims made for the merits of 'omega-3 fatty acids'.

Rather, I am going to suggest how a chemical structure can provide an analogy for thinking about working memory.

In a sense, working memory is a bit like triglyceride structure (which is only a useful comparison for those who already know about the chemistry being referenced)

As noted earlier, teaching analogies work through a mapping between aspect(s) of the structure of the target ideas and the structure of a familiar analogue, and are useful to the extent that the analogue really is the more familiar, the mapping is helpful, and the comparison is clearly explained.

So, the similarity is in terms of conceptual structures. Consider the figure above, which suggests there are similarities between aspects of the concept of working memory and aspects of the concept of triglyceride structure. In this case the analogy is at quite an abstract level – so it is only likely to be useful for more advanced learners (such as science graduates preparing for teaching, for example).

In relation to science, we might distinguish between several classes of analogy. In teaching we are most likely to be explaining some target scientific idea in terms of an everyday idea or phenomenon. However, sometimes one scientific idea which is already well-established is used as the analogue by which to explain a less familiar scientific idea. This can happen in science teaching or science communication to the public – but can also be employed by scientists themselves when communicating new ideas to their peers. It is also possible that sometimes a scientific idea may be useful as an analogue for explaining some target idea from outside science (as long, of course, as the science is familiar to the audience for the analogy).

Read about science analogies

An analogy for science teachers

My professional life has basically encompassed teaching about three broad areas – teaching natural science (mainly chemistry and physics) to school and college learners; teaching educational ideas to those preparing for school teaching; and teaching about research to those setting out on research projects.

The analogy I am discussing here came to me when preparing the manuscript for a book aimed at teachers of chemistry (an audience of readers that I can reasonably assume have a high level of chemistry knowledge) and broadly about pedagogy. So, what came to mind was an analogy from science to put across an idea about what is known as working memory.

Working memory

Working memory is the name given to the faculty or apparatus we all have to support conscious thinking – when we plan, assess and evaluate, problem solve, and so forth. It is absolutely critical to our nature as deliberate thinkers. We probably do MUCH more thinking (if you allow that term in this context, if not, say, cognitive processing) pre-consciously, so without any awareness. This is the 'thinking' [or cognitive processing] that goes on in the background, much of which is quite low level, but also includes the kind of incubation of problems that leads to those sudden insights where a solution comes to us (i.e., to our conscious awareness) in a 'flash'. It is also the basis of those intuitions that we describe as 'gut feelings' and which are often powerful (and often turn out to be correct) even though we are not sure what we are basing them on.

Yet working memory is where we do the thinking that we are aware of, and it supports the stream of consciousness that is the basis of our awareness of ourselves as thinking beings. Given its importance, a very interesting finding is that although the brain potentially has a virtually inexhaustible capacity for learning new information (through so-called 'long-term memory'), the working memory itself, where we process material we are trying to learn and memorise, has a very limited capacity. Indeed, it is often said that typical working memory capacity in a normal adult is 7±2 items. And some think that may be an overestimate. So, a typical person can juggle no more than about seven items in mind at once.

There is a very important question of why such an important aspect of cognition is so limited. Is there some physical factor which has limited this, or some evolutionary contingency that is in effect an unlucky break in human evolution? There is also the intriguing suggestion that this very limited capacity may actually have survival value, and so be considered an adaptation increasing fitness.1 Whatever the reason – we have a working memory that can be considered to have only about half a dozen slots for information. (Of course, 'slots' is here a metaphor, but a useful one.)

Chunking

Each slot will take one item of information, except that we have to be careful what we mean by one 'item', as the brain acts to treat such 'items' subjectively. That is, what counts as one item for your brain may not work as one item for mine. Consider the following example:

1s²2s²2p⁶3s¹

How many 'items' is that string of symbols? Someone who saw this only as a series of numbers and letters, and who had never come across it before, would need to remember:

  • the number 1,
  • is followed by the lower-case letter s,
  • followed by the number 2,
  • which is a superscript,
  • then the number 2,
  • then the lower-case letter s,
  • then the number 2,
  • which is a superscript,

and quite likely we have exceeded memory capacity when only half way through!

But for a chemist who already knows that this particular string can be seen as the electronic structure of a sodium atom,2 it can be treated as one unit – the whole string is already represented as an integrated structure in long-term memory, from where it can be copied as 'a chunk' into working memory to occupy a single slot. So, a chemistry teacher using this information in an argument or calculation has free 'slots' for the other relevant information, whereas a student may be struggling.
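The contrast is easy to caricature in code (a toy model only: working memory is not literally a list of slots, and the 'chunk inventory' here is an invented stand-in for long-term memory):

```python
# A toy model of chunking: the number of working memory 'slots' a string
# occupies depends on which integrated structures are already available
# in long-term memory, not on the raw length of the string.

def slots_needed(symbols, known_chunks):
    """Greedily match the longest known chunk at each point;
    any symbol not covered by a chunk costs one slot."""
    slots, i = 0, 0
    while i < len(symbols):
        match = max((c for c in known_chunks if symbols.startswith(c, i)),
                    key=len, default=None)
        i += len(match) if match else 1
        slots += 1
    return slots

config = "1s22s22p63s1"              # superscripts flattened for the string form
novice_chunks = set()                # no relevant prior learning
expert_chunks = {"1s22s22p63s1"}     # the whole configuration is one chunk

print(slots_needed(config, novice_chunks))  # 12 – far beyond the ~7 slots available
print(slots_needed(config, expert_chunks))  # 1 – a single chunk
```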

Triglycerides as an analogue for working memory

It struck me that an analogy that would be familiar to many chemists and science teachers is that of triglycerides which are considered esters of glycerol (with its three alcohol groups) with fatty acids. Although this class of compounds has some commonalities, there are a great many possible different specific structures (each strictly reflecting a distinct compound). What is common is the short chain of three carbons each bonded to an ester linkage (left hand figure below). However, what those ester linkages actually link to can vary. In human milk, for example, there are a great many different triglycerides (at least of the order of hundreds) comprising a wide range of fatty acids (Winter, Hoving & Muskiet, 1993).3

The triglycerides are members of a class of compounds with common features. They can be considered to be the result of glycerol (propane-1,2,3-triol) reacting with fatty acids, where the compounds formed will depend upon the specific fatty acids. The first figure uses R as a generic symbol to show the common structure. The second figure is the simplest triglyceride type structure formed when the acid is methanoic acid. If a mixture of acids is reacted with the glycerol, the side chains need not be the same – as in the third example. Actual triglycerides found in fats and oils in organisms usually have much longer chains than in this example.

In the image above, meant to be a simple representation of the structure of triglyceride molecules, the first figure has Rs to represent any of a great many possible side chains. The shortest possible structure here just has hydrogen atoms for Rs (triformin – the second figure), but more commonly there are long aliphatic chains as suggested by the third figure – although usually the chains would be even longer. In relation to diet, a key feature of interest is whether the fats consumed are saturated, or have some degree of 'unsaturation' (i.e., the double bond shown in the middle chain of the third figure) – unsaturated fats tend to be seen as more healthy, and tend to come from plant sources.

We might consider that the molecular structure consists of the common component with three 'slots' for side chains. In principle the slots could be occupied by hydrogens (hydrogen 'atoms') or chains based on any number of carbons (carbon 'atoms').4 So the total mass of a triglyceride molecule can vary considerably, as can the number of carbon centres in a molecule.
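The scale of the variety those three slots allow is easy to illustrate (a deliberately naive count: it treats the three positions as distinguishable and ignores stereochemistry, so the number of truly distinct compounds is somewhat smaller):

```python
# Three ester 'slots', each of which may take any one of n different
# fatty acid chains: counting ordered assignments gives n**3 possibilities.
for n_fatty_acids in (5, 10, 20):
    print(f"{n_fatty_acids} fatty acids -> up to {n_fatty_acids**3} triglyceride structures")
```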

Fixed 'slots', variable content

Working memory is sometimes said to have 'slots' as well (again, to be understood metaphorically) into which information from perception or memory can be 'slotted'. We can consciously operate on the information in working memory, for example forming associations between the material in different slots. The number of slots in a person's working memory is fixed, but as information that has been well learnt can be 'chunked' into quite extensive conceptual structures, the total amount of information that can be engaged with is highly variable.

Working memory has a very limited number of 'slots' – but where extensive conceptual frameworks are already well established from prior learning a great deal of information can be engaged with as a single chunk

If a student in class is keeping in mind information that is not directly related to the task in hand, then this will 'use up' slots that are then not available for problem-solving or other tasks. Indeed, one of the skills someone with expert knowledge in a field has, but a novice does not, is determining which of the available information is likely to be peripheral or incidental rather than important to the task in hand – and, indeed, which of the important features need to be considered initially, and which can be ignored until later.

The perceived complexity of a learning task then always has to be considered in relation to the background knowledge and experience of the individual. So, at one time a person may be watching a documentary on a subject they know nothing about, in which case the information they perceive may seem unconnected, such that working memory may be occupied by very small chunks (such as the individual names of unfamiliar people being discussed). If that same person sits down to revise course notes they have developed over an extended time, and have reviewed regularly, they may be bringing to mind quite extensive conceptual structures to slot into working memory. The same working memory, with the same nominal capacity, is now engaging with a much more extensive body of information.

Work cited:
  • Winter, C. H., Hoving, E. B., & Muskiet, F. A. J. (1993). Fatty acid composition of human milk triglyceride species: Possible consequences for optimal structures of infant formula triglycerides. Journal of Chromatography B: Biomedical Sciences and Applications, 616(1), 9-24. https://doi.org/10.1016/0378-4347(93)80466-H
Footnotes

1 The logic here is that, because of chunking, working memory biases cognition towards what is already familiar, which may be an advantage in an environment which, although changing, is largely stable – so that developing and then following a stable set of survival strategies is generally advantageous.

The kind of fruit that was edible yesterday is probably edible today, and the animal that attacked the group last week is best assumed to be dangerous today as well. The peer who helped us yesterday may help us again in future if we reciprocate, and the person who tried to cheat us before is best not trusted too far today.

2 There is a strong case that the familiar designation of electronic structures in terms of discrete s, p, d and f orbitals is only strictly valid for hydrogenic (single electron) species – but the model is commonly taught and used in chemical explanations relating to multi-electron atoms.

3 Strictly there are no fatty acids 'in' the triglyceride, just as, strictly, there are no atoms in a molecule (see note 4). I am here using an economy of language which will be clear to the expert, though we risk misleading novice students if we are not careful to be precise. The triglycerides have various chain segments corresponding to a wide range of fatty acids; a wide range of fatty acids are generated by hydrolysing the triglyceride.

4 Strictly 'atomic centres': molecules do not contain atoms, as atoms are by definition discrete structures with only one nucleus – and the atomic centres in molecules are bound into a molecular structure. Again, chemists and teachers may refer to carbon atoms in the side chain knowing this is not precisely what they mean, but we should perhaps be careful to be clear when talking to learners.