Creeping bronzes

Evidence of journalistic creep in 'surprising' Benin bronzes claim


Keith S. Taber


How certain can we be about the origin of metals used in historic artefacts? (Image by Monika from Pixabay)


Science offers reliable knowledge of the natural world – but not absolutely certain knowledge. Conclusions from scientific studies follow from the results, but no research can offer absolutely certain conclusions as there are always provisos.

Read about critical reading of research

Scientists tend to know this, something emphasised for example by Albert Einstein (1940), who described scientific theories (used to interpret research results) as "hypothetical, never completely final, always subject to question and doubt".

When scientists talk to one another within some research programme they may use a shared linguistic code in which the various conditionals ('likely', 'it seems', 'according to our best estimates', 'assuming the underlying theory', 'within experimental error', and the rest) are understood, and so may be left unspoken, increasing economy of language.

When scientists explain their work to a wider public such conditionals may also be left out to keep the account simple, but they really should be mentioned. A particular trope that annoyed me when I was younger was the high frequency of links in science documentaries that told me "this could only mean…" (Taber, 2007), when honest science is always framed more along the lines of "this would seem to mean…", "this could possibly mean…", "this suggested the possibility"…

Read about scientific certainty in the media

Journalistic creep

By journalistic creep I mean the tendency for some journalists who act as intermediaries between research scientists and the public to keep the story simple by omitting important provisos. Science teachers will appreciate this, as they often have to decide which details can be included in a presentation without losing or confusing the audience. A useful mantra may be:

Simplification may be necessary – but oversimplification can be misleading

A slightly different type of journalistic creep occurs within stories themselves. Sometimes the banner headline and the introduction to a piece report definitive, certain scientific results – but reading on (for those that do!) reveals nuances not acknowledged at the start. Teachers will again appreciate this tactic: offer the overview with the main point, before going back to fill in the more subtle aspects. But then, teachers have (somewhat) more control over whether the audience engages with the full account.

I am not intending to criticise journalists in general here, as scientists themselves have a tendency to do something similar when it comes to finding titles for papers that will attract attention by perhaps suggesting something more certain (or, sometimes, poetic or even controversial) than can be supported by the full report.


An example of a Benin Bronze (a brass artefact from what is now Nigeria) in the British [sic] Museum

(British Museum, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)


Where did the metal for the Benin bronzes come from?

The title of a recent article in the RSC's magazine for teachers, Education in Chemistry, proclaimed a "Surprise origin for Benin bronzes".1 The article started with the claim:

"Geochemists have confirmed that most of the Benin bronzes – sculptured heads, plaques and figurines made by the Edo people in West Africa between the 16th and 19th centuries – are made from brass that originated thousands of miles away in the German Rhineland."

So, this was something that scientists had apparently confirmed as being the case.

Reading on, one finds that

  • it has been "long suspected that metal used for the artworks was melted-down manillas that the Portuguese brought to West Africa"
  • scientists "analysed 67 manillas known to have been used in early Portuguese trade. The manillas were recovered from five shipwrecks in the Atlantic and three land sites in Europe and Africa"
  • they "found strong similarities between the manillas studied and the metal used in more than 700 Benin bronzes with previously published chemical compositions"
  • and "the chemical composition of the copper in the manillas matched copper ores mined in northern Europe"
  • and "suggests that modern-day Germany, specifically the German Rhineland, was the main source of the metal".

So, there is a chain of argument here which seems quite persuasive, but to move from this to it being "confirmed that most of the Benin bronzes…are made from brass that originated …in the German Rhineland" seems an example of journalistic creep.

The reference to "the chemical composition of the copper [sic] in the manillas" is unclear, as according to the original research paper the manillas analysed were:

"chemically different from each other. Although most manillas analysed here …are brasses or leaded brasses, sometimes with small amounts of tin, a few specimens are leaded copper with little or no zinc."

Skowronek, et al., 2023

The key data presented in the paper concerned the ratios of different lead isotopes (206Pb:204Pb; 207Pb:204Pb; 208Pb:204Pb – see the reproduced figure below) in

  • ore from different European locations (according to published sources)
  • sampled Benin bronze (as reported from earlier research), and
  • sampled recovered manillas

and the ratios of different elements (Ni:As; Sb:As; Bi:As) in previously sampled Benin bronzes and sampled manillas.

The tendency to treat a chain of argument as supporting a fairly certain conclusion, just because each link seems reasonably persuasive, is logically flawed (it is like concluding that, because one's chance of dying on any particular day is very low, one must be immortal). Yet it seems reflected in something I have noticed with some research students: often their overall confidence in the conclusions of a research paper they have scrutinised is higher than their confidence in some of the distinct component parts of that study.


An example of a student's evaluation of a research study


This is like being told by a mechanic that your cycle brakes have a 20% chance of failing in the next year; the tyres 30%; the chain 20%; and the frame 10%; and concluding from this that there is only about a 20% chance of having any kind of failure in that time!
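Making the arithmetic explicit: if those component risks are treated as (roughly) independent, the chance of at least one failure is the complement of everything surviving. A minimal sketch, using the hypothetical percentages from the bicycle example:

```python
# Hypothetical component failure risks from the bicycle example:
# brakes 20%, tyres 30%, chain 20%, frame 10%
risks = [0.20, 0.30, 0.20, 0.10]

# The flawed calculus: estimate overall risk as something like the
# average of the individual risks
naive_estimate = sum(risks) / len(risks)  # 0.20, i.e. "about 20%"

# Assuming the failures are independent, the probability that every
# component survives the year is the product of the survival probabilities
p_all_survive = 1.0
for p in risks:
    p_all_survive *= (1.0 - p)

# ...so the probability of at least one failure is the complement
p_any_failure = 1.0 - p_all_survive  # about 0.60

print(f"Naive estimate: {naive_estimate:.0%}")
print(f"Combined risk:  {p_any_failure:.0%}")
```

So a set of individually modest risks accumulates to roughly a 60% chance of some failure in the year – about three times the naive 20% estimate.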

A definite identification?

The peer reviewed research paper which reports the study discussed in the Education in Chemistry article informs readers that

"In the current study, documentary sources and geochemical analyses are used to demonstrate that the source of the early Portuguese "tacoais" manillas and, ultimately, the Benin Bronzes was the German Rhineland."

"…this study definitively identifies the Rhineland as the principal source of manillas at the opening of the Portuguese trade…"

Skowronek, et al., 2023

which sounds pretty definitive. Interestingly, though, the study did not rely on chemical analysis alone, but also on 'documentary' evidence. In effect, historical evidence provided another link in the argument, by suggesting the range of possible sources of the alloy that should be considered in any chemical comparisons. This assumes there were no mining and smelting operations providing metal for the trade with Africa that have not been well documented by historians. That seems a reasonable assumption, but it adds another proviso to the conclusions.

The researchers reported that

Pre-18th century manillas share strong isotopic similarities with Benin's famous artworks. Trace elements such as antimony, arsenic, nickel and bismuth are not as similar as the lead isotope data…. The greater data derivation suggests that manillas were added to older brass or bronze scrap pieces to produce the Benin works, an idea proposed earlier.

and acknowledges that

Millions of these artifacts were sent to West Africa where they likely provided the major, virtually the only, source of brass for West African casters between the 15th and the 18th centuries, including serving as the principal metal source of the Benin Bronzes. However, the difference in trace elemental patterns between manillas and Benin Bronzes does not allow postulating that they have been the only source.

The figure below is taken from the research report.


Part of Figure 2 from the open access paper (© 2023 Skowronek et al. – distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)

The chart shows results from sampled examples of Benin bronzes (blue circles), compared with the values of the same isotope ratios from different copper ore sites (squares) and manillas sampled from different archaeological sites (triangles).


The researchers feel that the pattern of clustering of results (in this, and other similar comparisons between lead isotope ratios) from the Benin bronzes, compared with those from the sampled manillas, and the ore sites, allows them to identify the source of metal re-purposed by the Edo craftspeople to make the bronzes.

It is certainly the case that the blue circles (which refer to the artworks) and the green squares (which refer to copper ore samples from Rhineland) do seem to generally cluster in a similar region of the graph – and that some of the samples taken from the manillas also seem to fit this pattern.

I can see why this might strongly suggest the Rhineland (certainly more so than Wales) as the source of the copper believed to be used in manillas which were traded in Africa and are thought to have been later melted down as part of the composition of alloy used to make the Benin bronzes.

Whether that makes for either

  • definitive identification of the Rhineland as the principal source of manillas (Skowronek paper), or
  • confirmation that most of the Benin bronzes are made from brass that originated thousands of miles away in the German Rhineland (EiC)

seems somewhat less certain. Just as scientific claims should be.


A conclusion for science education

It is both human nature, and often good journalistic or pedagogic practice, to begin with a clear, uncomplicated statement of what is to be communicated. But we also know that what is heard or read first may be better retained in memory than what follows. It also seems that people in general tend to apply the wrong kind of calculus when there are multiple sources of doubt – being more likely to estimate overall doubt as the mean or modal level of the several discrete sources, rather than as something that accumulates step by step.
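The same point applies to a chain of argument: if each link holds with some probability, confidence in the conclusion multiplies down the chain rather than averaging. A minimal sketch, with made-up confidence levels for a hypothetical five-link argument:

```python
# Illustrative (made-up) confidence in each link of a five-link argument
link_confidences = [0.95, 0.90, 0.90, 0.85, 0.90]

# The intuitive (but wrong) calculus: judge overall doubt as roughly
# the typical (mean) doubt attached to any one link
doubts = [1.0 - c for c in link_confidences]
mean_doubt = sum(doubts) / len(doubts)  # 0.10, i.e. about 10%

# The chain supports its conclusion only if every link holds, so
# (treating the links as independent) confidence multiplies down the chain
overall_confidence = 1.0
for c in link_confidences:
    overall_confidence *= c

accumulated_doubt = 1.0 - overall_confidence  # about 0.41
```

Five links that each seem about 90-95% secure leave the conclusion with roughly 41% accumulated doubt – far more than the 10% that averaging suggests.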

It seems there is a major issue here for science education: training young people to critically question claims, to look for the relevant provisos, and to understand how to integrate levels of doubt (or, similarly, risk) that are distributed over a sequence of phases in a process.


All research conclusions (in any empirical study in any discipline) rely on a network of assumptions and interpretations, any one of which could be a weak link in the chain of logic. This is my take on some of the most critical links and assumptions in the Benin bronzes study. One could easily further complicate this scheme (for example, I have ignored the assumptions about the validity of the techniques and calibration of the instrumentation used to find the isotopic composition of metal samples).



Note:

1 It is not clear to me what the surprise was – but perhaps this is meant to suggest the claim may be surprising to readers of the article. The study discussed was premised on the assumption that the Benin Bronzes were made from metal largely re-purposed from manillas traded from Europe, which had originally been cast in one of the known areas in Europe with metal-working traditions. The researchers included the Rhineland as one of the potential regional sites they were considering. So, it was surely a surprise only in the sense that rolling a die and having it land on 4, rather than, say, 2 or 5, is a surprise.

But then, would you be just as likely to read an article entitled "Benin bronzes found to have anticipated origin"?


The best science education journal

Where is the best place to publish science education research?


Keith S. Taber



Outlet | Description | Notes
International Journal of Science Education | Top-tier general international science education journal | Historically associated with the European Science Education Research Association
Science Education | Top-tier general international science education journal |
Journal of Research in Science Teaching | Top-tier general international science education journal | Associated with NARST
Research in Science Education | Top-tier general international science education journal | Associated with the Australasian Science Education Research Association
Studies in Science Education | Leading journal for publishing in-depth reviews of topics in science education |
Research in Science and Technological Education | Respected general international science education journal |
International Journal of Science and Maths Education | Respected general international science education journal | Founded by the National Science and Technology Council, Taiwan
Science Education International | Publishes papers that focus on the teaching and learning of science in school settings ranging from early childhood to university education | Published by the International Council of Associations for Science Education
Science & Education | Has foci of historical, philosophical, and sociological perspectives on science education | Associated with the International History, Philosophy, and Science Teaching Group
Journal of Science Teacher Education | Concerned with the preparation and development of science teachers | Associated with the Association for Science Teacher Education
International Journal of Science Education, Part B – Communication and Public Engagement | Concerned with research into science communication and public engagement / understanding of science |
Cultural Studies of Science Education | Concerned with science education as a cultural, cross-age, cross-class, and cross-disciplinary phenomenon |
Journal of Science Education and Technology | Concerns the intersection between science education and technology |
Disciplinary and Interdisciplinary Science Education Research | Concerned with science education within specific disciplines and between disciplines | Affiliated with the Faculty of Education, Beijing Normal University
Journal of Biological Education | For research specifically within biology education | Published for the Royal Society of Biology
Journal of Chemical Education | A long-standing journal of chemistry education, which includes a section for Chemistry Education Research papers | Published by the American Chemical Society
Chemistry Education Research and Practice | The leading research journal for chemistry education | Published by the Royal Society of Chemistry
Some of the places to publish research in science education

I was recently asked which was the best journal in which to seek publication of science education research. This was a fair question, given that I had been warning of the large number of low-quality journals now diluting the academic literature.

I had been invited to give a seminar talk to the Physics Education and Scholarship Section in the Department of Physics at Durham University. I had been asked to talk on the theme of 'Publishing research in science education'.

The talk considered the usual processes involved in submitting a paper to a research journal and the particular responsibilities involved for authors, editors and reviewers. In the short time available I said a little about ethical issues, including difficulties that can arise when scholars are not fully aware of, or decide to ignore, the proper understanding of academic authorship.1 I also discussed some of the specific issues that can arise when those with research training in the natural sciences undertake educational research without any further preparation (for example, see: Why do natural scientists tend to make poor social scientists?), such as underestimating the challenge of undertaking valid experiments in educational contexts.

I had not intended to offer advice on specific journals for the very good reasons that

  • there are a lot of journals
  • my experience of them is very uneven
  • I have biases!
  • knowledge of journals can quickly become out of date when publishers change policies, or editorial teams change

However, it was pointed out that there does not seem to be anywhere where such advice is readily available, so I made some comments based on my own experience. I later reflected that some such guidance could be useful, especially to those new to research in the area.

I do, in the 'Research methodology' section of the site, offer some advice to the new researcher on 'Publishing research', that includes some general advice on things to consider when thinking about where to send your work:

Read about 'Selecting a research journal: Selecting an outlet for your research articles'

Although I name check some journals there, I did not think I should offer strong guidance for the reasons I give above. However, taking on board the comment about the lack of guidance readily available, I thought I would make some suggestions here, with the full acknowledgement that this is a personal perspective, and that the comments facility below will allow other views and potential correctives to my biases! If I have missed an important journal, or seem to have made a misjudgement, then please tell me and (more importantly) other readers who may be looking for guidance.

Publishing in English?

My focus here is on English language journals. There are many important journals that publish in other languages such as Spanish. However, English is often seen as the international language for reporting academic research, and most of the journals with the greatest international reach work in the English language.

These journals publish work from all around the world, which therefore includes research into contexts where the language of instruction is NOT English, and where data is collected, and often analysed, in the local language. In these cases, reporting research in English requires translating material (curriculum materials, questions posed to participants, quotations from learners etc.) into English. That is perfectly acceptable, but translation is a skilled and nuanced activity, and needs to be acknowledged and reported, and some assurance of the quality of translation offered (Taber, 2018).

Read about guidelines for good practice regarding translation in reporting research

Science research journal or science education journal?

Sometimes science research journals will publish work on science education. However, not all science journals will consider this, and even for those that do, it tends to be an occasional event.

With the advent of open-access, internet-accessible publishing, some academic publishers are offering journals with very wide scope (presumably because, in the digital age, it is easier to find research without it needing to be in a specialist journal). However, authors should be wary of journals that have titles implying a specialist scientific focus but which seem to accept material from a wide range of fields, as this is one common indicator of predatory journals – that is, journals which do not use robust peer review (despite what they may claim) and have low quality standards.

Read about predatory journals

There are some scientific journals with an interdisciplinary flavour which are not education journals per se, but are open to suitable submissions on educational topics. The one I am most familiar with (disclosure of interest: I am on the Editorial Board) is Foundations of Chemistry (published by Springer).



Science education journal or education journal?

Then, there is the question of whether to publish work in specialist science education journals or one of the many more general education journals. (There are too many to discuss them here.) General education journals will sometimes publish work from within science education, as long as they feel it is of high enough general interest to their readership. This may in part be a matter of presentation – if the paper is written so it is only understandable to subject specialists, and only makes recommendations for specialists in science education, it is unlikely to seem suitable for a more general journal.

On the other hand, just because research has been undertaken in a science teaching and learning context, this may not make it of particular interest to science educators if the research aims, conceptualisation, conclusions and recommendations concern general educational issues, and anything that may be specific to science teaching and learning is ignored in the research – that is, if a science classroom was chosen just as a matter of convenience, but the work could have been just as well undertaken in a different curriculum context (Taber, 2013).

Research journal or professional journal?

Another general question is whether it is best to send one's work to an academic research journal (offering more kudos for the author(s) if published) or a journal widely read by practitioners (but usually considered less prestigious when a scholar's academic record is examined for appointment and promotion). These different types of output usually have different expectations about the tone and balance of articles:

Read about Research journals and practitioner journals

Some work is highly theoretical, or is focussed on moving forward a research field – and is unlikely to be seen as suitable for a teachers' journal. Other useful work may have developed and evaluated new educational resources, but without critically exploring any educational questions in any depth. Information about such a project would likely be of great interest to teachers, but is unlikely to meet the criteria to be accepted for publication in a research journal.

But what about a genuine piece of research that would be of interest to other researchers in the field, but also leads to strong recommendations for policy and practice? Here you do not have to choose one or the other option. Although you cannot publish the same article in different journals, a research report sent to an academic journal and an article for teachers would be sufficiently different, with different emphases and weightings. For example, a professional journal does not usually want a critical literature review, discussion of the details of data analysis, or long lists of references. But it may value vignettes that teachers can directly relate to, as well as exemplification of how recommendations might be followed through – information that would not fit in the research report.

Ideally, the research report would be completed and published first, and the article for the professional audience would refer to (and cite) this, so that anyone who does want to know more about the theoretical background and technical details can follow up.

Some examples of periodicals aimed at teachers (and welcoming work written by classroom teachers) include the School Science Review (published by the Association for Science Education), Physics Education (published by the Institute of Physics) and the Royal Society of Chemistry's magazine Education in Chemistry. Globally, there are many publications of this kind, often with a national focus, serving teachers working in a particular curriculum context by offering articles directly relevant to the specifics of the local education contexts.

The top science education research journals

Having established that our work does fit within science education as a field, and would be considered academic research, we might consider sending it to one of these journals:

  • International Journal of Science Education (IJSE)
  • Science Education (SE)
  • Journal of Research in Science Teaching (JRST)
  • Research in Science Education (RiSE)


To my mind these are the top general research journals in the field.

IJSE is the journal I have most worked with, having published quite a few papers there and reviewed a great many; I have been on the Editorial Board for about 20 years, so I may be biased here.2 IJSE started as the European Journal of Science Education and has long had an association with the European Science Education Research Association (ESERA – not to be confused with ASERA).

Strictly this journal is now known as IJSE Part A, as there is also a Part B which has a particular focus on 'Communication and Public Engagement' (see below). IJSE is published by Taylor and Francis / Routledge.

SE is published by Wiley.

JRST is also published by Wiley, and is associated with NARST.

RISE is published by Springer, and is associated with the Australasian Science Education Research Association (ASERA – not to be confused with ESERA)

N.A.R.S.T. originally stood for the National Association for Research in Science Teaching, where the nation referred to was the USA. However, having re-branded itself as "a global organization for improving science teaching and learning through research", it is now simply known as NARST. In a similar way ESERA describes itself as "an European organisation focusing on research in science education with worldwide membership" and ASERA claims it "draws together researchers in science education from Australia, New Zealand and more broadly".


The top science education reviews journal

Another 'global' journal I hold in high esteem is Studies in Science Education (published by Taylor & Francis / Routledge).3

This journal, originally established at the University of Leeds and associated with the world-famous Centre for Studies in Science Education4, is the main reviews journal in science education. It publishes substantive, critical reviews of areas of science education, and some of the most influential articles in the field have been published here.

Studies in Science Education also has a tradition of publishing detailed scholarly book reviews.


In my view, getting your work published in any of these five journals is something to be proud of. I think people in many parts of the world tend to know IJSE best, but I believe that in the USA it is often considered less prestigious than JRST and SE. At one time RISE seemed to have a somewhat parochial focus, and (my impression is) attracted less work from outside Australasia and its region – but that has changed now. 'Studies' seems to be better known in some contexts than others, but it is the only high-status general science education journal that publishes full-length reviews (both systematic, and thematic perspectives), with many of its contributions exceeding the normal word-length limits of other top science education journals. This is the place to send an article based on that literature review chapter that thesis examiners praised for its originality and insight!



There are other well-established general journals of merit, for example Research in Science and Technological Education (published by Taylor & Francis / Routledge, and originally based at the University of Hull) and the International Journal of Science and Maths Education (published by Springer, and founded by the National Science and Technology Council, Taiwan). The International Council of Associations for Science Education publishes Science Education International.

There are also journals with particular foci within the field of science education.

More specialist titles

There are also a number of well-regarded international research journals in science education with particular specialisms or flavours.


Science & Education (published by Springer) is associated with the International History, Philosophy, and Science Teaching Group5. As the group's name might suggest, the journal has a focus on the nature of science in science education, and "publishes research using historical, philosophical, and sociological approaches in order to improve teaching, learning, and curricula in science and mathematics".


The Journal of Science Teacher Education (published by Taylor & Francis / Routledge), as the name suggests, is concerned with the preparation and development of science teachers. The journal is associated with the USA-based Association for Science Teacher Education.


As suggested above, IJSE has a companion journal (also published by Taylor & Francis / Routledge), the International Journal of Science Education, Part B – Communication and Public Engagement.


Cultural Studies of Science Education (published by Springer) has a particular focus on science education "as a cultural, cross-age, cross-class, and cross-disciplinary phenomenon".


The Journal of Science Education and Technology (published by Springer) has a focus on the intersection between science education and technology.


Disciplinary and Interdisciplinary Science Education Research has a particular focus on science taught within and across disciplines.6 Whereas most of the journals described here are now hybrid (which means articles will usually be behind a subscription/pay-wall, unless the author pays a publication fee), DISER is an open-access journal, with publication costs paid on behalf of authors by the sponsoring organisation: the Faculty of Education, Beijing Normal University.

This relatively new journal reflects the increasing awareness of the importance of cross-disciplinary, interdisciplinary and transdisciplinary research in science itself. This is also reflected in notions of whether (or to what extent) science education should be considered part of a broader STEM education, and there are now journals styled as STEM education journals.


Science as part of STEM?

Read about STEM in the curriculum


Research within teaching and learning disciplines

Whilst both the Institute of Physics and the American Institute of Physics publish physics education journals (Physics Education and The Physics Teacher, respectively), neither publishes full-length research reports of the kind included in research journals. The American Physical Society does publish Physical Review Physics Education Research as part of its set of Physical Review journals. This is an online, open-access journal, so authors have to pay a publication fee.


The Journal of Biological Education (published by Taylor and Francis/Routledge) is the education journal of the Royal Society of Biology.


The Journal of Chemical Education is a long-established journal published by the American Chemical Society. It is not purely a research journal, but it does have a section for educational research and has published many important articles in the field. 7


Chemistry Education Research and Practice (published by the Royal Society of Chemistry, RSC) is purely a research journal, and can be considered the top international journal for research specifically in chemistry education. (Perhaps this is why there is a predatory journal deliberately called the Journal of Chemistry Education Research and Practice.)

As CERP is sponsored by the RSC (which as a charity looks to use income to support educational and other valuable work), all articles in CERP are accessible for free on-line, but there are no publication charges for authors.


Not an exhaustive list!

These are the journals I am most familiar with, which focus on science education (or education in a particular science discipline), publish serious peer-reviewed research papers, and can be considered international journals.

I know there are other discipline-based journals (e.g., biochemistry education, geology education) and indeed I expect there are many worthwhile places to publish that have slipped my mind or about which I am ignorant. Many regional or national journals have high standards and publish much good work. However, when it comes to research papers (rather than articles aimed primarily at teachers) academics usually get more credit when they publish in higher-status international journals. It is these outlets that can best attract highly qualified editors and reviewers, and so peer review feedback tends to be most helpful,8 and the general standard of published work tends to be of a decent quality – both in terms of technical aspects, and its significance and originality.

There is no reason why work published in English is more important than work published in other languages, but the wide convention of publishing research for an international audience in English means that work published in English-language journals probably gets wider attention globally. I have published a small number of pieces in other languages, but am limited by my own competence in only one language. This reflects my personal failings more than the global state of science education publishing!

A personal take – other viewpoints are welcome

So, this is my personal (belated) response to the question about where one should seek to publish research in science education. I have tried to give a fair account, but it is no doubt biased by my own experiences (and recollections), and so inadvertently subject to distortions and omissions.

I welcome any comments (below) to expand upon, or seek to correct, my suggested list, which might indeed make this a more useful listing for readers who are new to publishing their work. If you have had good (or bad) experiences with science education journals included in, or omitted from, my list, please share…


Sources cited:

Notes

1 Academic authorship is understood differently from how the term 'author' is usually used: in most contexts, the author is the person who prepared (wrote, typed, dictated) a text. In academic research, the authors of a research paper are those who made a substantial direct intellectual contribution to the work being reported. That is, an author need not contribute to the writing-up phase (though all authors should approve the text) as long as they have made a proper contribution to the substance of the work. Most journals have clear expectations that all deserving authors, and only those people, should be named as authors.

Read about academic authorship


2 For many years the journal was edited by the late Prof. John Gilbert, whom I first met sometime in the 1984–5 academic year when I applied to join the University of Surrey/Roehampton Institute part-time teachers' programme in the Practice of Science Education, and he – as one of the course directors – interviewed me. I was later privileged to work with John on some projects – so this might be considered a 'declaration of interest'.


3 Again, I must declare an interest. For some years I acted as the Book Reviews editor for the journal.


4 The centre was the base for the highly influential Children's Learning in Science Project which undertook much research and publication in the field under the Direction of the late Prof. Ros Driver.


5 Another declaration of interest: at the time of writing I am on the IHPST Advisory Board for the journal.


6 Declaration of interest: I am a member of DISER's Editorial Board.


7 I have recently shown some surprise at one research article published in JChemEd where major problems seem to have been missed in peer review. This is perhaps simply an aberration, or may reflect the challenge of including peer-reviewed academic research in a hybrid publication that also publishes a range of other kinds of articles.


8 Peer-review evaluates the quality of submissions, in part to inform publication decisions, but also to provide feedback to authors on areas where they can improve a manuscript prior to publication.

Read about peer review




Misconceptions of change

It may be difficult to know what counts as an alternative conception in some topics – and sometimes research does not make it any clearer


Keith S. Taber


If a reader actually thought the researchers themselves held these alternative conceptions then one could have little confidence in their ability to distinguish between the scientific and alternative conceptions of others

I recently published an article here where I talked in some detail about some aspects of a study (Tarhan, Ayyıldız, Ogunc & Sesen, 2013) published in the journal Research in Science and Technological Education. Despite having a somewhat dodgy title 1, this is a well-respected journal published by a serious publisher (Routledge/Taylor & Francis). I read the paper because I was interested in the pedagogy being discussed (jigsaw learning), but what prompted me to then write about it was the experimental design: setting up a comparison between a well-tested active learning approach and lecture-based teaching. A teacher experienced in active learning techniques taught a control group of twelve-year-old pupils through a 'traditional' teaching approach (giving the children notes, setting them questions…) as a comparison condition for a teaching approach based on engaging group-work.

The topic being studied by the sixth grade, elementary school, students was physical and chemical changes.

I did not discuss the outcomes of the study in that post as my focus there was on the study as possibly being an example of rhetorical research (i.e., a demonstration set up to produce a particular outcome, rather than an open-ended experiment to genuinely test a hypothesis), and I was concerned that the control conditions involved deliberately providing sub-optimal, indeed sub-standard, teaching to the learners assigned to the comparison condition.

Read 'Didactic control conditions. Another ethically questionable science education experiment?'

Identifying alternative conceptions

The researchers actually tested the outcome of their experiment in two ways (as well as asking students in the experimental condition about their perceptions of the lessons), a post-test taken by all students, and "ten-minute semi-structured individual interviews" with a sample of students from each condition.

Analysis of the post-test allowed the researchers to identify the presence of students' alternative conceptions ('misconceptions'2) related to chemical and physical change, and the identified conceptions are reported in the study. Interviewees were purposively selected,

"Ten-minute semi-structured individual interviews were carried out with seven students from the experimental group and 10 students from the control group to identify students' understanding of physical and chemical changes by acquiring more information about students' unclear responses to [the post-test]. Students were selected from those who gave incorrect, partially correct and no answers to the items in the test. During the interviews, researchers asked the students to explain the reasons for their answers to the items."

Tarhan et al., 2013, p.188

I was interested to read about the alternative conceptions they had found for several reasons:

  1. I have done research into student thinking, and have written a lot about alternative conceptions, so the general topic interests me;
  2. More specifically, it is interesting to compare what researchers find in different educational contexts, as this gives some insight into the origins and developments of such conceptions;
  3. Also, I think the 'chemical and physical changes' distinction is actually a very problematic topic to teach. (Read about a free classroom resource to explore learners' ideas about physical and chemical changes.)

In this post I am going to question whether the authors' claims in their research report about some of the alternative conceptions they reported finding are convincing. First, however, I should explain the second point here.

Cultural variations in alternative conceptions

Some alternative conceptions seem fairly universal, being identified in populations all around the world. These may primarily be responses to common experiences of the natural world. An obvious example relates to Newton's first law (the law of inertia): we learn from very early experience, before we even have language to talk about our experiences, that objects that we push, throw, kick, toss, pull… soon come to a stop. They do not move off in a straight line and continue indefinitely at a constant speed.

Of course, that experience is not actually contrary to Newton's first law (as various forces are acting on the objects concerned), but it presents a consistent pattern (objects initially move off, but soon slow and stop) that becomes part of our intuitions about the world, and so makes learning the scientific law seem counter-intuitive, and so more difficult to accept and apply when taught in school.

Read about the challenge of learning Newton's first law

By contrast, no one has ever tested Newton's first law directly by seeing what happens under the ideal conditions under which it would apply (see 'Poincaré, inertia, and a common misconception').

Other alternative conceptions may be less universal: some may be, partially at least, due to an aspect of local cultural context (e.g. folk knowledge, local traditions), the language of instruction, the curriculum or teaching scheme, or even a particular teacher's personal way of presenting material.

So, to the extent that there are some experiences that are universal for all humans, due to commonalities in the environment (e.g., to date at least, all members of the species have been born into an environment with a virtually constant gravitational field and a nitrogen-rich atmosphere of about 1 atmosphere pressure {i.e., c. 10⁵ Pa} and about 21% oxygen content), there is a tendency for people everywhere (on earth) to develop the same alternative conceptions.

And, conversely, to the extent that people in different institutional, social, and cultural contexts have contrasting experiences, we would expect some variations in the levels of incidence of some alternative conceptions across populations.

"Some common ideas elicited from children are spread, at least in part, through informal learning in everyday "life-world" contexts. Through such processes youngsters are inducted into the beliefs of their culture. Ideas that are common in a culture will not usually contradict everyday experience, but clearly beliefs may develop and be disseminated without matching formal scientific knowledge. …

Where life-world beliefs are relevant to school science – perhaps contradicting scientific principles, perhaps apparently offering an explanation of some science taught in school; perhaps appearing to provide familiar examples of taught principles – then it is quite possible, indeed likely, that such prior beliefs will interfere with the learning of school science. …

Different common beliefs will be found among different cultural groups, and therefore it is likely that the same scientific concepts will be interpreted differently among different cultural groups as they will be interpreted through different existing conceptual frameworks."

Taber, 2012a, pp.5-6

As a trivial example, in England the National Curriculum for primary-age children erroneously describes some materials that are mixtures as being substances. These errors have persisted for some years, as the government department does not think they are important enough to make the effort to correct. Assuming many primary school teachers (who are usually not science specialists, though some are of course) trust the flawed information in the official curriculum, we might expect more secondary school students in England, than in other comparable populations, to later demonstrate alternative conceptions in relation to the critical concept of a chemical substance.

"This suggests that studies from different contexts (e.g., different countries, different cultures, different languages of instruction, and different curriculum organisations) should be encouraged for what they can tell us about the relative importance of educational variables in encouraging, avoiding, overcoming, or redirecting various types of ideas students are known to develop."

Taber, 2012a, p.9
The centrality of language

Language of instruction may sometimes be important. Words that supposedly are translated from one language to another may actually have different nuances and associations. (In English, it is clearly an alternative conception to think the chemical elements still exist in a compound, but the meaning of the French élément chimique seems to include the 'essence' of an element that does continue into the compound.)

Research in different educational contexts can in principle help unravel some of this: in principle, as it does need the various researchers to detail aspects of the teaching contexts and cultural contexts from which they report, as well as the students' ideas (Taber, 2012a).

Chemical and physical change

Teaching about chemical and physical change is a traditional topic in school science and chemistry courses. It is one of those dichotomies that is understandably introduced in simple terms, and so offers a simplification that may need to be 'unlearnt' later:

[a change is] chemical change or physical change

[an element is] metal or non-metal

[a chemical bond is] ionic bonding or covalent bonding

There are some common distinctions often made to support this discrimination into two types of change:


Table 1.2 from Teaching Secondary Chemistry (2nd ed) (Taber, 2012b)

However, a little thought suggests that such criteria are not especially useful in supporting the school student making observations, and indeed some of these criteria simply do not stand up to close examination. 2

"the distinction between chemical and physical changes is a rather messy one, with no clear criteria to help students understand the difference"

Taber, 2012b, p.33


So, I was especially interested to know what Tarhan and colleagues had found.

Methodological 'small print'

In reading any study, a consideration of the findings has to be tempered by an understanding of how the data were collected and analysed. Writing-up research reports for journals can be especially challenging as referees and editors may well criticise missing details they feel should be reported, yet often journals impose word-limits on articles.

Currently (2023) this particular journal tells potential authors that "A typical paper for this journal should be between 7000 and 8000 words" which is a little more generous than some other journals. However, Tarhan and colleagues do not fully report all aspects of their study. This may in part be because they need quite a lot of space to describe the experimental teaching scheme (six different jigsaw learning activities).

Whatever the reason:

  • the authors do not provide a copy of the post-test which elicited the responses that were the basis of the identified alternative conceptions; and
  • nor do they explain how the analysis to identify conceptions was undertaken – to show how student responses were classified;
  • similarly, there are no quotations from the interview dialogue to illustrate how the researchers interpreted student comments.

Data analysis is the process of researchers interpreting data so they become evidence for their findings, and generally research journals expect the process to be detailed – but here the reader is simply told,

"Students' understanding of physical and chemical changes was identified according to the post-test and the individual interviews after the process."

Tarhan et al., 2013, p.189

'Misconceptions'

In their paper, Tarhan and colleagues use the term 'misconception' which is often considered a synonym for 'alternative conception'. Commonly, conceptions are referred to as alternative if they are judged to be inconsistent with canonical concepts.

Read about alternative conceptions

Although the term 'misconception' is used 32 times in the paper (not counting instances in the reference list), the term is not explained in the text, presumably because it is assumed that all those working in science education know (and agree) what it means. This is not at all unusual. I once wrote about another study:

"[The] qualities of misconceptions are largely assumed by the author and are implicit in what is written…It could be argued that research reports of this type suggest the reported studies may themselves be under-theorised, as rather well-defined technical procedures are used to investigate foci that are themselves only vaguely characterised, and so the technical procedures are themselves largely operationalised without explicit rationale."

Taber, 2013, p.22

Unfortunately, in Tarhan and colleagues' study there are less well-defined technical procedures in relation to how data were analysed to identify 'misconceptions', leaving the reader with limited grounds for confidence that what are reported are worthy of being described as student conceptions – and are not just errors or guesses made on the test. Our thinking is private, and never available directly to others, and so can only be interpreted from the presentations we make to represent our conceptions in a public (shared) space. Sometimes we mis-speak, or we mis-write (so that our words do not accurately represent our thoughts). Sometimes our intended meanings may be misinterpreted (Taber, 2013).

Perhaps the researchers felt that this process of identifying conceptions from students' texts and utterances was unproblematic – perhaps the assignments seemed so obvious to the researchers that they did not need to exemplify and justify their analytical method. This is unfortunate. There might also be another factor here.

Lost and found in translation?

The study was carried out in Turkey. The paper is in English, and this includes the reported alternative conceptions. The study was carried out "in a public elementary school" (not an international school, for example). Although English is often taught as a foreign language in Turkish schools, the language of instruction, not unreasonably, is Turkish.

So, it seems either

  • the data was collected in (what, for the children, would have been) 'L2' – a second language, or
  • a study carried out (questions asked; answers given) in Turkish has been reported in English, translating where necessary from one language to another.

This issue is not discussed at all in the paper – there is no mention of either the Turkish or English language, nor of anything being translated.

Yet the authors are not oblivious to the significance of language issues in learning. They report how one variant of Jigsaw teaching had "been designed specifically to increase interaction among students of differing language proficiencies in bilingual classrooms" (p.186) and how the research literature reports that sometimes children's ideas reflect "the incorrect use of terms in everyday language" (p.198). However, they did not feel it was necessary to report either that

  1. data had been collected from elementary school children in a second language, or
  2. data had been translated for the purposes of reporting in an English language journal

It seems reasonable to assume they would have appreciated the importance of mentioning option 1, and so it seems much more likely (although readers of the study should not have to guess) that the reporting in English involved translation. Yet translation is never a simple algorithmic process, but rather always a matter of interpretation (another stage in analysis), so it would be better if authors always acknowledged this – and offered some basis for readers to consider whether the translations made were of high quality (Taber, 2018).

Read about guidelines for detailing translation in research reports

It is a general principle that the research community should adopt, surely, that whenever material reported in a research paper has been translated from another language (a) this is reported and (b) evidence of the accuracy and reliability of the translation is offered (Taber, 2018).

I make this point here, as some of the alternative conceptions reported by the authors are a little mystifying, and this may(?) be because their wording has been 'degraded' (and obscured) by imperfect translation.

An alternative conception of combustion?

For example, here are two of the learning objectives from one of the learning activities:

"The students were expected to be able to:

…comment on whether the wood has similar intensive properties before and after combustion

…indicate the combustion reactions in examples of several physical and chemical changes"

Tarhan et al., 2013, p.193

The wording of the first of these examples seems to imply that when wood is burnt, the product is still…wood. That is nonsense, but possibly this is simply a mistranslation of something that made perfect sense in Turkish. (The problem is that a reader can only speculate on whether this is the case, and research reports should be precise and explicit.)

The second learning objective quoted here implies that some combustion reactions are physical changes (or, at least, combustion reactions are components of some physical changes).

Combustion reactions are a class of chemical reactions. 'Chemical reaction' is synonymous with 'chemical change'. So, there are (if you will excuse the double negative) no examples of combustion reactions that are not chemical reactions and which would be said to occur in physical changes. So, this is mystifying, as it is not at all clear what the children were actually being taught unless one assumes the researchers themselves have very serious misconceptions about the chemistry they are teaching.

If a reader actually thought that the researchers themselves held these alternative conceptions

  • the product of combustion of wood is still wood
  • some combustion reactions are (or occur as part of) physical changes

then one could have little confidence in their ability to distinguish between the scientific and alternative conceptions of others. (A reader might also ask why the journal referees and editor did not ask for corrections here before publication – I certainly wondered about this.)

There are other statements the authors make in describing the teaching which are not entirely clear (e.g., "give the order of the changes in matter during combustion reactions", p.194), and this suggests a degree of scepticism is needed in not simply accepting the reported alternative conceptions at face value. This does not negate their interest, but does undermine the paper's authority somewhat.

One of the misconceptions reported in the study is that some students thought that "there is a flame in all combustion reaction". This led me to reflect on whether I could think of any combustion reactions that did not involve a flame – and I must confess none readily came to mind. Perhaps I also have this alternative conception – but it seems a harsh judgement on elementary school learners unless they had actually been taught about combustion reactions without flames (if, indeed, there are such things).


The study reported that some 12 year olds held the 'misconception' that "there is a flame in all combustion reaction[s]".

[Image by Susanne Jutzeler, Schweiz, from Pixabay]


Failing to control variables?

Another objective was for students to "comprehend that temperature has an effect on chemical reaction rate by considering the decay of fruit at room temperature, and the change in color [colour] from green to yellow of fallen leaves in autumn" (p.193). As presented, this is somewhat obscure.

Presumably it is not meant to be a comparison between:

  • the rate of decay of fruit at room temperature, and
  • the rate of change in colour of fallen leaves in autumn

as a way of explaining that temperature has an effect on chemical reaction rate?

Clearly, even if the change of colour of leaves takes place at a different temperature to room temperature, one cannot compare between totally different processes at different temperatures and draw any conclusions about how "temperature has an effect on chemical reaction rate". (Presumably, 'control of variables' is taught in the Turkish science curriculum.)

So, one assumes these are two different examples…

But that does not help matters too much. The "decay of fruit at room temperature" (or, indeed, any other process studied at a single temperature) cannot offer any indication of how "temperature has an effect on chemical reaction rate". The change of colours in leaves of deciduous trees (which usually begins before they fall) is triggered by environmental conditions such as changes in day length and temperature. This is part of a very complex system involving a range of pigments, whilst the water content of the leaf decreases (once the supply of water through the tree's vascular system is cut off), and it is not clear how much detail these twelve-year-olds were taught…but it is certainly not a simple matter of a reaction changing rate according to temperature.
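The underlying chemistry point is that demonstrating a temperature effect on reaction rate requires comparing the same reaction at (at least) two temperatures. A minimal sketch of this, using the Arrhenius equation with purely hypothetical values for the activation energy and pre-exponential factor (none of these numbers come from the paper):

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def arrhenius_rate_constant(A: float, Ea: float, T: float) -> float:
    """Arrhenius equation: k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical, illustrative values only
A = 1e10       # pre-exponential factor, s^-1
Ea = 75_000    # activation energy, J/mol

# The SAME reaction, compared at two temperatures (control of variables)
k_cool = arrhenius_rate_constant(A, Ea, 278)  # ~5 degrees C
k_room = arrhenius_rate_constant(A, Ea, 298)  # ~25 degrees C

print(k_room > k_cool)  # True: higher temperature, faster rate
```

Observing one process at a single temperature gives only one value of k, so there is no comparison from which any temperature dependence could be inferred.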

Evaluating conceptions

Tarhan and colleagues report their identified alternative conceptions ('misconceptions') under a series of headings. These are reported in their table 4 (p.195). A reader certainly finds some of the entries in this table easy to interpret: they clearly seem to reflect ideas contrary to the canonical science one would expect to be reflected in the curriculum and teaching. Other statements are less obviously evidence of alternative conceptions as they do not immediately seem necessarily at odds with scientific accounts (e.g., associating combustion reactions with flames).

Other reported misconceptions are harder to evaluate. School science is in effect a set of models and representations of scientific accounts that often simplify the actual current state of scientific knowledge. Unless we know exactly what has been taught it is not entirely clear if students' ideas are credit-worthy or erroneous in the specific context of their curriculum.

Moreover, as the paper does not report the data and its analysis, but simply the outcome of the analysis, readers do not know on what basis judgements have been made to assign learners as having one of the listed misconceptions.


Changes of state are chemical changes

A few students from the lecture-based teaching condition were identified as 'having' the misconception that 'changes of state are chemical changes'. This seems a pretty serious error at the end of a teaching sequence on chemical and physical changes.

However, this raises a common issue in terms of reports of alternative conceptions – what exactly does it mean to say that a student has a conception that 'changes of state are chemical changes'? A conception is a feature of someone's thinking – but that encompasses a vast range of potential possibilities from a fleeting notion that is soon forgotten ('I wonder if s orbitals are so-called because they are spherical?') to an on-going commitment to an extensive framework of ideas that a life is lived by (Buddhism, Roman Catholicism, Liberalism, Hedonism, Marxism…).


A person's conceptions can vary along a range of characteristics (Figure from Taber, 2014)


The statement that 'Changes of state are chemical changes' is unlikely to be the basis of anyone's personal creed. It could simply be a confusion of terms. Perhaps a student had a decent understanding of the essential distinction between chemical and physical changes but got the terms mixed up (or was thinking that 'changes of state' meant 'chemical reaction'). That is certainly a serious error that needs correcting, but in terms of understanding of the science, it would seem less worrying than a deeper conceptual problem.

In their commentary, the authors note of these children:

"They thought that if ice was heated up water formed, and if water was heated steam formed, so new matter was formed and chemical changes occurred".

Tarhan et al., 2013, p.197

It is not clear if this was an explanation the learners gave for thinking "changes of state are chemical changes", or whether "changes of state are chemical changes" was the researchers' gloss on children commenting that "if ice was heated up water formed, and if water was heated steam formed, so new matter was formed and chemical changes occurred".

That a range of students are said to have precisely the same train of thought leads a reader (or, at least, certainly one with experience of undertaking research of this kind) to ask whether these are open-ended responses produced by the children, or the selection by the children of one of a number of options offered by the researchers (as pointed out above, the data analysis is not discussed in detail in the paper). That makes a difference in how much weight we might give to the prevalence of the response: putting a tick by the most likely-looking option requires less commitment to, and appreciation of, an idea than setting it out yourself in your own personally composed text. This illustrates why it is important that research journals should require researchers to give full accounts of their instrumentation and analysis.

Because density of matter changes during changes of state, its identity also changes, and so it is a chemical change

Thirteen of the children (all in the lecture-based teaching condition) were considered to have the conception "Because density of matter changes during changes of state, its identity also changes, and so it is a chemical change". This is clearly a much more specific conception (than 'changes of state are chemical changes') which can be analysed into three components:

  • a change of state is a chemical change, AND
  • we know this because such changes involve a change in identity, AND
  • we know that because a change of state leads to a change in density

Tarhan and colleagues claim this conception was "first determined in this study" (p.195).

The specificity is intriguing here – if so many students explicitly and individually built this argument for themselves then this is an especially interesting finding. Unfortunately, the paper does not give enough detail of the methodology for a reader to know if this was the case. Again, if students were just agreeing with an argument offered as an option on the assessment instrument then it is of note, but less significant (as in such cases students might agree with the statement simply because one component resonated – or they may even be guessing rather than leaving an item unanswered). Again this does not completely negate the finding, but it leaves its status very unclear.

Taken together, these first two claimed results seem inconsistent – as at least 13 students would seem to think "Changes of state are chemical changes". That is, all those who thought that "Because density of matter changes during changes of state, its identity also changes, and so it is a chemical change" would seem to have thought that "Changes of state are chemical changes" (see the Venn diagram below). Yet we are also told that only five students held the less specific and seemingly subsuming conception 'changes of state are chemical changes'.


If 13 students think that changes of state are chemical changes because a change of density implies a change of identity; what does it mean that only 5 students think that changes of state are chemical changes?

This looks like an error, but perhaps is just a lack of sufficient detail to make the findings clear. Alternatively, perhaps this indicates some failure in translating material accurately into English.
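The logical point here is simple set containment: every student credited with the more specific conception necessarily holds the general one, so the count for the specific conception cannot exceed the count for the general one. A minimal sketch (the counts are those reported in the paper; the consistency check itself is my own illustration of the subset argument):

```python
# Counts as reported by Tarhan et al. (2013)
specific_count = 13  # 'density change implies identity change, so a chemical change'
general_count = 5    # 'changes of state are chemical changes'

def counts_consistent(specific: int, general: int) -> bool:
    """The specific conception entails the general one, so the students
    holding it form a subset; its count cannot be larger."""
    return specific <= general

print(counts_consistent(specific_count, general_count))  # False: the reported counts conflict
```

So either the two conception categories were not treated as logically nested in the analysis, or one of the reported figures is in error.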

The changes in the pure matters are physical changes

Six children in the lecture-based teaching condition and one in the jigsaw learning condition were reported as holding the conception that "The changes in the pure matters are physical changes". The authors do not explain what they mean here by "pure matters" (sic, presumably 'matter'?). The only place this term is used in the paper is in relation to this conception (p.195, p.197).

The only other reference to 'pure' was in one of the learning objectives for the teaching:

  • explain the changes of state of water depending on temperature and pressure; give various examples for other pure substances (p.191)

If "pure matter" means a pure sample of a substance, then changes in pure substances are all physical – by definition, a chemical change leads to a different substance (or substances). That would explain why this conception was "first determined [as a misconception] in this study" (p.195): it is not actually a misconception. So, it does not seem clear precisely why the researchers feel these children have got something wrong here. Again, perhaps this is a failure of translation rather than a failure in the original study?

Changes in shape?

Tarhan and colleagues report two conceptions under the subheading of 'changes in shape'. They seem to be thinking here more of grain size than shape as such. (Another translation issue?) One reported misconception is that if cube sugar is granulated, sugar particles become small [smaller?].


Is it really a misconception to think that "If cube sugar is granulated, sugar particles become small"?

(Image by Bruno /Germany from Pixabay)


Tarhan and colleagues reported that two children in the experimental condition, and 13 in the control condition thought that "If cube sugar is granulated, sugar particles become small". Sugar cubes are made of granules of sugar weakly joined together – they can easily be crumbled into the separate grains. The grains are clearly smaller than the cubes. So, what is important here is what is meant/understood* by the children by the term 'particles'.

(* If this phrasing was produced by the children, then we want to know what they meant by it. If, however, the children were agreeing with a phrase presented to them by researchers, then we wish to know how they understood it.)

If this means quanticle level particles, molecules, then it is clearly an alternative conception – each grain contains vast numbers of molecules, and the molecules are unchanged by the breaking up of the cubes. If, however, particles here refers to the cubes and grains**, then it is a fair reflection of what happens: one quite large particle of sugar is broken up into many much smaller particles. The ambiguity of the (English) word 'particles' in such contexts is well recognised.

(** That is, if the children used the word 'particles' – did they mean the cubes/grains as particles of sugar? If however the phrasing was produced by the researchers and presented to the children, and if the researchers meant 'particles' to mean 'molecules'; did the children appreciate that intention, or did they understand 'particles' to refer to the cubes and grains?)

However, as no detail is given on the actual data collected (e.g., are these the children's own words; was this based on an open response?), or on how the data were analysed (and, I suspect, this all occurred in Turkish), the reader has no way to check this interpretation of the data.

What kind of change is dissolving?

Tarhan and colleagues report a number of 'misconceptions' under the heading of 'molecular solubility'. Two of these are:

  • "The solvation processes are always chemical changes"
  • "The solvation processes are always physical changes"

This reflects a problem of teaching about physical and chemical changes. Dissolving is normally seen as a physical change: there is no new chemical substance formed and dissolving is usually fairly readily reversed. However, as bonds are broken and formed it also has some resemblance to chemical change.2

In dissolving common salt in water, strong ionic bonds are disrupted and the ions are strongly solvated. Yet the usual convention is still to consider this a physical change – the original substance, the salt, can be readily recovered by evaporation of the solvent. A solution is considered a kind of mixture. In any case, as Tarhan and colleagues refer to 'molecular' solubility (strictly solubility refers to substances, not molecules, but still) they were, presumably, only dealing with examples of the dissolving of substances with discrete molecules.

Taking these two conceptions together, it seems that Tarhan and colleagues think that dissolving is sometimes a physical change, and sometimes a chemical change. Presumably they have some criterion or criteria to distinguish those examples of dissolving they consider physical changes from those they consider chemical changes. A reader can only speculate how a learner observing some solute dissolve in a solvent is expected to distinguish these cases. The researchers do not explain what was taught to the students, so it is difficult to appreciate quite what the students supposedly got wrong here.

Sugar is invisible in the water, because new matter is formed

The idea that learners think that new matter is formed on dissolving would indeed be an alternative conception. The canonical view is that new matter is only formed in very high energy processes – such as in the big bang. In both chemical and physical processes studied in the school laboratory there may be transformations of matter, but no new matter.

This seems a rather extreme 'misconception' for the learners to hold. However, a reader might wonder if the students actually suggested that a new substance was formed, and this has been mistranslated. (The Turkish word 'madde' seems to mean either matter or substance.) If these students thought that a new type of substance was formed then this would be an alternative conception (and it would be interesting to know why this led to sugar being invisible – unless they were simply arguing that different appearance implied different substance).

While sugar is dissolving in the water, water damages the structure of sugar and sugar splits off

Whether this is a genuine alternative conception or just imprecise use of language is not clear. It seems reasonable to suggest that, while sugar is dissolving in the water, the process breaks up the structure of solid sugar and sugar molecules split off – so some more detail would be useful here. Again, if there has been translation from Turkish, some of the nuance of the original phrasing may have been lost in the English rendering.

The phrasing reflects an alternative conception that in chemical reactions one reactant is an active agent (here the water doing the damaging) and the other the patient, that is passive and acted upon (here the sugar being damaged) – rather than seeing the reaction as an interaction between two species (Taber & García Franco, 2010) – but there is no suggestion in their paper that this is the issue Tarhan and colleagues are highlighting here.

When sugar dissolves in water, it reacts with water and disappears from sight

If the children thought that dissolving was a chemical reaction then this is an alternative conception – the sugar does indeed disappear from sight, but there has been no reaction.

Again, we might ask if this was actually a misunderstanding (misconception), or imprecise use of language. The sugar does 'react' with the water in the everyday sense of 'reaction'. But this is not a chemical reaction, so this terminology should be avoided in this context.

Even in science, 'reaction' means something different in chemistry and physics: in the sense of Newtonian physics, during dissolving, when a water molecule attracts a sugar molecule (the 'action') there will be an equal and oppositely directed reaction as the sugar molecule attracts the water molecule. This is Newton's third law, which applies to quanticles as much as to planets. If a water molecule and a sugar molecule collide, the force applied by the sugar molecule on the water molecule is equal to the force applied by the water molecule on the sugar molecule.
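As a shorthand (my notation, not anything from the study under discussion), the third-law pairing for the interacting pair of molecules can be written:

```latex
% Newton's third law for an interacting water–sugar molecule pair:
% the two forces are equal in magnitude and opposite in direction
\vec{F}_{\text{water on sugar}} = -\,\vec{F}_{\text{sugar on water}}
```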

Read about learning difficulties with Newton's third law

So, 'sugar reacts with water' could be

  • a misunderstanding of dissolving (a genuine alternative conception);
  • a misuse of the chemical term 'reaction'; or
  • a use of the everyday term 'reaction' in a context where this should be avoided as it can be misunderstood

These are somewhat different problems for a teacher to address.

Molecules split off in physical changes and atoms split off in chemical changes

Ten of the children are said to have demonstrated the 'misconception' that molecules split off in physical changes and atoms split off in chemical changes. The authors claim that this misconception has not been reported in previous studies. But is this really a misconception? It may be a simplistic, and imprecise, statement – but I think, when I was teaching youngsters of this age, I would have been happy to find they held this notion – which at least seems to reflect an ability to imagine and visualise processes at the molecular level.

In dissolving or melting/boiling of simple molecular substances, molecules do indeed 'split off' in a sense, and in at least some chemical changes we can posit mechanisms that, in simple terms at least, involve atoms 'splitting off' from molecules.

So, again, this is another example of how this study is tantalising, without being very informative. The reader is not clear in what sense this is viewed as wrong, or how the conception was detected. (Again, for ten different students to specifically think that 'molecules split off in physical changes and atoms split off in chemical changes' makes one wonder if they volunteered this, or have simply agreed with the statement when having it presented to them).

In conclusion

The main thrust of Tarhan and colleagues' study was to report on an innovation using jigsaw learning (which unfortunately compared this with a form of pedagogy widely considered unsuitable for young children, so offering a limited basis for judging the effectiveness of the innovation). As part of the study they collected data to evaluate learning in the two conditions, and used this to identify misconceptions students demonstrated after being taught about physical and chemical changes. The researchers provide a long list of identified misconceptions – but it is not always obvious why these are considered misconceptions, or what the desired responses matching the teaching models were.

The researchers do not describe their data collection and analysis instruments and protocols in sufficient detail for readers to appreciate what they mean by their results – in particular, what it means here to have a misconception: e.g., to give a definitive statement in an interview, or just to select some response on a test as the answer that looked most promising at the time. Clearly we give much more weight to a notion that a learner presents in their own words as an explanation for some phenomenon than to the selection of one option from a menu of statements presented to them, which comes with no indication of their confidence in the selection made.

Of particular concern: either the children were asked questions in a second language in which they may not have been sufficiently fluent to fully understand the questions or compose clear responses; or none of the misconceptions reported is presented in its original form, and they have all been translated by someone (unspecified) of uncertain ability as a translator. (A suitably qualified translator would need to have high competence in both languages and a strong familiarity with the subject matter being translated.)

In the circumstances, Tarhan and colleagues' reported misconceptions are little more than intriguing. In science, the outcome of a study is only informative in the context of understanding exactly how the data were obtained, and how they have been processed. Without that, readers are asked to take a researcher's conclusions on faith, rather than be persuaded of them by a logical chain of argument.


p.s. For anyone who did not know, but wondered: s orbitals are not so-called because they are spherical: the designation derives from a label ('sharp') that was applied to some lines in atomic spectra.


Work cited

Notes


1 To my reading, the publication title 'Research in Science and Technological Education' seems to suggest the journal has two distinct and somewhat disconnected foci, that is:

Research in ( Science ) and ( Technological Education )

And it would be better (that is, more consistently) titled as

Research in Science and Technology Education

{Research in ( Science and Technology ) Education}

or

Research in Scientific and Technological Education

{Research in ( Scientific and Technological ) Education}

but, hey, I know I am pedantic.


2 The table (Table 1.2 in the source) was followed by the following text:

"The first criterion listed is the most fundamental and is generally clear cut as long as the substances present before and after the change are known. If a new substance has been produced, it will almost certainly have different melting and boiling temperatures than the original substance.

The other [criteria] are much more dubious. Some chemical changes involve a great deal of energy being released, such as the example of burning magnesium in air, or even require a considerable energy input, such as the example of the electrolysis of water. However, other reactions may not obviously involve large energy transfers, for example when the enthalpy and entropy changes more or less cancel each other out…. The rusting of iron is a chemical reaction, but usually occurs so slowly that it is not apparent whether the process involves much energy transfer ….

Generally speaking, physical changes are more readily reversible than chemical changes. However, again this is not a very definitive criterion. The idea that chemical reactions tend to either 'go' or not is a useful approximation, but there are many examples of reactions that can be readily reversed…. In principle, all reactions involve equilibria of forward and reverse reactions, and can be reversed by changing the conditions sufficiently. When hydrogen and oxygen are exploded, it takes a pedant to claim that there is also a process of water molecules being converted into oxygen and hydrogen molecules as the reaction proceeds, which means the reaction will continue for ever. Technically such a claim may be true, but for all practical purposes the explosion reflects a reaction that very quickly goes to completion.

One technique that can be used to separate iodine from sand is to warm the mixture gently in an evaporating basin, over which is placed an upturned beaker or funnel. The iodine will sublime – turn to vapour – before recondensing on the cold glass, separated from the sand. The same technique may be used if ammonium chloride is mixed with the sand. In both cases the separation is achieved because sand (which has a high melting temperature) is mixed with another substance in the solid state that is readily changed into a vapour by warming, and then readily recovered as a solid sample when the vapour is in contact with a colder surface. There are then reversible changes involved in both cases:

solid iodine ➝ iodine vapour

ammonium chloride ➝ ammonia + hydrogen chloride

In the first case, the process involves only changes of state: evaporation and condensation – collectively called sublimation. However the second case involves one substance (a salt) changing to two other substances. To a student seeing these changes demonstrated, there would be little basis to infer one is (usually considered as) a chemical change, but not the other. …

The final criterion in Table 1.2 concerns whether bonds are broken and made during a change, and this can only be meaningful for students once they have learnt about particle models of the submicroscopic structure of matter… In a chemical change, there will be the breaking of bonds that hold together the reactants and the formation of new bonds in the products. However, we have to be careful here what we mean by 'bond' …

When ice melts and water boils, 'intermolecular' forces between molecules are disrupted and this includes the breaking of hydrogen 'bonds'. However, when people talk about bond breaking in the context of chemical and physical changes, they tend to mean strong chemical bonds such as covalent, ionic and metallic bonds…

Yet even this is not clear cut. When metals evaporate or are boiled, metallic bonds are broken, although the vapour is not normally considered a different substance. When elements such as carbon and phosphorus undergo phase changes relating to allotropy, there is breaking, and forming, of bonds, which might suggest these changes are chemical and that the different forms of the same elements should be considered different substances. …

A particularly tricky case occurs when we dissolve materials to form solutions, especially materials with ionic bonding…. Dissolving tends to involve small energy changes, and to be readily reversible, and is generally considered a physical change. However, to dissolve an ionic compound such as sodium chloride (table salt), the strong ionic bonds between the sodium and chloride ions have to be overcome (and new bonds must form between the ions and solvent molecules). This would seem to suggest that dissolving can be a chemical change according to the criterion of bond breaking and formation (Table 1.2)."

(Taber, 2012b, pp.31-33)

How to avoid birds of prey

…by taking refuge in the neutral zone


Keith S. Taber


Fact is said to be stranger than (science) fiction

Regular viewers of Star Trek may be under the impression that it is dangerous to enter the neutral zone between the territories claimed by the United Federation of Planets and that of the Romulan Empire in case any incursion results in an attack by a Romulan Bird of Prey.


A bird of prey (with its prey?)
(Image by Thomas Marrone, used by permission – full-size version at the source site here)


However, back here on earth, it may be that entering the neutral zone is actually a way of avoiding an attack by a bird of prey.


A bird of prey (with its prey). Run rabbit, run rabbit…into the neutral zone
(Image by Ralph from Pixabay)

At least, according to the biologist Jakob von Uexküll:

"All the more remarkable is the observation that a neutral zone insinuates itself between the nest and the hunting ground of many raptors, a zone in which they seize no prey at all. Ornithologists must be correct in their assumption that this organisation of the environment was made by Nature in order to keep the raptors from seizing their own young. If, as they say, the nestling becomes a branchling and spends its days hopping from branch to branch near the parental nest, it would easily be in danger of being seized by mistake by its own parents. In this way, it can spend its days free of danger in the neutral zone of the protected area. The protected area is sought out by many songbirds as a nesting and incubation site where they can raise their young free of danger under the protection of the big predator."

Uexküll, 1934/2010

This is a very vivid presentation, but is phrased in a manner I thought deserved a little interrogation. It should, however, be pointed out that this extract is from the English edition of a book translated from the original German text (which itself was originally published almost a century ago).

A text with two authors?

Translation is a process of converting a text from one natural language to another, but each language is unique in its range of words and word meanings. That is, words that are often considered equivalent in different languages may have somewhat different ranges of application in those languages, and different nuances. Sometimes there is no precise translation for a word, and a single word in one language may have several near-equivalents in another (Taber, 2018). Translation therefore involves interpretation and creative choices.

So, translation is a skilled art form, and not simply something that can be done well by algorithmically applying suggestions in a bilingual dictionary. A good translation of an academic text not only requires someone fluent in both languages, but also someone having a sufficient understanding of the topic to translate in the best way to convey the intended meaning rather than simply using the most directly equivalent words. A sequence of the most equivalent individual words may not give the best translation of a sentence, and indeed when translating idioms may lead to a translation with no obvious meaning in the target language. It is worth bearing in mind that any translated text has (in effect) two authors, and reflects choices made by the translator as well as the original author.

Read about the challenges of translation in research writing

I am certainly not suggesting there is anything wrong with the translation of Uexküll's text, but it should be borne in mind that I am commenting on the English language version of the text.

A neutral zone insinuates itself

No it does not.

The language here is surely metaphorical, as it implies a deliberate action by the neutral zone. This seems to anthropomorphise the zone as if it is a human-like actor.

Read about anthropomorphism

The zone is a space. Moreover, it is not a space that is in any way discontinuous with the other space surrounding it – it is a human conception of a region of space with imagined boundaries. The zone is not a sentient agent, so it cannot insinuate itself.

Ornithologists must be correct

Science develops theoretical knowledge which is tested against empirical evidence, but is always (strictly) provisional in that it should be open to revisiting in the light of further evidence. Claims made in scientific discourse should therefore be suitably tentative. Perhaps

  • ornithologists seem to be correct in suggesting…, or
  • it seems likely that ornithologists were correct when they suggested…or even
  • at present our best understanding reflects the suggestions made by ornithologists that...

Yet a statement that ornithologists must be correct implies a level of certainty and absoluteness that seems inconsistent with a scientific claim.

Read about certainty in accounts of science

The environment was made by Nature in order to…

This phrasing seems to personify Nature as if 'she' were a person. Moreover, this ('…in order to…') suggests a purpose in nature. This kind of teleological claim is often considered inappropriate in science as it suggests natural events occur according to some pre-existing plan rather than unfolding according to natural laws. 1 If we consider that something happens to achieve a purpose, we seem not to need to look for a mechanism in terms of (for example) forces (or entropy or natural selection or…).

Read about personification of nature

Read about teleology in science

Being seized by mistake

We can understand that it would decrease the biological fitness of a raptor to indiscriminately treat its own offspring as potential food. There are situations when animals do eat their young, but clearly any species whose members committed considerable resources to raising a small number of young (e.g., nest building, egg incubation) but were also regular consumers of those young would be at a disadvantage when it came to long-term survival.

So, in terms of what increases a species' fitness, avoiding eating your own children would help. In terms of a good 'strategy' for leaving descendants, then, eating offspring would be a 'mistake'. But the scientific account is not that species, or individual members of a species, deliberately adopt a strategy to have generations of descendants: rather, behaviour that tends to lead to descendants is self-selecting.

Just because humans can reflect upon 'our children's children's children', we cannot assume that other species have even the vaguest notions of descendants. (And the state of the world – pollution, deforestation, habitat destruction, nuclear arsenals, soil degradation, unsustainable use of resources, etcetera – strongly suggests that even humans, who can conceptualise and potentially care about their descendants, have real trouble making that the basis for rational action.)


Even members of the very rare species capable of conceptualising a future for their offspring struggle to develop strategies taking the well-being of future generations into account.
(Image: cover art for 'To our children's children's children' {The Moody Blues}).


Natural selection is sometimes seen as merely a tautology, as it seems to be a theory that explains the flourishing of some species (and not others) on the grounds that they have the qualities needed to flourish! But this is to examine the wrong level of explanation. Natural selection explains in general terms why, in a particular environment, competing species will tend to survive and leave offspring to different extents. (Then, within that general framework, specific arguments have to be made about why particular features or behaviours contribute to differential fitness in that ecological context.)

Particular evolved behaviours may be labelled as 'strategies' by analogy with human strategies, but this is purely a metaphor: the animal is following instincts, or sometimes learned behaviours, but is not generally following a consciously considered plan intended to lead to some desired outcome in the longer term.

But a reader is likely to understand talk of a nestling being "in danger of being seized by mistake by its own parents" as the birds themselves making a mistake – which implies they have a deliberate plan to catch food, while excluding their own offspring from the food category, and so intend to avoid treating their offspring as prey. That is, it is implied that birds of prey are trying to avoid eating their own young, but get it wrong.

Yet, surely, birds are behaving instinctively, and not conceptualising their hunting as a means of acquiring nutrition in which they should discriminate between admissible prey and young relatives. Again this seems to be anthropomorphism, as it treats non-human animals as if they have mental experiences and thought processes akin to humans': "I did not mean to eat my child, I just failed to recognise her, and so made a mistake".

The protected area is sought out

Similarly, the songbirds also behave instinctively. They surely do not 'seek out' the 'protected' area around the nest of a bird of prey. There must be a sense in which they 'learn' (over many generations, perhaps) that they need not fear the raptors when they are near their own nests but it seems unlikely a songbird conceptualises any of this in a way that allows them to deliberately (that is, with deliberation) seek out the neutral zone.

In terms of natural selection, a songbird that has no fear of raptors, and so does not seek to avoid or hide or flee from them, would likely be at a disadvantage, and so tend to leave fewer offspring. Similarly, a songbird that usually avoided birds of prey, but nested in the neutral zone, would have a fitness advantage if other predators (small cats, say) kept clear of the area. The bird would not have to think "hey, I know raptors are generally a hazard, but I'll be okay here as I'm close enough to be in the zone where they do not hunt" – as long as the behaviour was heritable (and there was initially variation in the extent to which individuals behaved that way), natural selection would automatically lead to it becoming common behaviour.

(In principle, the bird could be responding to some cue in the environment that was a reliable but indirect indicator they were near a raptor nesting site. For example, perhaps building a nest very close to a location where there is a regular depositing of small bones on the ground gives an advantage, so this behaviour increases fitness and so is 'selected'.)

Under the protection of the big predator

Why are the songbirds under the protection of the raptors? Perhaps because other potential predators do not come into the neutral zone as they are vulnerable when approaching this area, even if they would be safe once inside. Again, if this is so, it surely does not reflect a conscious conceptualisation of the neutral zone.

For example, a cat that preys on small birds would experience a different 'umwelt' from the bird. A small songbird with a nest where it has young experiences the surrounding space differently to a cat (already a larger animal, so experiencing the world at a different scale) that ranges over a substantial territory. Perhaps the songbird perceives the neutral zone as a distinct space, whereas to the cat it is simply an undistinguished part of a wider area where the raptors are regularly seen.

Or, perhaps, for the smaller predator, the area around the neutral zone offers too little cover to risk venturing into the zone. (Again, this does not mean a conscious thinking process along the lines "I'd be safe once I was over there, but I'm not sure I'd make it there as I could easily be seen moving between here and there", but could just be an inherited tendency to keep under cover.)

The birds of prey themselves will not take the songbirds, so the smaller birds are protected from them in the zone, but if this is simply an evolved mechanism that prevents accidental 'infanticide' this can hardly be considered as other birds being under the protection of the birds of prey. Perhaps the birds of prey do scare away other predators – but, if so, this is in no sense a desired outcome of a deliberate policy adopted by the birds of prey because they want to protect their more vulnerable neighbours.

One could understand how the birds of prey might hypothetically have evolved behaviour of not preying on smaller birds (which might include their own offspring) near their nest, but would still attack smaller predators that might threaten their own chicks. In that scenario 2, the birds of prey might have indeed protected nearby songbirds from potential predators (even if only incidentally), but this does not apply if, as Uexküll suggests, "they seize no prey at all" in the neutral zone.

Again, then, 'under the protection of the big predator' seems to anthropomorphise the situation and treat the birds of prey as if they are acting deliberately to protect songbirds, and so this phrasing needs to be understood metaphorically.

Does language matter?

Uexküll's phrasing offers an engaging narrative which aids in the communication of the idea of the neutral zone to his readers. (He is skilled in making the unfamiliar familiar.) It is easier to understand an abstract idea if it seems to reflect a clear purpose or it can be understood in terms of human ways of thinking and acting, for example:

  • it is important to keep your children safe
  • it is good to look out for your neighbours

But we know that science learners readily accept explanations that are teleological and/or anthropomorphic, and that sometimes (at least) this acts as an impediment to learning scientific accounts based on natural principles and mechanisms.

Therefore it is useful for science teachers in particular to be alert to such language so they can at least check that learners are seeing beyond the metaphor and not mistaking a good story for a scientific account.


Work cited:

Notes:

1 Many people, including some scientists, do believe the world is unfolding according to a pre-ordained plan or scheme. This would normally be considered a matter of religious faith or at least a metaphysical commitment.

The usual stance taken in science ('methodological naturalism'), however, is that scientific explanations must be based on scientific principles, concepts, laws, theories, etcetera, and must not call upon any supernatural causes or explanations. This need not exclude a religious faith in some creator with a plan for the world, as long as the creator is seen to have set up the world to unfold through natural laws and mechanisms. That is, faith-based and scientific accounts and explanations may be considered to work at different levels and to be complementary.

Read more about the relationship between science and religion


2 That this does not seem to be the case might reflect how a flying bird perceives prey – if it has simply evolved to swoop upon and take any object in a certain size range {that we might explain as small enough to be taken, but not so small as not to repay the effort} that matches a certain class of movement pattern {that we might interpret as moving under its own direction and so being animate} then the option of avoiding smaller birds but taking other prey would not be available.

After all, studies show parent birds will try to feed even the simplest representations of a hatchling's open beak – suggesting they do not perceive the difference between their own children and crude models of an open bird mouth.


The general form of a chick's open mouth (as shown by these hatchlings) is enough to trigger feeding behaviour in adult birds.
(Image by Tania Van den Berghen from Pixabay )

Uexküll himself reported that,

"…a very young wild duck was brought to me; it followed me every step. I had the impression that it was my boots that attracted it so, since it also ran occasionally after a black dachshund. I concluded from this that a black moving object was sufficient to replace the image of its mother…"

Uexküll, 1934/2010

(A year later, Lorenz would publish his classic work on imprinting, which reported detailed studies of the same phenomenon.)


Falsifying research conclusions

You do not need to falsify your results if you are happy to draw conclusions contrary to the outcome of your data analysis.


Keith S. Taber


Li and colleagues claim that their innovation is successful in improving teaching quality and student learning: but their own data analysis does not support this.

I recently read a research study to evaluate a teaching innovation where the authors

  • presented their results,
  • reported the statistical test they had used to analyse their results,
  • acknowledged that the outcome of their experiment was negative (not statistically significant), then
  • stated their findings as having obtained a positive outcome, and
  • concluded their paper by arguing they had demonstrated their teaching innovation was effective.

Li, Ouyang, Xu and Zhang's (2022) paper in the Journal of Chemical Education contravenes the scientific norm that your conclusions should be consistent with the outcome of your data analysis.
(Magnified portions of this scheme are presented below)

And this was not in a paper in one of those predatory journals that I have criticised so often here – this was a study in a well regarded journal published by a learned scientific society!

The legal analogy

I have suggested (Taber, 2013) that writing up research can be understood in terms of a number of metaphoric roles: researchers need to

  • tell the story of their research;
  • teach readers about the unfamiliar aspects of their work;
  • make a case for the knowledge claims they make.

Three metaphors for writing-up research

All three aspects are important in making a paper accessible and useful to readers, but arguably the most important aspect is the 'legal' analogy: a research paper is an argument to make a claim for new public knowledge. A paper that does not make its case does not add anything of substance to the literature.

Imagine a criminal case where the prosecution seeks to make its argument at a pre-trial hearing:

"The police found fingerprints and D.N.A. evidence at the scene, which they believe were from the accused."

"Were these traces sent for forensic analysis?"

"Of course. The laboratory undertook the standard tests to identify who left these traces."

"And what did these analyses reveal?"

"Well according to the current standards that are widely accepted in the field, the laboratory was unable to find a definite match between the material collected at the scene, and fingerprints and a D.N.A. sample provided by the defendant."

"And what did the police conclude from these findings?"

"The police concluded that the fingerprints and D.N.A. evidence show that the accused was at the scene of the crime."

It seems unlikely that such a scenario has ever played out, at least in any democratic country where there is an independent judiciary, as the prosecution would be open to ridicule and it is quite likely the judge would have some comments about wasting court time. What would seem even more remarkable, however, would be if the judge decided on the basis of this presentation that there was a prima facie case to answer that should proceed to a full jury trial.

Yet in educational research, it seems parallel logic can be persuasive enough to get a paper published in a good peer-reviewed journal.

Testing an educational innovation

The paper was entitled 'Implementation of the Student-Centered Team-Based Learning Teaching Method in a Medicinal Chemistry Curriculum' (Li, Ouyang, Xu & Zhang, 2022), and it was published in the Journal of Chemical Education. 'J.Chem.Ed.' is a well-established, highly respected periodical that takes peer review seriously. It is published by a learned scientific society – the American Chemical Society.

That a study published in such a prestige outlet should have such a serious and obvious flaw is worrying. Of course, no matter how good editorial and peer review standards are, it is inevitable that sometimes work with serious flaws will get published, and it is easy to pick out the odd problematic paper and ignore the vast majority of quality work being published. But, I did think this was a blatant problem that should have been spotted.

Indeed, because I have a lot of respect for the Journal of Chemical Education I decided not to blog about it ("but that is what you are doing…?"; yes, but stick with me) and to take time to write a detailed letter to the journal setting out the problem in the hope this would be acknowledged and the published paper would not stand unchallenged in the literature. The journal declined to publish my letter although the referees seemed to generally accept the critique. This suggests to me that this was not just an isolated case of something slipping through – but a failure to appreciate the need for robust scientific standards in publishing educational research.

Read the letter submitted to the Journal of Chemical Education

A flawed paper does not imply worthless research

I am certainly not suggesting that there is no merit in Li, Ouyang, Xu and Zhang's work. Nor am I arguing that their work was not worth publishing in the journal. My argument is that Li and colleagues' paper draws an invalid conclusion, makes misleading statements inconsistent with the research data presented, and should not have been published in this form. These problems are pretty obvious, and should (I felt) have been spotted in peer review. The authors should have been asked to address these issues, and to follow normal scientific standards and norms, such that their conclusions follow from, rather than contradict, their results.

That is my take. Please read my reasoning below (and the original study if you have access to J.Chem.Ed.) and make up your own mind.

Li, Ouyang, Xu and Zhang report an innovation in a university course. They consider this to have been a successful innovation, and it may well have great merits. The core problem is that Li and colleagues claim that their innovation is successful in improving teaching quality and student learning: when their own data analysis does not support this.

The evidence for a successful innovation

There is much material in the paper on the nature of the innovation, and there is evidence about student responses to it. Here, I am only concerned with the failure of the paper to offer a logical chain of argument to support their knowledge claim that the teaching innovation improved student achievement.

There are (to my reading – please judge for yourself if you can access the paper) some slight ambiguities in some parts of the description of the collection and analysis of achievement data (see note 5 below), but the key indicator relied on by Li, Ouyang, Xu and Zhang is the average score achieved by students in four teaching groups, three of which experienced the teaching innovation (these are denoted collectively as 'the experimental group') and one group which did not (denoted as 'the control group', although there is no control of variables in the study 1). Each class comprised 40 students.

The study is not published open access, so I cannot reproduce the copyright figures from the paper here, but below I have drawn a graph of these key data:


Key results from Li et al, 2022: this data was the basis for claiming an effective teaching innovation.


It is on the basis of this set of results that Li and colleagues claim that "the average score showed a constant upward trend, and a steady increase was found". Surely, anyone interrogating these data might have pause to wonder if that is the most authentic description of the pattern of scores year on year.

Does anyone teaching in a university really think that assessment methods are good enough to produce average class scores that are meaningful to 3 or 4 significant figures? To a more reasonable level of precision, the nearest percentage point (which is presumably what these numbers are – that is not made explicit), the results were:


Cohort    Average class score
2017      80
2018      80
2019      80
2020      80

Average class scores (2 s.f.) year on year

When presented to a realistic level of precision, the obvious pattern is…no substantive change year on year!
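The arithmetic is easy to check. Here is a minimal sketch in Python, using the yearly means reported in the paper (79.8 in 2017, then increases of 0.11, 0.32 and 0.54 points):

```python
# Average class scores as reported by Li et al. (2022)
scores = {
    2017: 79.8,          # 'control' cohort
    2018: 79.8 + 0.11,   # first 'experimental' cohort
    2019: 79.8 + 0.32,
    2020: 79.8 + 0.54,
}

# Rounded to a realistic level of precision (nearest percentage point),
# every cohort average comes out the same
rounded = {year: round(score) for year, score in scores.items()}
print(rounded)  # every value is 80
```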

A truncated graph

In their paper, Li and colleagues do present a graph comparing the average results in 2017 with (not 2018, but) 2019 and 2020 – somewhat similar to the one I have drawn above, which should have made it very clear how little the scores varied between cohorts. However, Li and colleagues did not include the full range of possible scores on their axis, but only a small portion of that range – from 79.4 to 80.4.

This is a perfectly valid procedure, often used in science, and it is quite explicitly done (the axis is clearly marked), but it gives a visual impression of a large spread of scores which could be quite misleading. In effect, their Figure 4b includes just a sliver of my graph above, as shown below. If one takes the portion of the image below that is not greyed out, and stretches it to cover the full extent of the axis of a graph, that is what is presented in the published account.


In the paper in J.Chem.Ed., Li and colleagues (2022) truncate the scale on their average score axis to expand 1% of the full range (approximated above in the area not shaded over) into a whole graph as their Figure 4b. This gives a visual impression of widely varying scores (to anyone who does not read the axis labels).
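One way to see the effect of the truncation is to ask what fraction of the visible axis the spread in the data occupies under each scaling. A rough sketch (the 0.54-point spread and the 79.4-80.4 axis range are taken from the paper; the rest is simple arithmetic):

```python
# Spread between the extreme cohort means reported by Li et al. (2022)
data_spread = 80.34 - 79.8    # 0.54 points

full_axis = 100 - 0           # full range of possible scores
truncated_axis = 80.4 - 79.4  # the range shown in Figure 4b

# Fraction of the visible axis occupied by the data spread
frac_full = data_spread / full_axis            # about 0.5%
frac_truncated = data_spread / truncated_axis  # about 54%

print(f"visual exaggeration: x{frac_truncated / frac_full:.0f}")
```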


What might have caused those small variations?

If anyone does think that differences of a few tenths of a percent in average class scores are notable, and that this demonstrates increasing student achievement, then we might ask what causes this?

Li and colleagues seem to be convinced that the change in teaching approach caused the (very modest) increase in scores year on year. That would be possible. (Indeed, Li et al seem to be arguing that the very, very modest shift from 2017 to subsequent years was due to the change of teaching approach; but the not-quite-so-modest shifts from 2018 to 2019 to 2020 are due to developing teacher competence!) However, drawing that conclusion requires making a ceteris paribus assumption: that all other things are equal. That is, that any other relevant variables have been controlled.

Read about confounding variables

Another possibility however is simply that each year the teaching team are more familiar with the science, and have had more experience teaching it to groups at this level. That is quite reasonable and could explain why there might be a modest increase in student outcomes on a course year on year.

Non-equivalent groups of students?

However, a big assumption here is that each of the year groups can be considered to be intrinsically the same at the start of the course (and to have equivalent relevant experiences outside the focal course during the programme). Often in quasi-experimental studies (where randomisation to conditions is not possible 1) a pre-test is used to check for equivalence prior to the innovation: after all, if students are starting from different levels of background knowledge and understanding then they are likely to score differently at the end of a course – and no further explanation of any measured differences in course achievement need be sought.

Read about testing for initial equivalence

In experiments, you randomly assign the units of analysis (e.g., students) to the conditions, which gives some basis for at least comparing any differences in outcomes with the variations likely by chance. But this was not a true experiment as there was no randomisation – the comparisons are between successive year groups.

In Li and colleagues' study, the 40 students taking the class in 2017 are implicitly assumed equivalent to the 40 students taking the class in each of the years 2018-2020: but no evidence is presented to support this assumption. 3

Yet anyone who has taught the same course over a period of time knows that even when a course is unchanged and the entrance requirements stable, there are naturally variations from one year to the next. That is one of the challenges of educational research (Taber, 2019): you never can "take two identical students…two identical classes…two identical teachers…two identical institutions".

Novelty or expectation effects?

We would also have to ignore any difference introduced by the general effect of there being an innovation beyond the nature of the specific innovation (Taber, 2019). That is, students might be more attentive and motivated simply because this course does things differently to their other current courses and past courses. (Perhaps not, but it cannot be ruled out.)

The researchers are likely enthusiastic for, and had high expectations for, the innovation (so high that it seems to have biased their interpretation of the data and blinded them to the obvious problems with their argument) and much research shows that high expectation, in its own right, often influences outcomes.

Read about expectancy effects in studies

Equivalent examination questions and marking?

We also have to assume the assessment was entirely equivalent across the four years. 4 The scores were based on aggregating a number of components:

"The course score was calculated on a percentage basis: attendance (5%), preclass preview (10%), in-class group presentation (10%), postclass mind map (5%), unit tests (10%), midterm examination (20%), and final examination (40%)."

Li, et al, 2022, p.1858

This raises questions about the marking and the examinations:

  • Are the same test and examination questions used each year (that is not usually the case as students can acquire copies of past papers)?
  • If not, how were these instruments standardised to ensure they were not more difficult in some years than others?
  • How reliable is the marking? (Reliable meaning the same scores/mark would be assigned to the same work on a different occasion.)

These various issues do not appear to have been considered.

Change of assessment methodology?

The description above of how the students' course scores were calculated raises another problem. The 2017 cohort were taught by "direct instruction". This is not explained, as the authors presumably think we all know exactly what that is: I imagine lectures. By comparison, in the innovation (2018-2020 cohorts):

"The preclass stage of the SCTBL strategy is the distribution of the group preview task; each student in the group is responsible for a task point. The completion of the preview task stimulates students' learning motivation. The in-class stage is a team presentation (typically PowerPoint (PPT)), which promotes students' understanding of knowledge points. The postclass stage is the assignment of team homework and consolidation of knowledge points using a mind map. Mind maps allow an orderly sorting and summarization of the knowledge gathered in the class; they are conducive to connecting knowledge systems and play an important role in consolidating class knowledge."

Li, et al, 2022, p.1856, emphasis added.

Now the assessment of the preview tasks, the in-class group presentations, and the mind maps all contributed to the overall student scores (10%, 10%, 5% respectively). But these are parts of the innovative teaching strategy – they are (presumably) not part of 'direct instruction'. So, the description of how the student class scores were derived only applies to 2018-2020, and the methodology used in 2017 must have been different. (This is not discussed in the paper.) 5

A quarter of the score for the 'experimental' groups came from assessment components that could not have been part of the assessment regime applied to the 2017 cohort. At the very least, the tests and examinations must have been more heavily weighted in the 'control' group students' overall scores. This makes it very unlikely the scores from 2017 can be meaningfully compared directly with those of subsequent years: if the authors think otherwise, they should have presented persuasive evidence of equivalence.


Li and colleagues want to convince us that variations in average course scores can be assumed to be due to a change in teaching approach – even though there are other confounding variables.

So, groups that we cannot assume are equivalent are assessed in ways that we cannot assume to be equivalent and obtain nearly identical average levels of achievement. Despite that, Li and colleagues want to persuade us that the very modest differences in average scores between the 'control' and 'experimental' groups (which is actually larger between different 'experimental group' cohorts than between the 'control' group and the successive 'experimental' cohort) are large enough to be significant and demonstrate their teaching innovation improves student achievement.

Statistical inference

So, even if we thought shifts of less than a 1% average in class achievement were telling, there are no good reasons to assume they are down to the innovation rather than some other factor. But Li and colleagues use statistical tests to tell them whether differences between the 'control' and 'experimental' conditions are significant. They find – just what anyone looking at the graph above would expect – "there is no significant difference in average score" (p.1860).

The scientific convention in using such tests is that the choice of test, and confidence level (e.g., a probability of p<0.05 to be taken as significant), is determined in advance, and the researchers accept the outcomes of the analysis. There is a kind of contract involved – a decision to use a statistical test (chosen in advance as being a valid way of deciding the outcome of an experiment) is seen as a commitment to accept its outcomes. 2 This is a form of honesty in scientific work. Just as it is not acceptable to fabricate data, nor is it acceptable to ignore experimental outcomes when drawing conclusions from research.

Special pleading is allowed in mitigation (e.g., "although our results were non-significant, we think this was due to the small sample sizes, and suggest that further research should be undertaken with larger groups {and we are happy to do this if someone gives us a grant}"), but the scientist is not allowed simply to set aside the results of the analysis.
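The decision rule involved is mechanical, which is what makes ignoring it so stark. A sketch of the comparison, using the t statistics and critical value the authors themselves report:

```python
# t statistics reported by Li et al. (2022, pp.1858-1860) for the three
# 'experimental' cohorts against the 'control' cohort, and the critical
# value they quote for p = 0.05
t_values = [0.0663, 0.1930, 0.3279]
t_alpha = 2.024

# The conventional decision rule: reject the null hypothesis only if
# the test statistic exceeds the critical value
for t in t_values:
    verdict = "significant" if abs(t) > t_alpha else "not significant"
    print(f"t = {t}: {verdict}")
# All three comparisons come out 'not significant' - as the paper itself reports
```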


Li and colleagues found no significant difference between the two conditions, yet that did not stop them claiming, and the Journal of Chemical Education publishing, a conclusion that the new teaching approach improved student achievement!

Yet setting aside the results of their analysis is what Li and colleagues do. They carry out an analysis, then simply ignore the findings, and conclude the opposite:

"To conclude, our results suggest that the SCTBL method is an effective way to improve teaching quality and student achievement."

Li, et al, 2022, p.1861

It was this complete disregard of scientific values, rather than the more common failure to appreciate that they were not comparing like with like, that I found really shocking – and led to me writing a formal letter to the journal. Not so much surprise that researchers might do this (I know how intoxicating research can be, and how easy it is to become convinced in one's ideas) but that the peer reviewers for the Journal of Chemical Education did not make the firmest recommendation to the editor that this manuscript could NOT be published until it was corrected so that the conclusion was consistent with the findings.

This seems a very stark failure of peer review, and allows a paper to appear in the literature that presents a conclusion totally unsupported by the evidence available and the analysis undertaken. This also means that Li, Ouyang, Xu and Zhang now have a publication on their academic records that any careful reader can see is critically flawed – something that could have been avoided had peer reviewers:

  • used their common sense to appreciate that variations in class average scores from year to year between 79.8 and 80.3 could not possibly be seen as sufficient to indicate a difference in the effectiveness of teaching approaches;
  • recommended that the authors follow the usual scientific norms and adopt the reasonable scholarly value position that the conclusion of your research should follow from, and not contradict, the results of your data analysis.


Work cited:

Notes

1 Strictly the 2017 cohort has the role of a comparison group, but NOT a control group as there was no randomisation or control of variables, so this was not a true experiment (but a 'quasi-experiment'). However, for clarity, I am here using the original authors' term 'control group'.

Read about experimental research design


2 Some journals are now asking researchers to submit their research designs and protocols to peer review BEFORE starting the research. This prevents wasted effort on work that is flawed in design. Journals will publish a report of the research carried out according to an accepted design – as long as the researchers have kept to their research plans (or only made changes deemed necessary and acceptable by the journal). This prevents researchers seeking to change features of the research because it is not giving the expected findings and means that negative results as well as positive results do get published.


3 'Implicitly' assumed as nowhere do the authors state that they think the classes all start as equivalent – but if they do not assume this then their argument has no logic.

Without this assumption, their argument is like claiming that growing conditions for tree development are better at the front of a house than at the back because on average the trees at the front are taller – even though fast-growing mature trees were planted at the front and slow-growing saplings at the back.


4 From my days working with new teachers, a common rookie mistake was assuming that one could tell a teaching innovation was successful because students achieved an average score of 63% on the (say, acids) module taught by the new method when the same class only averaged 46% on the previous (say, electromagnetism) module. Graduate scientists would look at me with genuine surprise when I asked how they knew the two tests were of comparable difficulty!

Read about why natural scientists tend to make poor social scientists


5 In my (rejected) letter to the Journal of Chemical Education I acknowledged some ambiguity in the paper's discussion of the results. Li and colleagues write:

"The average scores of undergraduates majoring in pharmaceutical engineering in the control group and the experimental group were calculated, and the results are shown in Figure 4b. Statistical significance testing was conducted on the exam scores year to year. The average score for the pharmaceutical engineering class was 79.8 points in 2017 (control group). When SCTBL was implemented for the first time in 2018, there was a slight improvement in the average score (i.e., an increase of 0.11 points, not shown in Figure 4b). However, by 2019 and 2020, the average score increased by 0.32 points and 0.54 points, respectively, with an obvious improvement trend. We used a t test to test whether the SCTBL method can create any significant difference in grades among control groups and the experimental group. The calculation results are shown as follows: t1 = 0.0663, t2 = 0.1930, t3 =0.3279 (t1 <t2 <t3 <t𝛼, t𝛼 =2.024, p>0.05), indicating that there is no significant difference in average score. After three years of continuous implementation of SCTBL, the average score showed a constant upward trend, and a steady increase was found. The SCTBL method brought about improvement in the class average, which provides evidence for its effectiveness in medicinal chemistry."

Li, et al, 2022, p.1858-1860, emphasis added

This appears to refer to three distinct measures:

  • average scores (produced by weighted summations of various assessment components, as discussed above)
  • exam scores (perhaps just the "midterm examination…and final examination", or perhaps just the final examination?)
  • grades

Formal grades are not discussed in the paper (the word is only used in this one place), although the authors do refer to categorising students into descriptive classes ('levels') according to scores on 'assessments', and may see these as grades:

"Assessments have been divided into five levels: disqualified (below 60), qualified (60-69), medium (70-79), good (80-89), and excellent (90 and above)."

Li, et al, 2022, p.1856, emphasis added

In the longer extract above, the reference to testing difference in "grades" is followed by reporting the outcome of the test for "average score":

"We used a t test to test …grades …The calculation results … there is no significant difference in average score"

As Student's t-test was used, it seems unlikely that the assignment of students to grades could have been tested. That would surely have needed something like the Chi-squared statistic to test categorical data – looking for an association between (i) the distributions of the number of students in the different cells 'disqualified', 'qualified', 'medium', 'good' and 'excellent'; and (ii) treatment group.
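For what it is worth, a chi-squared test on such grade categories would look quite different from a t-test on mean scores. A sketch with entirely hypothetical counts (the paper reports no grade distributions), computing the statistic from scratch:

```python
# Hypothetical contingency table: number of students in each grade band.
# Rows: cohorts (each of 40 students); columns: 'medium', 'good', 'excellent'.
observed = [
    [6, 28, 6],   # hypothetical 2017 'control' cohort
    [5, 29, 6],   # hypothetical 2018 'experimental' cohort
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-squared statistic: sum of (observed - expected)^2 / expected,
# where expected counts assume no association between cohort and grade
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (obs - expected) ** 2 / expected

print(round(chi_sq, 3))  # compare against the chi-squared critical value, df = 2
```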

Presumably, then, the statistical testing was applied to the average course scores shown in the graph above. This also makes sense because the classification into descriptive classes loses some of the detail in the data, and there is no obvious reason why the researchers would deliberately choose to test 'reduced' data rather than the full data set with the greatest resolution.


Delusions of educational impact

A 'peer-reviewed' study claims to improve academic performance by purifying the souls of students suffering from hallucinations


Keith S. Taber


The research design is completely inadequate…the whole paper is confused…the methodology seems incongruous…there is an inconsistency…nowhere is the population of interest actually identified…No explanation of the discrepancy is provided…results of this analysis are not reported…the 'interview' technique used in the study is highly inadequate…There is a conceptual problem here…neither the validity nor reliability can be judged…the statistic could not apply…the result is not reported…approach is completely inappropriate…these tables are not consistent…the evidence is inconclusive…no evidence to demonstrate the assumed mechanism…totally unsupported claims…confusion of recommendations with findings…unwarranted generalisation…the analysis that is provided is useless…the research design is simply inadequate…no control condition…such a conclusion is irresponsible

Some issues missed in peer review for a paper in the European Journal of Education and Pedagogy

An invitation to publish without regard to quality?

I received an email from an open-access journal called the European Journal of Education and Pedagogy, with the subject heading 'Publish Fast and Pay Less' which immediately triggered the thought "another predatory journal?" Predatory journals publish submissions for a fee, but do not offer the editorial and production standards expected of serious research journals. In particular, they publish material which clearly falls short of rigorous research despite usually claiming to engage in peer review.

A peer reviewed journal?

Checking out the website I found the usual assurances that the journal used rigorous peer review as:

"The process of reviewing is considered critical to establishing a reliable body of research and knowledge. The review process aims to make authors meet the standards of their discipline, and of science in general.

We use a double-blind system for peer-reviewing; both reviewers and authors' identities remain anonymous to each other. The paper will be peer-reviewed by two or three experts; one is an editorial staff and the other two are external reviewers."

https://www.ej-edu.org/index.php/ejedu/about

Peer review is critical to the scientific process. Work is only published in (serious) research journals when it has been scrutinised by experts in the relevant field, and any issues raised responded to in terms of revisions sufficient to satisfy the editor.

I could not find who the editor(-in-chief) was, but the 'editorial team' of European Journal of Education and Pedagogy were listed as

  • Bea Tomsic Amon, University of Ljubljana, Slovenia
  • Chunfang Zhou, University of Southern Denmark, Denmark
  • Gabriel Julien, University of Sheffield, UK
  • Intakhab Khan, King Abdulaziz University, Saudi Arabia
  • Mustafa Kayıhan Erbaş, Aksaray University, Turkey
  • Panagiotis J. Stamatis, University of the Aegean, Greece

I decided to look up the editor based in England where I am also based but could not find a web presence for him at the University of Sheffield. Using the ORCID (Open Researcher and Contributor ID) provided on the journal website I found his ORCID biography places him at the University of the West Indies and makes no mention of Sheffield.

If the European Journal of Education and Pedagogy is organised like a serious research journal, then each submission is handled by one of this editorial team. However the reference to "editorial staff" might well imply that, like some other predatory journals I have been approached by (e.g., Are you still with us, Doctor Wu?), the editorial work is actually carried out by office staff, not qualified experts in the field.

That would certainly help explain the publication, in this 'peer-reviewed research journal', of the first paper that piqued my interest enough to motivate me to access and read the text.


The Effects of Using the Tazkiyatun Nafs Module on the Academic Achievement of Students with Hallucinations

The abstract of the paper published in what claims to be a peer-reviewed research journal

The paper initially attracted my attention because it seemed to be about the treatment of a medical condition, so I wondered what it was doing in an education journal. Yet the paper seemed also to be about an intervention to improve academic performance. As I read the paper, I found a number of flaws and issues (some very obvious, some quite serious) that should have been spotted by any qualified reviewer or editor, and which should have indicated that publication be deferred until these matters were satisfactorily addressed.

This is especially worrying as this paper makes claims relating to the effective treatment of a symptom of potentially serious, even critical, medical conditions through religious education ("a spiritual approach", p.50): claims that might encourage sufferers to defer seeking medical diagnosis and treatment. Moreover, these are claims that are not supported by any evidence presented in this paper that the editor of the European Journal of Education and Pedagogy decided was suitable for publication.


An overview of what is demonstrated, and what is claimed, in the study.

Limitations of peer review

Peer review is not a perfect process: it relies on busy human beings spending time on additional (unpaid) work, and it is only effective if suitable experts can be found that fit with, and are prepared to review, a submission. It is also generally more challenging in the social sciences than in the natural sciences. 1

That said, one sometimes finds papers published in predatory journals where one would expect any intelligent person with a basic education to notice problems without needing any specialist knowledge at all. The study I discuss here is a case in point.

Purpose of the study

Under the heading 'research objectives', the reader is told,

"In general, this journal [article?] attempts to review the construction and testing of Tazkiyatun Nafs [a Soul Purification intervention] to overcome the problem of hallucinatory disorders in student learning in secondary schools. The general objective of this study is to identify the symptoms of hallucinations caused by subtle beings such as jinn and devils among students who are the cause of disruption in learning as well as find solutions to these problems.

Meanwhile, the specific objective of this study is to determine the effect of the use of Tazkiyatun Nafs module on the academic achievement of students with hallucinations.

To achieve the aims and objectives of the study, the researcher will get answers to the following research questions [sic]:

Is it possible to determine the effect of the use of the Tazkiyatun Nafs module on the academic achievement of students with hallucinations?"

Awang, 2022, p.42

I think I can save readers a lot of time regarding the research question by suggesting that, in this study at least, the answer is no – if only because the research design is completely inadequate to answer the research question. (I should point out that the author comes to the opposite conclusion: e.g., "the approach taken in this study using the Tazkiyatun Nafs module is very suitable for overcoming the problem of this hallucinatory disorder", p.49.)

Indeed, the whole paper is confused in terms of what it is setting out to do, what it actually reports, and what might be concluded. As one example, the general objective of identifying "the symptoms of hallucinations caused by subtle beings such as jinn and devils" (but surely, the hallucinations are the symptoms here?) seems to have been forgotten, or, at least, does not seem to be addressed in the paper. 2


The study assumes that hallucinations are caused by subtle beings such as jinn and devils possessing the students.
(Image by Tünde from Pixabay)

Methodology

So, this seems to be an intervention study.

  • Some students suffer from hallucinations.
  • This is detrimental to their education.
  • It is hypothesised that the hallucinations are caused by supernatural spirits ("subtle beings that lead to hallucinations"), so, a soul purification module might counter this detriment;
  • if so, sufferers engaging with the soul purification module should improve their academic performance;
  • and so the effect of the module is being tested in the study.

Thus we have a kind of experimental study?

No, not according to the author. Indeed, the study only reports data from a small number of unrepresentative individuals with no controls,

"The study design is a case study design that is a qualitative study in nature. This study uses a case study design that is a study that will apply treatment to the study subject to determine the effectiveness of the use of the planned modules and study variables measured many times to obtain accurate and original study results. This study was conducted on hallucination disorders [students suffering from hallucination disorders?] to determine the effectiveness of the Tazkiyatun Nafs module in terms of aspects of student academic achievement."

Awang, 2022, p.42

Case study?

So, the author sees this as a case study. Research methodologies are better understood as clusters of similar approaches rather than unitary categories – but case study is generally seen as naturalistic, rather than involving an intervention by an external researcher. So, case study seems incongruous here. Case study involves the detailed exploration of an instance (of something of interest – a lesson, a school, a course of study, a textbook, …) reported with 'thick description'.

Read about the characteristics of case study research

The case is usually a complex phenomenon which is embedded within a context from which it cannot readily be untangled (for example, a lesson always takes place within a wider context of a teacher working over time with a class on a course of study, within a curricular, institutional, and wider cultural context, all of which influence the nature of the specific lesson). So, due to the complex and embedded nature of cases, they are all unique.

"a case study is a study that is full of thoroughness and complex to know and understand an issue or case studied…this case study is used to gain a deep understanding of an issue or situation in depth and to understand the situation of the people who experience it"

Awang, 2022, p.42

A case is usually selected either because that case is of special importance to the researcher (an intrinsic case study – e.g., I studied this school because it is the one I was working in) or because we hope this (unique) case can tell us something about similar (but certainly not identical) other (also unique) cases. In the latter case [sic], an instrumental case study, we are always limited by the extent we might expect to be able to generalise beyond the case.

This limited generalisation might suggest we should not work with a single case, but rather look for a suitably representative sample of all cases: but we sometimes choose case study because the complexity of the phenomena suggests we need to use extensive, detailed data collection and analyses to understand the complexity and subtlety of any case. That is (i.e., the compromise we choose is), we decide we will look at one case in depth because that will at least give us insight into the case, whereas a survey of many cases will inevitably be too superficial to offer any useful insights.

So how does Awang select the case for this case study?

"This study is a case study of hallucinatory disorders. Therefore, the technique of purposive sampling (purposive sampling [sic]) is chosen so that the selection of the sample can really give a true picture of the information to be explored ….

Among the important steps in a research study is the identification of populations and samples. The large group in which the sample is selected is termed the population. A sample is a small number of the population identified and made the respondents of the study. A case or sample of n = 1 was once used to define a patient with a disease, an object or concept, a jury decision, a community, or a country, a case study involves the collection of data from only one research participant…"

Awang, 2022, p.42

Of course, a case study of "a community, or a country" – or of a school, or a lesson, or a professional development programme, or a school leadership team, or a homework policy, or an enrichment activity, or … – would almost certainly be inadequate if it was limited to "the collection of data from only one research participant"!

I do not think this study actually is "a case study of hallucinatory disorders [sic]". Leaving aside the shift from singular ("a case study") to plural ("disorders"), the research does not investigate a/some hallucinatory disorders, but the effect of a soul purification module on academic performance. (Actually, spoiler alert 😉, it does not actually investigate the effect of a soul purification module on academic performance either, but the author seems to think it does.)

If this is a case study, there should be the selection of a case, not a sample. Sometimes we do sample within a case in case study, but only from those identified as part of the case. (For example, if the case was a year group in a school, we may not have resources to interact in depth with several hundred different students.) Perhaps this is pedantry, as the reader likely knows what Awang meant by 'sample' in the paper – but semantics is important in research writing: a sample is chosen to represent a population, whereas the choice of case study is an acknowledgement that generalisation back to a population is not being claimed.

However, if "among the important steps in a research study is the identification of populations" then it is odd that nowhere in the paper is the population of interest actually specified!

Things slip our minds. Perhaps Awang intended to define the population, forgot, and then missed this when checking the text – but, hey, that is just the kind of thing the reviewers and editor are meant to notice! Otherwise this looks very much like including material from standard research texts to pay lip-service to the idea that research design needs to be principled, but without really appreciating what the phrases used actually mean. This impression is also given by the descriptions of how data (for example, from interviews) were analysed – but which are not reflected at all in the results section of the paper. (I am not accusing Awang of this, but because of the poor standard of peer review not raising the question, the author is left vulnerable to such an evaluation.)

The only one research participant?

So, what do we know about the "case or sample of n = 1 ", the "only one research participant" in this study?

"The actual respondents in this case study related to hallucinatory disorders were five high school students. The supportive respondents in the case study related to hallucination disorders were five counseling teachers and five parents or guardians of students who were the actual respondents."

Awang, 2022, p.42

It is certainly not impossible that a case could comprise a group of five people – as long as those five make up a naturally bounded group – that is, a group that a reasonable person would recognise as existing as a coherent entity as they clearly had something in common (they were in the same school class, for example; they were attending the same group therapy session, perhaps; they were a friendship group; they were members of the same extended family diagnosed with hallucinatory disorders…something!). There is no indication here of how these five make up a case.

The identification of the participants as a case might have made sense had the participants collectively undertaken the module as a group, but the reader is told: "This study is in the form of a case study. Each practice and activity in the module are done individually" (p.50). Another justification could have been if the module had been offered in one school, and these five participants were the students enrolled in the programme at that time, but as "analysis of the respondents' academic performance was conducted after the academic data of all respondents were obtained from the respective respondent's school" (p.45) it seems they did not attend a single school.

The results tables and reports in the text refer to "respondent 1" to "respondent 4". In case study, an approach which recognises the individuality and inherent value of the particular case, we would usually assign assumed names to research participants, not numbers. But if we are going to use numbers, should there not be a respondent 5?

The other one research participant?

It seems that there is something odd here.

Both the passage above and the abstract refer to five respondents. The results report on four. So what is going on? No explanation of the discrepancy is provided. Perhaps:

  • There only ever were four participants, and the author made a mistake in counting.
  • There only ever were four participants, and the author made a typographical mistake (well, strictly, six typographical mistakes) in drafting the paper, and then missed this in checking the manuscript.
  • There were five respondents and the author forgot to include data on respondent 5 purely by accident.
  • There were five respondents, but the author decided not to report on the fifth deliberately for a reason that is not revealed (perhaps the results did not fit with the desired outcome?)

The significant point is not that there is an inconsistency but that this error was missed by peer reviewers and the editor – if there ever was any genuine peer review. This is the kind of mistake that a school child could spot – so, how is it possible that 'expert reviewers' and 'editorial staff' either did not notice it, or did not think it important enough to query?

Research instruments

Another section of the paper reports the instrumentation used in the study.

"The research instruments for this study were Takziyatun Nafs modules, interview questions, and academic document analysis. All these instruments were prepared by the researcher and tested for validity and reliability before being administered to the selected study sample [sic, case?]."

Awang, 2022, p.42

Of course, it is important to test instruments for validity and reliability (or perhaps authenticity and trustworthiness when collecting qualitative data). But it is also important

  • to tell the reader how you did this
  • to report the outcomes

which seems to be missing (apart from in regard to part of the implemented module – see below). That is, the reader of a research study wants evidence not simply promises. Simply telling readers you did this is a bit like meeting a stranger who tells you that you can trust them because they (i.e., say that they) are honest.

Later the reader is told that

"Semi-structured interview questions will be [sic, not 'were'?] developed and validated for the purpose of identifying the causes and effects of hallucinations among these secondary school students…

…this interview process will be [sic, not 'was'] conducted continuously [sic!] with respondents to get a clear and specific picture of the problem of hallucinations and to find the best solution to overcome this disorder using Islamic medical approaches that have been planned in this study"

Awang, 2022, pp.43-44

At the very least, this seems to confuse the plan for the research with a report of what was done. (But again, apparently, the reviewers and editorial staff did not think this needed addressing.) This is also confusing as it is not clear how this aspect of the study relates to the intervention. Were the interviews carried out before the intervention, to help inform the design of the modules? (Presumably not, as the modules had already been "tested for validity and reliability before being administered to the selected study sample".) Perhaps there are clear and simple answers to such questions – but the reader will not know, because the reviewers and editor did not seem to feel they needed to be posed.

If "Interviews are the main research instrument in this study" (p.43), then one would expect to see examples of the interview schedules – but these are not presented. The paper reports a complex process for analysing interview data, but this is not reflected in the findings reported. The reader is told that the six-stage process leads to the identification and refinement of main and sub-categories. Yet these categories are not reported in the paper. (But, again, peer reviewers and the editor did not apparently raise this as something to be corrected.) More generally, "data analysis used thematic analysis methods" (p.44), so why is there no analysis presented in terms of themes? The results of this analysis are simply not reported.

The reader is told that

"This interview method…aims to determine the respondents' perspectives, as well as look at the respondents' thoughts on their views on the issues studied in this study."

Awang, 2022, p.44

But there is no discussion of participants' perspectives and views in the findings of the study. 2 Did the peer reviewers and editor not think this needed addressing before publication?

Even more significantly, in a qualitative study where interviews are supposedly the main research instrument, one would expect to see extracts from the interviews presented as part of the findings to support and exemplify claims being made: yet, there are none. (Did this not strike the peer reviewers and editor as odd: presumably they are familiar with the norms of qualitative research?)

The only quotation from the qualitative data (in this 'qualitative' study) I can find appears in the implications section of the paper:

"Are you aware of the importance of education to you? Realize. Is that lesson really important? Important. The success of the student depends on the lessons in school right or not? That's right"

Respondent 3: Awang, 2022, p.49

This seems a little bizarre, if we accept this is, as reported, an utterance from one of the students, Respondent 3. It becomes more sensible if this is actually condensed dialogue:

"Are you aware of the importance of education to you?"

"Realize."

"Is that lesson really important?"

"Important."

"The success of the student depends on the lessons in school right or not?"

"That's right"

It seems the peer review process did not lead to suggesting that the material should be formatted according to the norms for presenting dialogue in scholarly texts by indicating turns. In any case, if that is typical of the 'interview' technique used in the study then it is highly inadequate, as clearly the interviewer is leading the respondent, and this is more an example of indoctrination than open-ended enquiry.

Random sampling of data

Completely incongruous with the description of the purposeful selection of the participants for a case study is the account of how the assessment data was selected for analysis:

"The process of analysis of student achievement documents is carried out randomly by taking the results of current examinations that have passed such as the initial examination of the current year or the year before which is closest to the time of the study."

Awang, 2022, p.44

Did the peer reviewers or editor not question the use of the term 'random' here? It is unclear what 'random' is meant to convey, but clearly if the analysis really was based on randomly selected data, that would undermine the results.

Validating the soul purification module

There is also a conceptual problem here. The Takziyatun Nafs modules are the intervention materials (part of what is being studied) – so they cannot also be research instruments (used to study them). Surely, if the Takziyatun Nafs modules had been shown to be valid and reliable before carrying out the reported study, as suggested here, then the study would not be needed to evaluate their effectiveness. But, presumably, expert peer reviewers (if there really were any) did not see an issue here.

The reliability of the intervention module

The Takziyatun Nafs modules had three components, and the author reports the second of the three was subjected to tests of validity and reliability. It seems that Awang thinks that this demonstrates the validity and reliability of the complete intervention,

"The second part of this module will go through [sic] the process of obtaining the validity and reliability of the module. Proses [sic] to obtain this validity, a questionnaire was constructed to test the validity of this module. The appointed specialists are psychologists, modern physicians (psychiatrists), religious specialists, and alternative medicine specialists. The validity of the module is identified from the aspects of content, sessions, and activities of the Tazkiyatun Nafs module. While to obtain the value of the reliability coefficient, Cronbach's alpha coefficient method was used. To obtain this Cronbach's alpha coefficient, a pilot test was conducted on 50 students who were randomly selected to test the reliability of this module to be conducted."

Awang, 2022, pp.43-44

Now, to unpack this, it may be helpful to briefly outline what the intervention involved (as the paper is open access, anyone can read the full details in the report).


From the MGM film 'A Night at the Opera' (1935): "The introduction of the module will elaborate on the introduction, rationale, and objectives of this module introduced"

The description does not start off very helpfully ("The introduction of the module will elaborate on the introduction, rationale, and objectives of this module introduced" (p.43) put me in mind of the Marx brothers: "The party of the first part shall be known in this contract as the party of the first part"), but some key points are,

"the Tazkiyatun Nafs module was constructed to purify the heart of each respondent leading to the healing of hallucinatory disorders. This liver purification process is done in stages…

"the process of cleansing the patient's soul will be done …all the subtle beings in the patient will be expelled and cleaned and the remnants of the subtle beings in the patient will be removed and washed…

The second process is the process of strengthening and the process of purification of the soul or heart of the patient …All the mazmumah (evil qualities) that are in the heart must be discarded…

The third process is the process of enrichment and the process of distillation of the heart and the practices performed. In this process, there will be an evaluation of the practices performed by the patient as well as the process to ensure that the patient is always clean from all the disturbances and disturbances [sic] of subtle beings to ensure that students will always be healthy and clean from such disturbances…

Awang, 2022, p.45, p.43

Quite how this process of exorcising and distilling and cleansing will occur is not entirely clear (and if the soul is equated with the heart, how is the liver involved?), but it seems to involve reflection and prayer and contemplation of scripture – certainly a very personal and therapeutic process.

And yet its validity and reliability was tested by giving a questionnaire to 50 students randomly selected (from the unspecified population, presumably)? No information is given on how a random selection was made (Taber, 2013) – which allows a reader to be very sceptical that this actually was a random sample from the (un?)identified population, and not just an arbitrary sample of 50 students. (So, that is twice the word 'random' is used in the paper when it seems inappropriate.)

It hardly matters here, as clearly neither the validity nor the reliability of a spiritual therapy can be judged from a questionnaire (especially when administered to people who have never undertaken the therapy). In any case, the "reliability coefficient" obtained from an administration of a questionnaire ONLY applies to that sample on that occasion. So, the statistic could not apply to the four participants in the study. And, in any case, the result is not reported, so the reader has no idea what the value of Cronbach's alpha was (but then, this was described as a qualitative study!)

Moreover, Cronbach's alpha only indicates the internal coherence of the items on a scale (Taber, 2019): so, it only indicates whether the set of questions included in the questionnaire seem to be accessing the same underlying construct in motivating the responses of those surveyed across the set of items. It gives no information about the reliability of the instrument (i.e., whether it would give the same results on another occasion).
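To make concrete what Cronbach's alpha does (and does not) measure, here is a minimal illustrative sketch in Python. The function and the data are my own hypothetical examples, not drawn from the paper (which, remember, reports no alpha value at all):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of items on the scale.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()  # sample variance per item
    total_variance = scores.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical data: four respondents answering three items that track
# one another closely produce a high alpha (near 1).
consistent = [[1, 2, 1], [2, 2, 2], [3, 4, 3], [4, 4, 4]]
print(cronbach_alpha(consistent))
```

Note that a high alpha here only tells us that these items co-varied for this sample on this one administration; it says nothing about whether an intervention 'works', and nothing about how the instrument would behave with different respondents on another occasion.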

This approach to testing validity and reliability is then completely inappropriate and unhelpful. So, even if the outcomes of the testing had been reported (and they are not) they would not offer any relevant evidence. Yet it seems that peer reviewers and editor did not think to question why this section was included in the paper.

Ethical issues

A study of this kind raises ethical issues. It may well be that the research was carried out in an entirely proper and ethical manner, but it is usual in studies with human participants ('human subjects') to make this clear in the published report (Taber, 2014b). A standard issue is whether the participants gave voluntary, informed, consent. This would mean that they were given sufficient information about the study at the outset to be able to decide if they wished to participate, and were under no undue pressure to do so. The 'respondents' were school students: if they were considered minors in the research context (and oddly for a 'case study' such basic details as age and gender are not reported) then parental permission would also be needed, again subject to sufficient briefing and no duress.

However, in this specific research there are also further issues due to the nature of the study. The participants were subject to medical disorders, so how did the researcher obtain information about, and access to, the students without medical confidentiality being broken? Who were the 'gatekeepers' who provided access to the children and their personal data? The researcher also obtained assessment data "from the class teacher or from the Student Affairs section of the student's school" (p.44), so it is important to know that students (and parents/guardians) consented to this. Again, peer review does not seem to have identified this as an issue to address before publication.

There is also the major underlying question about the ethics of a study when recognising that these students were (or could be, as details are not provided) suffering from serious medical conditions, but employing religious education as a treatment ("This method of treatment is to help respondents who suffer from hallucinations caused by demons or subtle beings", p.44). Part of the theoretical framework underpinning the study is the assumption that what is being addressed is "the problem of hallucinations caused by the presence of ethereal beings…" (p.43), yet it is also acknowledged that,

"Hallucinatory disorders in learning that will be emphasized in this study are due to several problems that have been identified in several schools in Malaysia. Such disorders are psychological, environmental, cultural, and sociological disorders. Psychological disorders such as hallucinatory disorders can lead to a more critical effect of bringing a person prone to Schizophrenia. Psychological disorders such as emotional disorders and psychiatric disorders. …Among the causes of emotional disorders among students are the school environment, events in the family, family influence, peer influence, teacher actions, and others."

Awang, 2022, p.41

There seem to be three ways of understanding this apparent discrepancy, which I might gloss:

  1. there are many causes of conditions that involve hallucinations, including, but not only, possession by evil or mischievous spirits;
  2. the conditions that lead to young people having hallucinations may be understood at two complementary levels, at a spiritual level in terms of a need for inner cleansing and exorcising of subtle beings, and in terms of organic disease or conditions triggered by, for example, social and psychological factors;
  3. in the introduction the author has relied on various academic sources to discuss the nature of the phenomenon of students having hallucinations, but he actually has a working assumption that is completely different: hallucinations are due to the presence of jinn or other spirits.

I do not think it is clear which of these positions is being taken by the study's author.

  1. In the first case it would be necessary to identify which causes are present in potential respondents and only recruit those suffering possession for this study (which does not seem to have been done);
  2. In the second case, spiritual treatment would need to complement medical intervention (which would completely undermine the validity of the study as medical treatments for the underlying causes of hallucinations are likely to be the cause of hallucinations ceasing, not the tested intervention);
  3. The third position is clearly problematic in terms of academic scholarship as it is either completely incompetent or deliberately disregards academic norms that require the design of a study to reflect the conceptual framework set out to motivate it.

So, was this tested intervention implemented instead of or alongside formal medical intervention?

  • If it was alongside medical treatment, then that raises a major confound for the study.
  • Yet it would clearly be unacceptable to deny sufferers indicated medical treatment in order to test an educational intervention that is in effect a form of exorcism.

Again, it may be there are simple and adequate responses to these questions (although here I really cannot see what they might be), but unfortunately it seems the journal referees and editor did not think to ask for them.  

Findings


Results tables presented in Awang, 2022 (p.45) [Published with a creative commons licence allowing reproduction]: "Based on the findings stated in Table I show that serial respondents experienced a decline in academic achievement while they face the problem of hallucinations. In contrast to Table II which shows an improvement in students' academic achievement after hallucinatory disorders can be resolved." If we assume that columns in the second table have been mislabelled, then it seems the school performance of these four students suffered while they were suffering hallucinations, but improved once they recovered. From this, we can infer…?

The key findings presented concern academic performance at school. Core results are presented in Tables I and II. Unfortunately, these tables are not consistent, as they report contradictory results for the academic performance of students before and during periods when they had hallucinations.

They can be made consistent if the reader assumes that two of the columns in Table II are mislabelled: that the column labelled 'before disruption' actually reports performance 'during disruption', and that the column actually labelled 'during disruption' is something else. For the results to tell a coherent story, and agree with the author's interpretation, this 'something else' presumably should be 'after disruption'.

This is a very unfortunate error – and moreover one that is obvious to any careful reader. (So, why was it not obvious to the referees and editor?)

As well as looking at these overall scores, other assessment data is presented separately for each of respondent 1 – respondent 4. These sections comprise presentations of information about grades and class positions, mixed with claims about the effects of the intervention. These claims are not based on any evidence, and in many cases are conclusions about 'respondents' in general although they are placed in sections considering the academic assessment data of individual respondents. So, there are a number of problems with these claims:

  • they are of the nature of conclusions, but appear in the section presenting the findings;
  • they are about the specific effects of the intervention that the author assumes has influenced academic performance, not the data analysed in these sections;
  • they are completely unsubstantiated as no data or analysis is offered to support them;
  • often they make claims about 'respondents' in general, although as part of the consideration of data from individual learners.

Despite this, the paper passed peer-review and editorial scrutiny.

Rhetorical research?

This paper seems to be an example of a kind of 'rhetorical research', where a researcher is so convinced about their pre-existing theoretical commitments that they simply assume they have demonstrated them. Here the assumptions seem to be:

  1. Recovering from suffering hallucinations will increase student performance
  2. Hallucinations are caused by jinn and devils
  3. A spiritual intervention will expel jinn and devils
  4. So, a spiritual intervention will cure hallucinations
  5. So, a spiritual intervention will increase student performance

The researcher provided a spiritual intervention, and the student performance increased, so it is assumed that the scheme is demonstrated. The data presented is certainly consistent with this scheme, but does not in itself support it. Awang provides evidence that student performance improved in four individuals after they had received the intervention – but there is no evidence offered to demonstrate the assumed mechanism.

A gardener might think that complimenting seedlings will cause them to grow. Perhaps she praises her seedlings every day, and they do indeed grow. Are we persuaded about the efficacy of her method, or might we suspect another cause at work? Would the peer-reviewers and editor of the European Journal of Education and Pedagogy be persuaded this demonstrated that compliments cause plant growth? On the evidence of this paper, perhaps they would.

This is what Awang tells readers about the analysis undertaken:

"Each student respondent involved in this study [sic, presumably not, rather the researcher] will use the analysis of the respondent's performance to determine the effect of hallucination disorders on student achievement in secondary school is accurate.

The elements compared in this analysis are as follows: a) difference in mean percentage of achievement by subject, b) difference in grade achievement by subject and c) difference in the grade of overall student achievement. All academic results of the respondents will be analyzed as well as get the mean of the difference between the performance before, during, and after the respondents experience hallucinations.

These results will be used as research material to determine the accuracy of the use of the Tazkiyatun Nafs Module in solving the problem of hallucinations in school and can improve student achievement in academic school."

Awang, 2022, p.45

There is clearly a large jump between the analysis outlined in the second paragraph here and testing the study hypotheses as set out in the final paragraph. But the author does not seem to notice this (and, more worryingly, nor do the journal's reviewers and editor).

So, interleaved into the account of findings discussing "mean percentage of achievement by subject … difference in grade achievement by subject … difference in the grade of overall student achievement" are totally unsupported claims. Here is an example for Respondent 1:

"Based on the findings of the respondent's achievement in the grade for Respondent 1 while facing the problem of hallucinations shows that there is not much decrease or deterioration of the respondent's grade. There were only 4 subjects who experienced a decline in grade between before and during hallucination disorder. The subjects that experienced decline were English, Geography, CBC, and Civics. Yet there is one subject that shows a very critical grade change the Civics subject. The decline occurred from grade A to grade E. This shows that Civics education needs to be given serious attention in overcoming this problem of decline. Subjects experiencing this grade drop were subjects involving emotion, language, as well as psychomotor fitness. In the context of psychology, unstable emotional development leads to a decline in the psychomotor and emotional development of respondents.

After the use of the Tazkiyatun Nafs module in overcoming this problem, hallucinatory disorders can be overcome. This situation indicates the development of the respondents during and after experiencing hallucinations after practicing the Tazkiyatun Nafs module. The process that takes place in the Tzkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better. From the above findings there were 5 subjects who experienced excellent improvement in grades. The increase occurred in English, Malay, Geography, and Civics subjects. The best improvement is in the subject of Civic education from grade E to grade B. The improvement in this language subject shows that the respondents' emotions have stabilized. This situation is very positive and needs to be continued for other subjects so that respondents continue to excel in academic achievement in school."

Awang, 2022, p.45 (emphasis added)

The material which I show here as underlined is interjected completely gratuitously. It does not logically fit in the sequence. It is not part of the analysis of school performance. It is not based on any evidence presented in this section. Indeed, nor is it based on any evidence presented anywhere else in the paper!

This pattern is repeated in discussing other aspects of respondents' school performance. Although there is mention of other factors which seem especially pertinent to the dip in school grades ("this was due to the absence of the respondents to school during the day the test was conducted", p.46; "it was an increase from before with no marks due to non-attendance at school", p.46), the discussion of grades is interspersed with (repetitive) claims about the effects of the intervention for which no evidence is offered.


§ Differences in Respondents' Grade Achievement by Subject

  • Respondent 1: "After the use of the Tazkiyatun Nafs module in overcoming this problem, hallucinatory disorders can be overcome. This situation indicates the development of the respondents during and after experiencing hallucinations after practicing the Tazkiyatun Nafs module. The process that takes place in the Tzkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better." (p.45)
  • Respondent 2: "After the use of the Tazkiyatun Nafs module as a soul purification module, showing the development of the respondents during and after experiencing hallucination disorders is very good. The process that takes place in the Tzkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better." (p.46)
  • Respondent 3: "The process that takes place in the Tazkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better" (p.46)
  • Respondent 4: "The process that takes place in the Tazkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better." (p.46)

§ Differences in Respondent Grades according to Overall Academic Achievement

  • Respondent 1: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module … In general, the use of Tazkiyatun Nafs module successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (pp.46-7)
  • Respondent 2: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module. … This excellence also shows that the respondents have recovered from hallucinations after practicing the methods found in the Tazkiayatun Nafs module that has been introduced. In general, the use of the Tazkiyatun Nafs module successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (p.47)
  • Respondent 3: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module … In general, the use of the Tazkiyatun Nafs module successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (p.47)
  • Respondent 4: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module … In general, the use of the Tazkiyatun Nafs module has successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (p.47)
Unsupported claims made within findings sections reporting analyses of individual students' academic grades. Note (a) how these statements, included in the analysis of school performance data from four separate participants (in a case study – a methodology that recognises and values diversity and individuality), are near-identical across the participants; and (b) that claims about 'respondents' (plural) are included in the reports of findings from individual students.

Awang summarises what he claims the analysis of 'differences in respondents' grade achievement by subject' shows:

"The use of the Tazkiyatun Nafs module in this study helped the students improve their respective achievement grades. Therefore, this soul purification module should be practiced by every student to help them in stabilizing their soul and emotions and stay away from all the disturbances of the subtle beings that lead to hallucinations"

Awang, 2022, p.46

And, on the next page, Awang summarises what he claims the analysis of 'differences in respondent grades according to overall academic achievement' shows:

"The use of the Tazkiyatun Nafs module in this study helped the students improve their respective overall academic achievement. Therefore, this soul purification module should be practiced by every student to help them in stabilizing the soul and emotions as well as to stay away from all the disturbances of the subtle beings that lead to hallucination disorder."

Awang, 2022, p.47

So, the analysis of grades is said to demonstrate the value of the intervention, and indeed Awang considers this is reason to extend the intervention beyond the four participants, not just to others suffering hallucinations, but to "every student". The peer review process seems not to have raised queries about

  • the unsupported claims,
  • the confusion of recommendations with findings (it is normal to keep to results in a findings section), nor
  • the unwarranted generalisation from four hallucination sufferers to all students, whether healthy or not.

Interpreting the results

There seem to be two stories that can be told about the results:

When the four students suffered hallucinations, this led to a deterioration in their school performance. Later, once they had recovered from the episodes of hallucinations, their school performance improved.  

Narrative 1

Now narrative 1 relies on a very substantial implied assumption – which is that the numbers presented as school performance are comparable over time. So, a control would be useful: such as what happened to the performance scores of other students in the same classes over the same time period. It seems likely they would not have shown the same dip – unless the dip was related to something other than hallucinations – such as the well-recognised dip after long school holidays, or some cultural distraction (a major sports tournament; fasting during Ramadan; political unrest; a pandemic…). Without such a control the evidence is suggestive (after all, being ill, and missing school as a result, is likely to lead to a dip in school performance, so the findings are not surprising), but inconclusive.

Intriguingly, the author tells readers that "student  achievement  statistics  from  the  beginning  of  the year to the middle of the current [sic, published in 2022] year in secondary schools in Northern Peninsular Malaysia that have been surveyed by researchers show a decline (Sabri, 2015 [sic])" (p.42), but this is not considered in relation to the findings of the study.

When the four students suffered hallucinations, this led to a deterioration in their school performance. Later, as a result of undergoing the soul purification module, their school performance improved.  

Narrative 2

Clearly narrative 2 suffers from the same limitation as narrative 1. However, it also demands an extra step in making an inference. I could re-write this narrative:

When the four students suffered hallucinations, this led to a deterioration in their school performance. Later, once they had recovered from the episodes of hallucinations, their school performance improved. 
AND
the recovery was due to engagement with the soul purification module.

Narrative 2'.

That is, even if we accept narrative 1 as likely, to accept narrative 2 we would also need to be convinced that:

  • a) sufferers from medical conditions leading to hallucinations do not suffer periodic attacks with periods of remission in between; or
  • b) episodes of hallucinations cannot be due to one-off events (emotional trauma, T.I.A. {transient ischaemic attack or mini-strokes},…) that resolve naturally in time; or
  • c) sufferers from medical conditions leading to hallucinations do not find they resolve due to maturation; or
  • d) the four participants in this study did not undertake any change in lifestyle (getting more sleep, ceasing eating strange fungi found in the woods) unrelated to the intervention that might have influenced the onset of hallucinations; or
  • e) the four participants in this study did not receive any medical treatment independent of the intervention (e.g., prescribed medication to treat migraine episodes) that might have influenced the onset of hallucinations

Despite this study being supposedly a case study (where the expectation is there should be 'thick description' of the case and its context), there is no information to help us exclude such options. We do not know the medical diagnoses of the conditions causing the participants' hallucinations, or anything about their lives or any medical treatment that may have been administered. Without such information, the analysis that is provided is useless for answering the research question.

In effect, regardless of all the other issues raised, the key problem is that the research design is simply inadequate to test the research question. But it seems the referees and editor did not notice this shortcoming.

Alleged implications of the research

After presenting his results Awang draws various implications, and makes a number of claims about what had been found in the study:

  • "After the students went through the treatment session by using the Tazkiayatun Nafs module to treat hallucinations, it showed a positive effect on the student respondents. All this was certified by the expert, the student's parents as well as the counselor's teacher." (p.48)
  • "Based on these findings, shows that hallucinations are very disturbing to humans and the appropriate method for now to solve this problem is to use the Tazkiyatun Nafs Module." (p.48)
  • "…the use of the Tazkiyatun Nafs module while the respondent is suffering from hallucination disorder is very appropriate…is very helpful to the respondents in restoring their minds and psyche to be calmer and healthier. These changes allow students to focus on their studies as well as allow them to improve their academic performance better." (p.48)
  • "The use of the Tazkiyatun Nafs Module in this study has led to very positive changes there are attitudes and traits of students who face hallucinations before. All the negative traits like irritability, loneliness, depression, etc. can be overcome completely." (p.49)
  • "The personality development of students is getting better and perfect with the implementation of the Tazkiaytun Nafs module in their lives." (p.49)
  • "Results indicate that students who suffer from this hallucination disorder are in a state of high depression, inactivity, fatigue, weakness and pain, and insufficient sleep." (p.49)
  • "According to the findings of this study, the history of this hallucination disorder started in primary school and when a person is in adolescence, then this disorder becomes stronger and can cause various diseases and have various effects on a person who is disturbed." (p.50)

Given the range of interview data that Awang claims to have collected and analysed, at least some of the claims here are possibly supported by the data. However, none of this data and analysis is available to the reader. 2 These claims are not supported by any evidence presented in the paper. Yet the peer reviewers and the editor who read the manuscript seem to feel it is entirely acceptable to publish such claims in a research paper without presenting any evidence whatsoever.

Summing up

In summary: as far as these four students were concerned (but not perhaps the fifth participant?), there did seem to be a relationship between periods of experiencing hallucinations and lower school performance (perhaps explained by such factors as "absenteeism to school during the day the test was conducted", p.46):

"the performance shown by students who face chronic hallucinations is also declining and declining. This is all due to the actions of students leaving the teacher's learning and teaching sessions as well as not attending school when this hallucinatory disorder strikes. This illness or disorder comes to the student suddenly and periodically. Each time this hallucination disease strikes the student causes the student to have to take school holidays for a few days due to pain or depression"

Awang, 2022, p.42

However,

  • these four students do not represent any wider population;
  • there is no information about the specific nature, frequency, intensity, etcetera, of the hallucinations or diagnoses in these individuals;
  • there was no statistical test of significance of changes; and
  • there was no control condition to see if performance dips were experienced by others not experiencing hallucinations at the same time.

Once they had recovered from the hallucinations (and it is not clear on what basis that judgement was made) their scores improved.

The author would like us to believe that the relief from the hallucinations was due to the intervention, but this seems to be (quite literally) an act of faith 3 as no actual research evidence is offered to show that the soul purification module actually had any effect. It is of course possible the module did have an effect (whether for the conjectured or other reasons – such as simply offering troubled children some extra study time in a calm and safe environment and special attention – or because of an expectancy effect if the students were told by trusted authority figures that the intervention would lead to the purification of their hearts and the healing of their hallucinatory disorder) but the study, as reported, offers no strong grounds to assume it did have such an effect.

An irresponsible journal

As hallucinations are often symptoms of organic disease affecting blood supply to the brain, there is a major question of whether treating the condition by religious instruction is ethically sound. For example, hallucinations may indicate a tumour growing in the brain. Even if the module were only ever offered as a complement to proper medical attention, a reader might still suspect that any improvement in the condition (and consequent increased engagement in academic work) was entirely unrelated to the module being evaluated.

Indeed, a published research study that claims that soul purification is a suitable treatment for medical conditions presenting with hallucinations is potentially dangerous as it could lead to serious organic disease going untreated. If Awang's recommendations were widely taken up in Malaysia such that students with serious organic conditions were only treated for their hallucinations by soul purification rather than with medication or by surgery it would likely lead to preventable deaths. For a research journal to publish a paper with such a conclusion, where any qualified reviewer or editor could easily see the conclusion is not warranted, is irresponsible.

As the journal website points out,

"The process of reviewing is considered critical to establishing a reliable body of research and knowledge. The review process aims to make authors meet the standards of their discipline, and of science in general."

https://www.ej-edu.org/index.php/ejedu/about

So, why did the European Journal of Education and Pedagogy not subject this submission to meaningful review to help the author of this study meet the standards of the discipline, and of science in general?


Work cited:

Notes:

1 In mature fields in the natural sciences there are recognised traditions ('paradigms', 'disciplinary matrices') in any active field at any time. In general (and of course, there will be exceptions):

  • at any historical time, there is a common theoretical perspective underpinning work in a research programme, aligned with specific ontological and epistemological commitments;
  • at any historical time, there is a strong alignment between the active theories in a research programme and the acceptable instrumentation, methodology and analytical conventions.

Put more succinctly: in a mature research field there is generally broad agreement on how a phenomenon is to be understood, how to go about investigating it, and how to interpret data as research evidence.

This is generally not the case in educational research – which is at least in part due to the complexity, and so the multi-layered nature, of the phenomena studied (Taber, 2014a): phenomena such as classroom teaching. So, in reviewing educational papers, it is sometimes necessary to find different experts to look at the theoretical and the methodological aspects of the same submission.


2 The paper is very strange in that the introductory sections and the conclusions and implications sections have a very broad scope, but the actual research results are restricted to a very limited focus: analysis of school test scores and grades.

It is as if (and it could well be that) a dissertation with a number of evidential strands has been reduced to a paper drawing upon only one aspect of the research evidence, but with material from other sections of the dissertation left unchanged from the original broader study.


3 Readers are told that

"All these acts depend on the sincerity of the medical researcher or fortune-teller seeking the help of Allah S.W.T to ensure that these methods and means are successful. All success is obtained by the permission of Allah alone"

Awang, 2022, p.43


What causes the clouds in your coffee?

Of liars, paradoxes, and vanity


Keith S. Taber


the song works wonderfully as a kind of paradox as in a sense
the song can only be about someone whom it is not about

Are your dreams no more than clouds in your coffee?
(Image by kyuubicreeper from Pixabay)

In a popular song of the early 1970s, singer-songwriter Carly Simon reflected on having had some "clouds in my coffee", which is an intriguing reference. If this was meant as an objective observation, then it seems to invite some interpretation. What kinds of things are clouds and coffee such that clouds can be observed in coffee?

Solutions, suspensions and supersaturation

In everyday life clouds are usually observed in the sky, and are due to myriad tiny water droplets. The air always naturally contains some water vapour, and the amount depends on the conditions – air just over a hot sea is likely to have a high 'moisture' content due to the rate of evaporation. If very moist air cools then it may become supersaturated with water vapour, in which case any suitable 'nuclei' will facilitate condensation. (These nuclei may be dust particles for example – but ions can also act as condensation nuclei.)

Today everyone is taught at school about the water cycle which is so essential for life on this planet, by which water is recycled through repeated evaporation/transpiration and condensation and precipitation. (Sadly, in Isaac Newton's day the school curriculum was mostly limited to learning maths and Latin, which was unfortunate – as if he had been taught about the water cycle he might not have felt the need to posit an extraterrestrial explanation for how the seas do not dry up with all that evaporation.)


Newton had a suggestion for how the earth's seas did not dry up
(Images by 1980supra and Gordon Johnson from Pixabay)

Clouds may occur on a smaller scale, such as in cloud chambers used to detect the traces left by alpha or beta radiation. Here, material soaked with a suitable volatile liquid, such as ethanol, is placed in a chamber so that the air becomes saturated, and then, where it cools, supersaturated. An alpha or beta source will emit fast moving particles that transfer momentum by colliding with molecules in the air, often ionising them. As the alpha or beta particle moves through the chamber it leaves behind it a 'trace' in terms of a trail of ions – in a cloud chamber the alcohol or other supersaturated vapour condenses around these ions giving a visible trail – somewhat like the vapour trails left by jets that are often still visible when the plane is too far away to be seen.


The atmosphere – nature's own cloud chamber


So, what is coffee? I think that depends on how you make it. Assuming you take your coffee black, then if you serve it in a glass, and hold the glass up to the light, it may seem to be transparent. That is, it has a brown colour, but you can see through it to what is behind. If so, that is a solution with various substances in the coffee dissolved in the solvent (hot water). Perhaps you cannot see through your coffee, and if you try shining a torch or laser pen at it you see the beam lighting up its route through the coffee? If so, as well as dissolved material, it also contains suspended particles that are too large to be in solution. You can test this – as long as you do not mind not drinking your coffee. Given enough time, if the glass is undisturbed, the suspended particles will form a sediment at the bottom, and you will be left with a clear solution above. (But your coffee will now be cold.)

Coffee is made in various ways, and whether your coffee is a solution or has both dissolved solute and suspended particles will depend on how finely the coffee solids are filtered in preparing the drink. If you take milk or something similar in your coffee, then you definitely have some suspended particles of fat or oil in there.

So, how are we to understand how clouds can form in coffee? If one had hot coffee which was purely a solution (finely filtered), and was very strong coffee, then perhaps the solution would be saturated with some of the solutes – holding the most that could be dissolved at that temperature. If the coffee cooled, then perhaps it would become a supersaturated solution, and, if suitable nuclei were present (so perhaps not too fine a filter, allowing a few suspended particles?), 'clouds' of precipitating coffee solids would be seen in the solution?

Song-writing as representing a poetic truth

Now, dear reader, you are probably suspecting that I am being an over-literal scientist here, as clearly Carly Simon was writing a song and not making laboratory observations. Surely, it is obvious, that the clouds in her coffee were metaphorical clouds? She is representing how she felt – as she mused over her coffee – she was sad or melancholy or at least reflective.

When released as a single, the record, 'You're so vain', was a big hit in many parts of the world, no doubt in part because it was a very catchy song, but perhaps also in part because of ongoing speculation about whom it was that Ms. Simon was accusing of vanity. Over the years she has suggested the song is about a composite of three men, and she has acknowledged one of them (the actor Warren Beatty) but speculation has continued. Perhaps if it was released today, a song that includes the line "You had me several years ago when I was still quite naive" might be viewed as reporting something darker than just a failed love affair? But what especially appeals to me about the song is its sense of paradox.


The album including the hit song 'You're so vain' proclaimed 'No secrets' but the precise target(s) of the song have remained a matter of speculation


The liar paradox

There is a famous paradox which was said to have bemused and puzzled some ancients. Imagine meeting someone who tells you:

All Cretans are liars.
I am a Cretan.

I mean no disrespect to the people of Crete, but this is how I understand the paradox was originally framed. We could substitute Venusians or politicians or whatever. A modern version could be

All members of the Bullingdon Club are liars.
I am a member of the Bullingdon Club

This is supposed to present a paradox. Either the first statement is true, in which case the second is not. Or the second is true, in which case the first is not.

If (and see below) we accept this is a paradox then it has a simple solution. As well as saying things they think are true, and things they think are false, people are also capable of saying things that do not make sense – even to themselves! Not all texts can be considered to have truth value. There is then no paradox, just a lack of consistency!

After all, we can say all kinds of things that do not relate to possible situations

  • Gas sample A contains 2g of hydrogen at a lower temperature and higher pressure than gas sample B
  • Gas sample B contains 2g of hydrogen that occupies a smaller volume than gas sample A
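For readers who like to check such things, a quick calculation with the ideal gas law (pV = nRT) shows why the two statements cannot both hold: lower temperature and higher pressure force a smaller volume. The figures below are hypothetical, chosen by me only so that the first statement is satisfied; this sketch is mine, not part of the original post.

```python
# Ideal gas law check: two samples of 2 g of hydrogen, with sample A at
# lower temperature and higher pressure than sample B (statement 1).
R = 8.314          # molar gas constant, J mol^-1 K^-1
n = 2.0 / 2.016    # moles in 2 g of H2 (molar mass ~ 2.016 g/mol)

T_A, p_A = 280.0, 120_000.0   # sample A: lower T, higher p (hypothetical values)
T_B, p_B = 300.0, 100_000.0   # sample B: higher T, lower p

V_A = n * R * T_A / p_A       # volume from pV = nRT
V_B = n * R * T_B / p_B

# Since T_A < T_B and p_A > p_B, we always have T_A/p_A < T_B/p_B,
# so V_A < V_B whatever numbers we pick.
print(V_A < V_B)   # True: statement 2 ("B occupies a smaller volume") cannot hold
```

Whatever particular values are chosen, statement 1 guarantees V_A < V_B, contradicting statement 2 – so the pair of statements describes no possible situation.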

Oh, how much easier (if less interesting) life would be if there was a law of nature that meant we could not say or write things that were not true or not physically possible! Scholars would simply need to sit down and start writing. Anything they were able to produce would be true and we would not need the expense of CERN and all those other laboratories!

Applying hermeneutics

Now, even though what people say or write need not make good sense, one should be careful dismissing an apparently non-sensical statement too easily. I know from working with science students who may have various alternative conceptions and alternative conceptual frameworks that often they say things that do not seem to make sense. Certainly, sometimes, this may be because they are confused or are guessing an answer to a teacher's question without fully thinking it through.

But sometimes what they say makes good sense from their perspective. We only find this out by engaging them in conversation when it may transpire from the wider context of their talk that they are using a term in a somewhat non-canonical way, or have a different way of dividing up the world, or they limit certain principles to a too restricted set of contexts (or apply principles beyond their valid range of application), et cetera. That is, we apply a hermeneutic approach to seeking sense by seeking to understand a statement in terms of the wider 'text'.

Whilst, from a canonical scientific perspective, the student has still got some of the science wrong, a teacher is much more likely to shift their thinking towards the target knowledge in the curriculum if she recognises that it has coherence for the student, and if she understands and engages with the student's way of thinking (for example, exploring its limitations, or pointing out that it has absurd or clearly incorrect implications), than if she simply dismisses it as 'wrong'. This, of course, is the basis of the constructivist approach to science teaching.

Read about constructivist pedagogy

Liars, and effective liars

However, even if we take the Cretan's couplet as a paradox, it is not very convincing. A liar is someone who tells lies – not someone who only ever tells lies. A 'good liar' (if that is not an oxymoron – I mean someone good at lying), that is someone able to use lying to their advantage, presumably does this by being truthful enough of the time that people do not suspect when they are lying. Someone who announced themselves on the telephone with…

"Hi, I'm John. I am a fish. I eat oak trees for breakfast. I am four thousands years old. I used to be Napoleon Bonaparte. I can hold my breath for months at a time. I levitate when I sleep. I am England's greatest goalscorer, even though, as a fish, I do not have any feet. I am phoning from your bank because we are concerned about some suspicious activity on your account, so would like to just check with you on some recent transactions to make sure you authorised them. First of all, because we take customer privacy and security very seriously, I need to be sure who I am talking to, so would you mind giving me your full name, postcode, account number and password."

A very unconvincing scammer

…would be unlikely to be believed. Much better to start with something that is clearly true if you want to sneak in a lie without it being noticed. (The recent demise of a UK Prime Minister perhaps offers an example of how, when you already have a reputation for not telling the truth, people are more likely to suspect, scrutinise and check your claims, and, so, detect dishonest statements.)

Reductio ad absurdum

So, an improvement on the Cretan liar paradox is the card which has a statement on each side:

  • the statement on the other side of this card is true
  • the statement on the other side of this card is a lie

This removes the need to read 'liar' as someone who only ever tells lies.

If the statement on the first side is correct, then the statement on the other side is true, which means the statement on the first side was a lie, so not correct.

But if the statement on the first side is indeed a lie (as we are informed by the statement on the other side) then the statement on the second side is not true, which means the statement on the first side was not a lie, and is true.

Either way, whichever statement we begin by accepting, we find it is contradicted later. This reflects the method of 'reductio ad absurdum' – a technique for refuting a premise by showing that it leads to absurd or contradictory conclusions.
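The card's self-contradiction can even be checked mechanically. As a minimal sketch (the encoding of the two statements as Boolean conditions is my own), a few lines of Python can enumerate every possible assignment of truth values and confirm that none is consistent:

```python
# Statement A: "the statement on the other side (B) is true"
# Statement B: "the statement on the other side (A) is a lie"
# An assignment is consistent only if each statement's truth value
# matches what that statement asserts.
consistent = [
    (a, b)
    for a in (True, False)
    for b in (True, False)
    if a == b          # A is true exactly when B is true
    and b == (not a)   # B is true exactly when A is false
]
print(consistent)  # -> [] : no consistent assignment exists
```

The empty result is the reductio in miniature: every possible starting assumption leads to contradiction.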

Imagine we wanted to demonstrate that atoms can be divided. Let us posit that atoms are indivisible. This would lead us to conclude there are not discrete subatomic particles. Yet electrons, alpha particles, neutrons and protons have all been shown to be subatomic particles. Therefore our premise (atoms are indivisible) must be false.

An even simpler version of the liar paradox is the statement:

  • this statement is a lie

The statement claims to be a lie, but if it is a lie that means the truth is contrary to what it claims. So (as it claims to be a lie) it is true. But if it is true, then the statement must be correct. So if it is correct, as it claims to be a lie, it is a lie. So, if true it is a lie. But then if it is a lie…

Clearly we have self-contradiction. Again, there is no real mystery here – it is simply a clever statement that is neither true nor false, as it lacks coherent sense. What is a mystery is who 'is so vain'?

Do the vain think themselves vain?

The hook of the song is the chorus:

You're so vain
You probably think this song is about you
You're so vain (you're so vain)
I bet you think this song is about you
Don't you don't you?

This seems a nice reflection of the Cretan paradox. Carly's ex-lover would have to be very vain to think she would be so obsessed with him that she would write a song about him. So, if he thinks the song is about him, he is indeed 'so vain'.

Except of course, the song may actually be about him. If an ex-lover whom the song is about thinks it is about him then is that vanity? Surely, not. It is not vanity for someone to acknowledge, say, being a Nobel prize winner, if she is indeed a Nobel laureate. Vanity is thinking you should have won the Nobel that was given to someone else!

The song contains some specific biographic details, such as

Well I hear you went up to Saratoga
And your horse naturally won
Then you flew your Lear jet up to Nova Scotia
To see the total eclipse of the sun

So, someone hearing the song who had been a lover of Ms. Simon several years earlier, and had been up to Saratoga to watch a horse race where his own horse had won, and had flown in his own Lear jet to Nova Scotia to see the total eclipse, surely would have good grounds for feeling this could well be him.

In particular, we might think, if they recognised themselves as being vain! But this is what makes the song delicious lyrically, as surely a vain person does not recognise themselves as vain?

So, if someone thinks the song is about them, when it is not, they are vain enough to think an ex-lover would write a song about them. BUT that is not someone the song is actually about, so they are not the person being accused of being 'so vain'.

If the person who is being written about does not think it is them, then they are presumably not so vain. If they do recognise themselves, then they are justified in doing so – so that is not really evidence of vanity, either!

So, the song works wonderfully as a kind of paradox, as in a sense the song can only be about someone whom it is not about! Did Carly Simon realise that when she wrote the song? I assume so. Does this contribute to its continuing popularity? Perhaps. If you, dear reader, know this song, do you too appreciate this aspect of it? Or perhaps most people just sing along with the catchy tune and let the lyrics flow? They are poetry, after all, not formal knowledge claims.

Explaining the clouds in the coffee

So, were the clouds in the coffee just meant as a metaphor for how Carly was feeling about the plans she had had during her time with her ex-lover?

Well you said that we made such a pretty pair and that you would never leave
But you gave away the things you loved
And one of them was me
I had some dreams they were clouds in my coffee clouds in my coffee and
You're so vain
You probably think this song is about you

On a number of websites Ms. Simon is quoted as explaining (in 2001) that

"'Clouds in my coffee' are the confusing aspects of life and love. That which you can't see through, and yet seems alluring…until. Like a mirage that turns into a dry patch. Perhaps there is something in the bottom of the coffee cup that you could read if you could (like tea leaves or coffee grinds)"

Carly Simon quoted on a range of websites

However, Carly has also explained she took the line from a comment her friend and pianist Billy Mernit made when they were served coffee on a plane – "As I got my coffee, there were clouds outside the window of the airplane and you could see the reflection in the cup of coffee. Billy said to me, 'Look at the clouds in your coffee.  That's like a Truffaut shot!'."

Mernit recalls on his blog that he had actually compared the image to a scene from a Godard film: "what I had talked about was a Godard shot, namely the overhead close-up of a coffee cup from [the film] 2 or 3 Things I Know About Her".


A still from the Jean-Luc Godard film '2 or 3 Things I Know About Her' – clouds? I see galaxies!

Clearly Carly [sic] may have been in a reflective mood, but the clouds that appeared to be in her coffee were due to a different kind of reflection. So, it seems there was a sound physical interpretation, after all.

POEsing assessment questions…

…but not fattening the cow


Keith S. Taber


A well-known Palestinian proverb reminds us that we do not fatten the cow simply by repeatedly weighing it. But, sadly, teachers and others working in education commonly get so fixated on assessment that it seems to become an end in itself.


Images by Clker-Free-Vector-Images, OpenClipart-Vectors and Deedster from Pixabay

A research study using P-O-E

I was reading a report of a study that adopted the predict-observe-explain, P-O-E, technique as a means to elicit "high school students' conceptions about acids and bases" (Kala, Yaman & Ayas, 2013, p.555). As the name suggests, P-O-E asks learners to make a prediction before observing some phenomenon, and then to explain their observations (something that can be especially valuable when the predictions are based on strongly held intuitions which are contrary to what actually happens).

Read about Predict-Observe-Explain


The article on the publisher website

Kala and colleagues begin the introduction to their paper by stating that

"In any teaching or learning approach enlightened by constructivism, it is important to infer the students' ideas of what is already known"

Kala, Yaman & Ayas, 2013, p.555
Constructivism?

Constructivism is a perspective on learning that is informed by research into how people learn and a great many studies into student thinking and learning in science. A key point is how a learner's current knowledge and understanding influences how they make sense of teaching and what they go on to learn. Research shows it is very common for students to have 'alternative conceptions' of science topics, and often these conceptions either survive teaching or distort how it is understood.

The key point is that teachers who teach the science without regard to student thinking will often find that students retain their alternative ways of thinking, so constructivist teaching is teaching that takes into account and responds to the ideas about science topics that students bring to class.

Read about constructivism

Read about constructivist pedagogy

Assessment: summative, formative and diagnostic

If teachers are to take into account, engage with, and try to reshape, learners' ideas about science topics, then they need to know what those ideas are. Now, there is a vast literature reporting alternative conceptions in a wide range of science topics, spread across thousands of research reports – but no teacher could possibly find time to study them all. There are books which discuss many examples and highlight some of the most common alternative conceptions (including one of my own, Taber, 2014).



However, in any class studying some particular topic there will nearly always be a spread of different alternative conceptions across the students – including some so idiosyncratic that they have never been reported in any literature. So, although reading about common misconceptions is certainly useful to prime teachers for what to look out for, teachers need to undertake diagnostic assessment to find out about the thinking of their own particular students.

There are many resources available to support teachers in diagnostic assessment, and some activities (such as using concept cartoons) that are especially useful at revealing student thinking.

Read about diagnostic assessment

Diagnostic assessment, assessment to inform teaching, is carried out at the start of a topic, before the teaching, to allow teachers to judge the learners' starting points and any alternative conceptions ('misconceptions') they may have. It can therefore be considered aligned to formative assessment ('assessment for learning') which is carried out as part of the learning process, rather than summative assessment (assessment of learning) which is used after studying to check, score, grade and certify learning.

P-O-E as a learning activity…

P-O-E can best support learning in topics where it is known learners tend to have strongly held, but unhelpful, intuitions. The predict stage elicits students' expectations – which, when contrary to the scientific account, can be confounded by the observe step. The 'cognitive conflict' generated by seeing something unexpected (made more salient by having been asked to make a formal prediction) is thought to help students concentrate on the actual phenomenon, and to provide 'epistemic relevance' (Taber, 2015).

Epistemic relevance refers to the idea that students are learning about things they are actually curious about, whereas for many students following a conventional science course must be experienced as being presented with the answers to a seemingly never-ending series of questions that had never occurred to them in the first place.

Read about the Predict-Observe-Explain technique

Students are asked to provide an explanation for what they have observed which requires deeper engagement than just recording an observation. Developing explanations is a core scientific practice (and one which is needed before another core scientific practice – testing explanations – is possible).

Read about teaching about scientific explanations

To be most effective, P-O-E is carried out in small groups, as this encourages the sharing, challenging and justifying of ideas: the kind of dialogic activity thought to be powerful in supporting learners in developing their thinking, as well as practising their skills in scientific argumentation. As part of dialogic teaching, such an open forum for learners' ideas is not an end in itself, but a preparatory stage for the teacher to marshal the different contributions and develop a convincing argument for how the best account of the phenomenon is the scientific account reflected in the curriculum.

Constructivist teaching is informed by learners' ideas, and therefore relies on their elicitation, but that elicitation is never the end in itself but is a precursor to a customised presentation of the canonical account.

Read about dialogic teaching and learning

…and as a diagnostic activity

Group work also has another function – if the activity is intended to support diagnostic assessment, then the teacher can move around the room listening in to the various discussions and so collecting valuable information on what students think and understand. When assessment is intended to inform teaching it does not need to be about students completing tests and teachers marking them – a key principle of formative assessment is that it occurs as a natural part of the teaching process. It can be based on productive learning activities, and does not need marks or grades – indeed as the point is to help students move on in their thinking, any kind of formal grading whilst learning is in progress would be inappropriate as well as a misuse of teacher time.

Probing students' understandings about acid-base chemistry

The constructivist model of learning applies to us all: students, teachers, professors, researchers. Given what I have written above about P-O-E, about diagnostic assessment, and dialogic approaches to learning, I approached Kala and colleagues' paper with expectations about how they would have carried out their project.

These authors do report that they were able to diagnose aspects of student thinking about acids and bases, and found some learning difficulties and alternative conceptions,

"it was observed that eight of the 27 students had the idea that the "pH of strong acids is the lowest every time," while two of the 27 students had the idea that "strong acids have a high pH." Furthermore, four of the 27 students wrote the idea that the "substance is strong to the extent to which it is burning," while one of the 27 students mentioned the idea that "different acids which have equal concentration have equal pH."

Kala, Yaman & Ayas, 2013, pp.562-3

The key feature seems to be that, as reported in previous research, students conflate acid concentration and acid strength (when it is possible to have a high concentration solution of a weak acid or a very dilute solution of a strong acid).

Yet some aspects of this study seemed out of alignment with the use of P-O-E.

The best research style?

One feature was the adoption of a positivistic approach to the analysis,

Although there has been no reported analyzing procedure for the POE, in this study, a different [sic] analyzing approach was offered taking into account students' level of understanding… Data gathered from the written responses to the POE tasks were analyzed and divided into six groups. In this context, while students' prediction were divided into two categories as being correct or wrong, reasons for predictions were divided into three categories as being correct, partially correct, or wrong.

Kala, Yaman & Ayas, 2013, p.560


  • Group 1: prediction correct; reason correct
  • Group 2: prediction correct; reason partially correct
  • Group 3: prediction correct; reason wrong
  • Group 4: prediction wrong; reason correct
  • Group 5: prediction wrong; reason partially correct
  • Group 6: prediction wrong; reason wrong

"the written responses to the POE tasks were analyzed and divided into six groups"
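The six groups are simply the crossing of the two prediction categories with the three reason categories. As a minimal sketch (the function name and the group numbering are my own, purely illustrative – the paper does not specify an ordering), the coding scheme amounts to:

```python
# The two coding dimensions described by Kala and colleagues (2013).
PREDICTION = ("correct", "wrong")
REASON = ("correct", "partially correct", "wrong")

def code_response(prediction: str, reason: str) -> int:
    """Map a coded (prediction, reason) pair to one of the six groups."""
    return PREDICTION.index(prediction) * len(REASON) + REASON.index(reason) + 1

# e.g. a wrong prediction supported by a partially correct reason
print(code_response("wrong", "partially correct"))  # -> 5
```

The point, of course, is that any student response, however nuanced, ends up as one of just six labels – which is precisely the loss of information discussed below.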

There is nothing inherently wrong with doing this, but it aligns the research with an approach that seems at odds with the thinking behind constructivist studies that are intended to interpret a learner's thinking in its own terms, rather than simply compare it with some standard. (I have explored this issue in some detail in a comparison of two research studies into students' conceptions of forces – see Taber, 2013, pp.58-66.)

In terms of research methodology we might say it seems to be conceptualised within the 'wrong' paradigm for this kind of work. It seems positivist (assuming data can be unambiguously fitted into clear categories), nomothetic (tied to 'norms' and canonical answers) and confirmatory (testing thinking as matching model responses or not), rather than interpretivist (seeking to understand student thinking in its own terms rather than just classifying it as right or wrong), idiographic (acknowledging that every learner's thinking is to some extent unique to them) and discovery-oriented (exploring nuances and sophistication, rather than simply deciding if something is acceptable or not).

Read about paradigms in educational research

The approach used seemed more suitable for investigating something in the science laboratory than for the complex, interactive, contextualised, and ongoing life of classroom teaching. Kala and colleagues describe their methodology as case study,

"The present study used a case study because it enables the giving of permission to make a searching investigation of an event, a fact, a situation, and an individual or a group…"

Kala, Yaman & Ayas, 2013, p.558
A case study?

Case study is a naturalistic methodology (rather than involving an intervention, such as an experiment), and is idiographic, reflecting the value of studying the individual case. The case is one from among many instances of its kind (one lesson, one school, one examination paper, etc.), and is considered as a somewhat self-contained entity, yet one that is embedded in a context in which it is to some extent entangled (for example, what happens in a particular lesson is inevitably somewhat influenced by

  • the earlier sequence of lessons that teacher taught that class {the history of that teacher with that class},
  • the lessons the teacher and students came from immediately before this focal lesson,
  • the school in which it takes place,
  • the curriculum set out to be followed…)

Although a lesson can be understood as a bounded case (taking place in a particular room over a particular period of time involving a specified group of people) it cannot be isolated from the embedding context.

Read about case study methodology


Case study – study of one instance from among many


As case study is idiographic, and does not attempt to offer direct generalisation to other situations beyond that case, a case study should be reported with 'thick description' so a reader has a good mental image of the case (and can think about what makes it special – and so what makes it similar to, or different from, other instances the reader may be interested in). But that is lacking in Kala and colleagues' study, as they only tell readers,

"The sample in the present study consisted of 27 high school students who were enrolled in the science and mathematics track in an Anatolian high school in Trabzon, Turkey. The selected sample first studied the acid and base subject in the middle school (grades 6 – 8) in the eighth year. Later, the acid and base topic was studied in high school. The present study was implemented, based on the sample that completed the normal instruction on the acid and base topic."

Kala, Yaman & Ayas, 2013, pp.558-559

The reference to a sample can be understood as something of a 'reveal' of the authors' natural sympathies – 'sample' is the language of positivist studies that assume a suitably chosen sample reflects a wider population of interest. In case study, a single case is selected and described, rather than a population sampled. A reader is left rather to guess what population is being sampled here, and indeed precisely what the 'case' is.

Clearly, Kala and colleagues elicited some useful information that could inform teaching, but I sensed that their approach would not have made optimal use of a learning activity (P-O-E) that can give insight into the richness, and, sometimes, subtlety of different students' ideas.

Individual work

Even more surprising was the researchers' choice to ask students to work individually without group discussion.

"The treatment was carried out individually with the sample by using worksheets."

Kala, Yaman & Ayas, 2013, p.559

This is a choice which would surely have compromised the potential of the teaching approach to allow learners to explore, and reveal, their thinking?

I wondered why the researchers had made this choice. As they were undertaking research, perhaps they thought it was a better way to collect data that they could readily analyse – but that seems to be choosing limited data that can be easily characterised over the richer data that engagement in dialogue would surely reveal?

Assessment habits

All became clear near the end of the study when, in the final paragraph, the reader is told,

"In the present study, the data collection instruments were used as an assessment method because the study was done at the end of the instruction/ [sic] on the acid and base topics."

Kala, Yaman & Ayas, 2013, p.571

So, it appears that the P-O-E activity, which is an effective way of generating the kind of rich but complex data that helps a teacher hone their teaching for a particular group, was being adopted, instead, as a means of summative assessment. This is presumably why the analysis focused on the degree of match to the canonical science, rather than engaging in interpreting the different ways of thinking in the class. Again presumably, this is why the highly valuable group aspect of the approach was dropped in favour of individual working – summative assessment needs not only to grade against norms, but to do this on the basis of each individual's unaided work.

An activity which offers great potential for formative assessment (as it is a learning activity as well as a way of exploring student thinking); and that offers an authentic reflection of scientific practice (where ideas are presented, challenged, justified, and developed in response to criticism); and that is generally enjoyed by students because it is interactive and the predictions are 'low stakes' making for a fun learning session, was here re-purposed to be a means of assessing individual students once their study of a topic was completed.

Kala and colleagues certainly did identify some learning difficulties and alternative conceptions this way, and this allowed them to evaluate student learning. But I cannot help thinking an opportunity was lost here to explore how P-O-E can be used in a formative assessment mode to inform teaching:

  • diagnostic assessment as formative assessment can inform more effective teaching
  • diagnostic assessment as summative assessment only shows where teaching has failed

Yes, I agree that "in any teaching or learning approach enlightened by constructivism, it is important to infer the students' ideas of what is already known", but the point of that is to inform the teaching and so support student learning. What were Kala and colleagues going to do with their inferences about students' ideas when they used the technique as "an assessment method … at the end of the instruction"?

As the Palestinian adage goes, you do not fatten up the cow by weighing it, just as you do not facilitate learning simply by testing students. To mix my farmyard allusions, this seems to be a study of closing the barn door after the horse has already bolted.


Work cited

The mystery of the disappearing authors

Original image by batian lu from Pixabay 

Can an article be simultaneously out of scope, and limited in scope?

Keith S. Taber

Not only had two paragraphs from the abstract gone missing, along with the figures, but the journal article had also lost two-thirds of its authors.

I have been reading some papers in a journal that I believed, on the basis of its misleading title and website details, was an example of a poor-quality 'predatory journal'. That is, a journal which encourages submissions simply to be able to charge a publication fee (currently $1519, according to the website), without doing the proper job of editorial scrutiny. I wanted to test this initial evaluation by looking at the quality of some of the work published.

Although the journal is called the Journal of Chemistry: Education Research and Practice (not to be confused, even if the publishers would like it to be, with the well-established journal Chemistry Education Research and Practice) only a few of the papers published are actually education studies.

One of the articles that IS on an educational topic is called 'An overview of the first year Undergraduate Medical Students [sic] Feedback on the Point of Care Ultrasound Curriculum' (Mohialdin, 2018a), by Vian Mohialdin, an Associate Professor of Pathology and Molecular Medicine at McMaster University in Ontario.

A single-authored paper by Prof. Mohialdin

Review articles

Research journals tend to distinguish between different types of articles, and most commonly:

  • papers that report empirical studies,
  • articles which set out theoretical perspectives/positions, and
  • articles that offer reviews of the existing literature on a topic.

'An overview of the first year Undergraduate Medical Students Feedback on the Point of Care Ultrasound Curriculum' is classified as a review article.

A review article?

Typically, review articles cite a good deal of previous literature. Prof. Mohialdin cites a modest number of previous publications – just 10. Now one might suspect that perhaps the topic of point-of-care ultrasound in undergraduate medical education is a fairly specialist topic, and perhaps even a novel topic, in which case there may not be much literature to review. But a review of ultrasound in undergraduate medical education published a year earlier (Feilchenfeld, Dornan, Whitehead & Kuper, 2017) cited over a hundred works.

Actually a quick inspection of Mohialdin's paper reveals it is not a review article at all, as it reports a single empirical study. Either the journal has misclassified the article, or the author submitted it as a review article and the journal did not query this. To be fair, the journal website does note that classification into article types "is subjective to some degree". 1

So, is it a good study?

Not a full paper

Well, that is not easy to evaluate as the article is less than two pages in length whereas most research studies in education are much more substantial. Even the abstract of the article seems lacking (see the table below, left hand column). An abstract of a research paper is usually expected to very briefly report something about the research sample/population (who participated in the study?); the research design/methodology (is it an experiment, a survey…), and the results (what did the researchers find out?) The abstract of Prof. Mohialdin's paper misses all these points and so tells readers nothing about the research.

The main text also lacks some key information. The study is a type of research report that is sometimes called a 'practice paper' – the article reports some teaching innovation carried out by practitioners in their own teaching context. The text does give some details of what the practice was – but simply writing about practice is not usually considered sufficient for a research paper. At the least, there needs to be some evaluation of the innovation.

The research design for the evaluation is limited to two sentences under the section heading 'Conclusion/Result Result'. (Mohialdin, 2018a, p.1)

Here there has been some evaluation, but the report is very sketchy, and so might seem inadequate for a research report. Under a rather odd section heading, the reader is informed,

"A questionnaire was handed to the first year undergraduate medical students at the end of session four, to evaluate their hands on ultrasound session experience."

Mohialdin, 2018a, p.1

That one sentence comprises the account of data collection.

The questionnaire is not reproduced for readers. Nor is it described (how many questions, what kinds of questions?) Nor is its development reported. There is no indication of how many of the 150 students in the population completed the questionnaire; whether ethical procedures were followed 2; where the students completed it (for example, was this undertaken in a class setting where participants were being observed by the teaching staff, or did they take it away with them "at the end of session four" to complete in private?); or whether they were able to respond anonymously (rather than have their teachers be able to identify who made which responses).

Perhaps there are perfectly appropriate responses to these questions – but as the journal peer reviewers and editor do not seem to have asked, the reader is left in the dark.

Invisible analytical techniques

Similarly, details of the analysis undertaken are, again, sketchy. A reader is told:

"Answers were collected and data was [sic] analyzed into multiple graphs (as illustrated on this poster)."

Mohialdin, 2018a, p.1

Now that sounds promising, except either the author forgot to submit the graphs with the text, or the journal somehow managed to lose them in production. 3 (And as I've found out, even the most prestigious and well established publishers can lose work they have accepted for publication!)

So, readers are left with no idea what questions were asked, nor what responses were offered, that led to the graphs – that are not provided.

There were also comments – presumably [sic – it would be good to be told] in response to open-ended items on the questionnaire.

"The comments that we [sic, not I] got from this survey were mainly positive; here are a few of the constructive comments that we [sic] received:…

We [sic] also received some comments about recommendations and ways to improve the sessions (listed below):…"

Mohialdin, 2018a, 1-2.

A reader might ask who decided which comments should be counted as positive (e.g., was it a rater independent of the team who implemented the innovation?), and what does 'mainly' mean here (e.g., 90 of 100 responses? 6 of 11?).

So, in summary, there is no indication of what was asked, who exactly responded, or how the analysis was carried out. As the Journal of Chemistry: Education Research and Practice claims to be a peer reviewed journal one might expect reviewers to have recommended at least that such information (along with the missing graphs) should be included before publication might be considered.

There is also another matter that one would expect peer reviewers, and especially the editor, to have noticed.

Not in scope

Research journals usually have a scope – a range of topics they publish articles on. This is normally made clear in the information on journal websites. Despite its name, the Journal of Chemistry: Education Research and Practice does not restrict itself to chemistry education, but invites work on all aspects of the chemical sciences, and indeed most of its articles are not educational.

Outside the scope of the journal? (Original Image by Magnascan from Pixabay )

But 'An overview of the first year Undergraduate Medical Students Feedback on the Point of Care Ultrasound Curriculum' is not about chemistry education or chemistry in a wider sense. Ultrasound diagnostic technology falls under medical physics, not a branch of chemistry. And, more pointedly, teaching medical students to use ultrasound to diagnose medical conditions falls under medical education – as the reference to 'Medical Students' in the article title rather gives away. So, it is odd that this article was published where it was, as it should have been rejected from this particular journal as being out of scope.

Despite the claims of Journal of Chemistry: Education Research and Practice to be a peer reviewed journal (that means that all submissions are supposedly sent out to, and scrutinised and critiqued by, qualified experts on the topic who make recommendations about whether something is sufficient quality for publication, and, if so, whether changes should be made first – like perhaps including graphs that are referred to, but missing), the editor managed to decide the submission should be published just seven days after it was submitted for consideration.

The chemistry journal accepted the incomplete report of the medical education study, to be described as a review article, one week after submission.

The journal article as a truncated conference poster?

The reference to "multiple graphs (as illustrated on this poster)" (my emphasis) suggested that the article was actually the text (if not the figures) of a poster presented at a conference, and a quick search revealed that Mohialdin, Wainman and Shali had presented on 'An overview of the first year Undergraduate Medical Students Feedback on the Point of Care Ultrasound Curriculum' at an experimental biology (sic, not chemistry) conference.

A poster at a conference is not considered a formal publication, so there is nothing inherently wrong with publishing the same material in a journal – although posters often report quite provisional or relatively inconsequential work, so it is unusual for the text of a poster to be considered sufficiently rigorous and novel to justify appearing in a research journal in its original form. It is notable that, despite the study being described by Prof. Mohialdin as 'preliminary', the journal decided it was of publishable quality.

Although norms vary between fields, it is generally the case that a conference poster is seen as something quite different from a journal article. There is a limited amount of text and other material that can be included on a poster if it is to be readable. Conferences often have poster sessions where authors are invited to stand by their poster and engage with readers – so anyone interested can ask follow-up questions to supplement the often limited information given on the poster itself.

By contrast, a journal article has to stand on its own terms (as the authors cannot be expected to pop round for a conversation when you decide to read it). It is meant to present an argument for some new knowledge claim(s): an argument that depends on the details of the research conceptualisation, design, and data analysis. So what may seem perfectly adequate in a poster may well not be sufficient to satisfy journal peer review.

The abstract of the conference poster was published in a journal (Mohialdin, Wainman & Shali, 2018) and I have reproduced that abstract below, alongside the abstract of the journal paper.


Mohialdin, 2018a (Journal paper) and Mohialdin, Wainman & Shali, 2018 (Conference poster) – text common to both abstracts:

With the technological progress of different types of portable Ultrasound machines, there is a growing demand by all health care providers to perform bedside Ultrasonography, also known as Point of Care Ultrasound (POCUS). This technique is becoming extremely useful as part of the Clinical Skills/Anatomy teaching in the undergraduate Medical School Curriculum.

Teaching/training health care providers how to use these portable Ultrasound machines can complement their physical examination findings and help in a more accurate diagnosis, which leads to a faster and better improvement in patient outcomes. In addition, using portable Ultrasound machines can add more safety measurements to every therapeutic/diagnostic procedure when it is done under an Ultrasound guide. It is also considered as an extra tool in teaching Clinical Anatomy to Medical students. Using an Ultrasound is one of the different imaging modalities that health care providers depend on to reach their diagnosis, while also being the least invasive method.

We thought investing in training the undergraduate Medical students on the basic Ultrasound scanning skills as part of their first year curriculum will help build up the foundation for their future career.

Mohialdin, Wainman & Shali, 2018 (Conference poster) only:
The research we report in this manuscript is a preliminary qualitative study. And provides the template for future model for teaching a hand on Ultrasound for all health care providers in different learning institutions.
A questionnaire was handed to the first year medical students to evaluate their hands on ultrasound session experience. Answers were collected and data was [sic] analyzed into multiple graphs.
Abstracts from Mohialdin's paper, and from the co-authored work presented at the Experimental Biology 2018 Meeting, as published in the journal of the Federation of American Societies for Experimental Biology. (See note 4 for another version of the abstract.)

The abstract includes some very brief information about what the researchers did (which is strangely missing from the journal article's abstract). Journals usually put limits on the word count for abstracts, but surely the poster's abstract was not considered too long for the journal; rather, someone (the author? the editor?) seems simply to have dropped the final two paragraphs – that is, arguably the two paragraphs most relevant for readers?

The lost authors?

Not only had two paragraphs from the abstract gone missing, along with the figures, but the journal article had also lost two-thirds of its authors.

A poster with multiple authors

Now in the academic world authorship of research reports is not an arbitrary matter (Taber, 2018). An author is someone who has made a substantial intellectual contribution to the work (regardless of how much of the writing-up they undertake, or whether they are present when work is presented at a conference). That is a simple principle, which unfortunately may lead to disputes as it needs to be interpreted when applied; but, in most academic fields, there are conventions regarding what kind of contribution is judged significant and substantive enough for authorship.

It may well be that Prof. Mohialdin was the principal investigator on this study and that the contributions of Prof. Wainman and Prof. Shali were more marginal, and so it was not obvious whether or not they should be considered authors when reporting the study. But it is less easy to see how they qualified for authorship on the poster but not on the journal article with the same title which seems (?) to be the text of the poster (i.e., describes itself as being the poster). [It is even more difficult to see how they could be authors of the poster when it was presented at one conference, but not when it was presented somewhere else. 4]

Of course, one trivial suggestion might be that Wainman and Shali contributed the final two paragraphs of the abstract, and the graphs, and that without these the – thus reduced – version in the journal only deserved one author according to the normal academic authorship conventions. That is clearly not an acceptable rationale, as academic studies have to be understood more holistically than that!

Perhaps Wainman and Shali asked to have their names left off the paper as they did not want to be published in a journal of chemistry that would publish a provisional and incomplete account of a medical education practice study classified as a review article. Maybe they suspected that this would hardly enhance their scholarly reputations?

Work cited:
  • Feilchenfeld, Z., Dornan, T., Whitehead, C., & Kuper, A. (2017). Ultrasound in undergraduate medical education: a systematic and critical review. Medical Education. 51: 366-378. doi: 10.1111/medu.13211
  • Mohialdin, V. (2018a) An overview of the first year Undergraduate Medical Students Feedback on the Point of Care Ultrasound Curriculum. Journal of Chemistry: Education Research and Practice, 2 (2), 1-2.
  • Mohialdin, V. (2018b). An overview of the first year undergraduate medical students feedback on the point of care ultrasound curriculum. Journal of Health Education Research & Development, 6, 30.
  • Mohialdin, V., Wainman, B. & Shali, A. (2018) An overview of the first year Undergraduate Medical Students Feedback on the Point of Care Ultrasound Curriculum. The FASEB Journal, 32 (S1: Experimental Biology 2018 Meeting Abstracts), 636.4
  • Taber, K. S. (2013). Classroom-based Research and Evidence-based Practice: An introduction (2nd ed.). London: Sage.
  • Taber, K. S. (2018). Assigning Credit and Ensuring Accountability. In P. A. Mabrouk & J. N. Currano (Eds.), Credit Where Credit Is Due: Respecting Authorship and Intellectual Property (Vol. 1291, pp. 3-33). Washington, D.C.: American Chemical Society. [The publisher appears to have made this open access]

Footnotes:

1 The following section appears as part of the instructions for authors:

"Article Types

Journal of Chemistry: Education Research and Practice accepts Original Articles, Review, Mini Review, Case Reports, Editorial, and Letter to the Editor, Commentary, Rapid Communications and Perspectives, Case in Images, Clinical Images, and Conference Proceedings.

In general the Manuscripts are classified in to following [sic] groups based on the criteria noted below [I could not find these]. The author(s) are encouraged to request a particular classification upon submitting (please include this in the cover letter); however the Editor and the Associate Editor retain the right to classify the manuscript as they see fit, and it should be understood by the authors that this process is subjective to some degree. The chosen classification will appear in the printed manuscript above the manuscript title."

https://opastonline.com/journal/journal-of-chemistry-education-research-and-practice/author-guidelines

2 The ethical concerns in this kind of research are minimal, and in an area like medical education one might feel there is a moral imperative for future professionals to engage in activities to innovate and to evaluate such innovations. However, there is a general principle that all participants in research should give voluntary, informed consent.

(Read about Research Ethics here).

According to the policy statement on the author's (/authors'?) University's website (Research involving human participants, Sept. 2002) at the time of this posting (November, 2021) McMaster University "endorses the ethical principles cited in the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (1998)".

According to Article 2.1 of that document, Research Ethics Board Review is required for any research involving "living human participants". There are some exemptions, including (Article 2.5): "Quality assurance and quality improvement studies, program evaluation activities, and performance reviews, or testing within normal educational requirements when used exclusively for assessment, management or improvement purposes" (my emphasis).

My reading then is that this work would not have been subject to requiring approval following formal ethical review if it had been exclusively used for internal purposes, but that publication of the work as research means it should have been subject to Research Ethics Board Review before being carried out. This is certainly in line with advice to teachers who invite their own students to participate in research into their teaching that may be reported later (in a thesis, at a conference, etc.) (Taber, 2013, pp.244-248).


3 Some days ago, I wrote to the Journal of Chemistry: Education Research and Practice (in reply to an invitation to publish in the journal), with a copy of the email direct to the editor, asking where I could find the graphs referred to in this paper, but have not yet had a response. If I do get a reply I will report this in the comments below.


4 Since drafting this post, I have found another publication with the same title published in an issue of another journal reporting conference proceedings (Mohialdin, 2018b):

A third version of the publication (Mohialdin, 2018b).

The piece begins with the same material as in the table above. It ends with the following account of empirical work:

A questionnaire was handed to the first year undergraduate medical students at the end of session four, to evaluate their hands on ultrasound session experience. Answers were collected and data was [sic] analyzed into multiple graphs. The comments that we [sic] got from this survey were mainly positive; here are a few of the constructive comments that we [sic] received: This was a great learning experience; it was a great learning opportunity; very useful, leaned [sic] a lot; and loved the hand on experience.

Mohialdin, 2018b, p.30

There is nothing wrong with the same poster being presented at multiple conferences; this is quite a common academic strategy. Mohialdin (2018b) reports from a conference in Japan, whereas Mohialdin, Wainman and Shali (2018) refers to a US meeting – but it is not clear why the author lists differ when the two presentations would seem to report the same research. Indeed, it seems reasonable to assume, from the commonality between Mohialdin (2018b) and Mohialdin, Wainman and Shali (2018), that they are the same report (poster).

Profs. Wainman and Shali should be authors of any report of this study if, and only if, they made substantial intellectual contributions to the work reported – and, surely, either they did, or they did not.

Not motivating a research hypothesis

A 100% survey return that represents 73% (or 70%, or perhaps 48%) of the population

Keith S. Taber

…the study seems to have looked for a lack of significant difference regarding a variable which was not thought to have any relevance…

This is like hypothesising…that the amount of alkali needed to neutralise a certain amount of acid will not depend on the eye colour of the researcher; experimentally confirming this is the case; and then seeking to publish the results as a new contribution to knowledge.

…as if a newspaper headline was 'Earthquake latest' and then the related news story was simply that, as usual, no earthquakes had been reported.

Structuring a research report

A research report tends to have a particular kind of structure. The first section sets out background to the study to be described. Authors offer an account of the current state of the relevant field – what can be called a conceptual framework.

In the natural sciences it may be that in some specialised fields there is a common, accepted way of understanding that field (e.g., the nature of important entities, the relevant variables to focus on). This has been described as working within an established scientific 'paradigm'. 1 However, social phenomena (such as classroom teaching) may be of such complexity that a full account requires exploration at multiple levels, with a range of analytical foci (Taber, 2008). 2 Therefore the report may indicate which particular theoretical perspective (e.g., personal constructivism, activity theory, Gestalt psychology, etc.) has informed the study.

This usually leads to one or more research questions, or even specific hypotheses, that are seen to be motivated by the state of the field as reflected in the authors' conceptual framework.

Next, the research design is explained: the choice of methodology (overall research strategy), the population being studied and how it was sampled, the methods of data collection and development of instruments, and choice of analytical techniques.

All of this is usually expected before any discussion (leaving aside a short statement as part of the abstract) of the data collected, results of analysis, conclusions and implications of the study for further research or practice.

There is a logic to designing research. (Image after Taber, 2014).

A predatory journal

I have been reading some papers in a journal that I believed, on the basis of its misleading title and website details, was an example of a poor-quality 'predatory journal'. That is, a journal which encourages submissions simply to be able to charge a publication fee (currently $1519, according to the website), without doing the proper job of editorial scrutiny. I wanted to test this initial evaluation by looking at the quality of some of the work published.

Although the journal is called the Journal of Chemistry: Education Research and Practice (not to be confused, even if the publishers would like it to be, with the well-established journal Chemistry Education Research and Practice) only a few of the papers published are actually education studies. One of the articles that IS on an educational topic is called 'Students' Perception of Chemistry Teachers' Characteristics of Interest, Attitude and Subject Mastery in the Teaching of Chemistry in Senior Secondary Schools' (Igwe, 2017).

A research article

The work of a genuine academic journal

A key problem with predatory journals is that because their focus is on generating income they do not provide the service to the community expected of genuine research journals (which inevitably involves rejecting submissions, and delaying publication till work is up to standard). In particular, the research journal acts as a gatekeeper to ensure nonsense or seriously flawed work is not published as science. It does this in two ways.

Discriminating between high quality and poor quality studies

Work that is clearly not up to standard (as judged by experts in the field) is rejected. One might think that in an ideal world no one is going to send work that has no merit to a research journal. In reality we cannot expect authors to always be able to take a balanced and critical view of their own work, even if we would like to think that research training should help them develop this capacity.

This assumes researchers are trained, of course. Many people carrying out educational research in science teaching contexts are only trained as natural scientists – and those trained as researchers in natural science often approach the social sciences with significant biases and blind-spots when carrying out research with people. (Watch or read 'Why do natural scientists tend to make poor social scientists?')

Also, anyone can submit work to a research journal – be they genius, expert, amateur, or 'crank'. Work is meant to be judged on its merits, not by the reputation or qualifications of the author.

De-bugging research reports – helping authors improve their work

The other important function of journal review is to identify weaknesses, errors, and gaps in reports of work that may have merit, but where these limitations make the report unsuitable for publication as submitted. Expert reviewers will highlight these issues, and editors will ensure authors respond to the issues raised before possible publication. This process relies on fallible humans – in the case of reviewers, usually unpaid volunteers – but is seen as important for quality control, even if it is not a perfect system. 3

This improvement process is a 'win' all round:

  • the quality of what is published is assured so that (at least most) published studies make a meaningful contribution to knowledge;
  • the journal is seen in a good light because of the quality of the research it publishes; and
  • the authors can be genuinely proud of their publications which can bring them prestige and potentially have impact.

If a predatory journal which claims (i) to have academic editors making decisions and (ii) to use peer review does not rigorously follow proper processes, and so publishes (a) nonsense as scholarship, and (b) work with major problems, then it lets down the community and the authors – if not those making money from the deceit.

The editor took just over a fortnight to arrange any peer review, and come to a decision that the research report was ready for publication

Students' perceptions of chemistry teachers' characteristics

There is much of merit in this particular research study. Dr Iheanyi O. Igwe explains why there might be a concern about the quality of chemistry teaching in the research context, and draws upon a range of prior literature. Information about the population (the public secondary schools II chemistry students in Abakaliki Education Zone of Ebonyi State) and the sample is provided – including how the sample, of 300 students at 10 schools, was selected.

There is however an unfortunate error in characterising the population:

"the chemistry students' population in the zone was four hundred and ten (431)"

Igwe, 2017, p.8

This seems to be a simple typographic error, but the reader cannot be sure if this should read

  • "…four hundred and ten (410)" or
  • "…four hundred and thirty one (431)".

Or perhaps neither, as the abstract tells readers

"From a total population of six hundred and thirty (630) senior secondary II students, a sample of three hundred (300) students was used for the study selected by stratified random sampling technique."

Igwe, 2017, abstract

Whether the sample is 300/410 or 300/431 or even 300/630 does not fundamentally change the study, but one does wonder how these inconsistencies were not spotted by the editor, or a peer reviewer, or someone in the production department. (At least, one might wonder about this if one had not seen much more serious failures to spot errors in this journal.) A reader could wonder whether the presence of such obvious errors may indicate a lack of care that might suggest the possibility of other errors that a reader is not in a position to spot. (For example, if questionnaire responses had not been tallied correctly in compiling results, then this would not be apparent to anyone who did not have access to the raw data to repeat the analysis.) The author seems to have been let down here.

A multi-scale instrument

The final questionnaire contained 5 items on each of three scales

  • students' perception of teachers' interest in the teaching of chemistry;
  • students' perception of teachers' attitude towards the teaching of chemistry;
  • students' perception of teachers' mastery of the subject in the teaching of chemistry.

Igwe informs readers that,

"the final instrument was tested for reliability for internal consistency through the Cronbach Alpha statistic. The reliability index for the questionnaire was obtained as 0.88 which showed that the instrument was of high internal consistency and therefore reliable and could be used for the study"

Igwe, 2017, p.4

This statistic is actually not very useful information, as one would want to know about the internal consistency within each scale – an overall value across scales is not informative (conceptually, it is not even clear how it should be interpreted – perhaps as indicating that the three scales largely elicit much the same underlying factor?) (Taber, 2018). 4

There are times when aggregate information is not very informative (Image by Syaibatul Hamdi from Pixabay)

Again, one might have hoped that expert reviewers would have asked the author to quote the separate alpha values for the three scales as it is these which are actually informative.
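The point can be made concrete with a short sketch. Cronbach's alpha is simple to compute from a matrix of item scores, so reporting it per scale is little extra work. The data below are invented purely for illustration (the study's raw responses are of course not available), and the scale names are taken from the paper's three scales:

```python
from statistics import variance
import random

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondents' item-score lists."""
    k = len(items[0])  # number of items in the scale
    item_vars = [variance([row[j] for row in items]) for j in range(k)]
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented responses: 6 respondents x 15 items (three 5-item scales).
rng = random.Random(0)
responses = [[rng.randint(1, 5) for _ in range(15)] for _ in range(6)]

# Reporting alpha for each scale separately is what would be informative;
# a single pooled alpha across all 15 items conflates three constructs.
for i, name in enumerate(['interest', 'attitude', 'mastery']):
    scale = [row[5 * i: 5 * (i + 1)] for row in responses]
    print(name, round(cronbach_alpha(scale), 2))
```

A reviewer asking for the three per-scale values would be asking for exactly this kind of output, rather than one number computed over all fifteen items at once.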

The paper also offers a detailed account of the analysis of the data, and an in-depth discussion of the findings and potential implications. This is a serious study that clearly reflects a lot of work by the researcher. (We might hope that could be taken for granted when discussing work published in a 'research journal', but sadly that is not so in some predatory journals.) There are limitations of course. All research has to stop somewhere, and resources and, in particular, access opportunities are often very limited. One of these limitations is the wider relevance of the population sampled.

But do the results apply in Belo Horizonte?

This is the generalisation issue. The study concerns the situation in one administrative zone within a relatively small state in South East Nigeria. How do we know it has anything useful to tell us about elsewhere in Nigeria, let alone about the situation in Mexico or Vietnam or Estonia? Even within Ebonyi State, the Abakaliki Education Zone (that is, the area of the state capital) may well be atypical – perhaps the best qualified and most enthusiastic teachers tend to work in the capital? Perhaps there would have been different findings in a more rural area?

Yet this is a limitation that applies to a good deal of educational research. This goes back to the complexity of educational phenomena. What you find out about an electron or an oxidising agent studied in Abakaliki should apply in Cambridge, Cambridgeshire, or equally in Cambridge, Massachusetts. That cannot be claimed about what you may find out about a teacher in Abakaliki – or a student, a class, a school, a university…

Misleading study titles?

Educational research studies often have strictly misleading titles – or at least promise a lot more than they deliver. This may in part be authors making unwarranted assumptions, or it may be journal editors wanting to avoid unwieldy titles.

"This situation has inadvertently led to production of half backed graduate Chemistry educators."

Igwe, 2017, p.2

The title of this study does suggest that the study concerns perceptions of Chemistry Teachers' Characteristics …in Senior Secondary Schools, when we cannot assume that chemistry teachers in the Abakaliki Education Zone of Ebonyi State can stand for chemistry teachers more widely. Indeed some of the issues raised as motivating the need for the study are clearly not issues that would apply in all other educational contexts – that is the 'situation', which is said to be responsible for the "production of half backed [half-baked?] graduate Chemistry educators" in Nigeria, will not apply everywhere. Whilst the title could be read as promising more general findings than were possible in the study, Igwe's abstract is quite explicit about the specific population sampled.

A limited focus?

Another obvious limitation is that whilst pupils' perceptions of their teachers are very important, they do not offer a full picture. Pupils may feel the need to give positive reviews, or may have idealistic conceptions. Indeed, assuming that voluntary, informed consent was given (which would mean that students knew they could decline to take part in the research without fear of sanctions), it is of note that every one of the 30 students targeted in each of the ten schools agreed to complete the survey:

"The 300 copies of the instrument were distributed to the respondents who completed them for retrieval on the spot to avoid loss and may be some element of bias from the respondents. The administration and collection were done by the researcher and five trained research assistants. Maximum return was made of the instrument."

Igwe, 2017, p.4

To get a 100% return on a survey is pretty rare, and if normal ethical procedures were followed (with the voluntary nature of the activity made clear) then this suggests these students were highly motivated to appease adults working in the education system.

But we might ask how student perceptions of teacher characteristics actually relate to teacher characteristics?

For example, observations of the chemistry classes taught by these teachers could possibly give a very different impression of those teachers than that offered by the student ratings in the survey. (Another chemistry teacher may well be able to distinguish teacher confidence or bravado from subject mastery when a learner is not well placed to do so.) Teacher self-reports could also offer a different account of their 'Interest, Attitude and Subject Mastery', as could evaluations by their school managers. Arguably, a study that collected data from multiple sources would offer the possibility of 'triangulating' between sources.

However, Igwe is explicit about the limited focus of the study, and other complementary strands of research could be carried out to follow up on the study. So, although the specific choice of focus is a limitation, this does not negate the potential value of the study.

Research questions

Although I recognise a serious and well-motivated study, there is one aspect of Igwe's study which seemed rather bizarre. The study has three research questions (which are well-reflected in the title of the study) and a hypothesis which I suspect will likely surprise some readers.

That is not a good thing. At least, I always taught research students that unlike in a thriller or 'who done it?' story, where a surprise may engage and amuse a reader, a research report or thesis is best written to avoid such surprises. The research report is an argument that needs to flow though the account – if a reader is surprised at something the researcher reports doing then the author has probably forgotten to properly introduce or explain something earlier in the report.

Here are the research questions and hypotheses:

"Research Questions

The following research questions guided the study, thus:

How do students perceive teachers' interest in the teaching of chemistry?

How do students perceive teachers' attitude towards the teaching of chemistry?

How do students perceive teachers' mastery of the subjects in the teaching of chemistry?

Hypotheses
The following null hypothesis was tested at 0.05 alpha levels, thus:
HO1 There is no significant difference in the mean ratings of male and female students on their perception of chemistry teachers' characteristics in the teaching of chemistry."

Igwe, 2017, p.3

A surprising hypothesis?

A hypothesis – now where did that come from?

Now, I am certainly not criticising a researcher for looking for gender differences in research. (That would be hypocritical, as I looked for such differences in my own M.Sc. thesis, and published on gender differences in teacher-student interactions in physics classes, gender differences in students' interests in different science topics on starting secondary school, and links between pupil perceptions of (i) science-relatedness and (ii) gender-appropriateness of careers.)

There might often be good reasons in studies to look for gender differences. But these reasons should be stated up-front. As part of the conceptual framework motivating the study, researchers should explain that – based on their informal observations, or on anecdotal evidence, or (better) on explicit theoretical considerations, or on the findings of other related studies, or whatever reason there might be – there are good grounds to check for gender differences.

The flow of research (Underlying image from Taber, 2013) The arrows can be read as 'inform(s)'.

Perhaps Igwe had such reasons, but there seems to be no mention of 'gender' as a relevant variable prior to the presentation of the hypothesis: not even a concerning dream, or signs in the patterns of tea leaves. 5 To some extent, this is reinforced by the choice of the null hypothesis – that no such difference will be found. Even if it makes no substantive difference to a study whether a hypothesis is framed in terms of there being a difference or not, psychologically the study seems to have looked for a lack of significant difference regarding a variable which was not thought to have any relevance.

Misuse of statistics

It is important for researchers not to test for effects that are not motivated in their studies. Statistical significance tells a researcher something is unlikely to happen just by chance – but it still might. Just as someone buying a lottery ticket is unlikely to win the lottery – but they might. Logically a small proportion of all the positive statistical results in the literature are 'false positives' because unlikely things do happen by chance – just not that often. 6 The researcher should not (metaphorically!) go round buying up lots of lottery tickets, and then seeing an occasional win as something more than chance.
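The lottery analogy can be made concrete with a small simulation (purely illustrative, not based on the article's data). When the null hypothesis is actually true, a test at the 0.05 level will still declare a 'significant' difference in roughly one comparison in twenty, so fishing through many unmotivated tests is guaranteed to turn up occasional spurious 'findings':

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (z-test)."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def false_positive_rate(n_tests=1000, n=50, alpha=0.05, seed=1):
    """Fraction of tests declared 'significant' when the null is true."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_tests):
        # Both 'groups' are drawn from the same population, so any
        # significant difference is, by construction, a false positive.
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        if two_sample_p(a, b) < alpha:
            hits += 1
    return hits / n_tests

# The rate comes out close to alpha: unmotivated comparisons 'succeed'
# by chance at about this rate, whatever is (not) being compared.
print(false_positive_rate())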

No alarms and no surprises

And what was found?

"From the result of analysis … the null hypothesis is accepted which means that there is no significant difference in the mean ratings of male and female students in their perception of chemistry teachers' characteristics (interest, attitude and subject mastery) in the teaching of chemistry."

Igwe, 2017, p.6

This is like hypothesising, without any motivation, that the amount of alkali needed to neutralise a certain amount of acid will not depend on the eye colour of the researcher; experimentally confirming this is the case; and then seeking to publish the results as a new contribution to knowledge.

Why did Igwe look for a gender difference (or, more strictly, look for no gender difference)?

  • A genuine relevant motivation missing from the paper?
  • An imperative to test for something (anything)?
  • Advice that journals are more likely to publish studies using statistical testing?
  • Noticing that a lot of studies do test for gender differences (whether there seems a good reason to do so or not)?

This seems to be an obvious point for peer reviewers and the editor to raise: asking the author to either (a) explain why it makes sense to test for gender differences in this study – or (b) to drop the hypothesis from the paper. It seems they did not notice this, and readers are simply left to wonder – just as you would if a newspaper headline was 'Earthquake latest' and then the related news story was simply that, as usual, no earthquakes had been reported.

Work cited:


Footnotes:

1 The term paradigm became widely used in this sense after Kuhn's (1970) work, although he later acknowledged criticisms of the ambiguous way he used the term – which could refer both to the standard examples through which one learns about a field by working through them ('paradigms' in a narrow sense) and to the wider set of shared norms and values that develop in an established field, which he later termed the 'disciplinary matrix'. In psychology research 'paradigm' may be used in the more specific sense of an established research design/protocol.


2 There are at least three ways of explaining why a lot of research in the social sciences seems more chaotic and less structured to outsiders than most research in the natural sciences.

  • a) Ontology. Perhaps the things studied in the natural sciences really exist, and some of those in the social sciences are epiphenomena and do not reflect fundamental, 'real', things. There may be some of that sometimes, but, if so, I think it is a matter of degree (after all, natural scientists were not above studying the ether or phlogiston), not least because of the third option (c).
  • b) The social sciences are not as mature as many areas of the natural sciences and so are still 'pre-paradigmatic'. I am sure there is sometimes an element of this: any new field will take time to focus in on reliable and productive ways of making sense of its domain.
  • c) The complexity of the phenomena. Social phenomena are inherently more complex, often involving feedback loops between participants' behaviours and feelings and beliefs (including about the research, the researcher, etc.)

Whilst (a) and (b) may sometimes be pertinent, I think (c) is often especially relevant to this question.


3 An alternative approach that has gained some credence is to allow authors to publish, but then invite reader reviews which will also be published – and so allowing a public conversation to develop so readers can see the original work, criticism, responses to those criticisms, and so forth, and make their own judgements. To date this has only become common practice in a few fields.

Another approach for empirical work is for authors to submit research designs to journals for peer review – once a design has been accepted by the journal, the journal agrees to publish the resulting study as long as the agreed protocol has been followed. (This is seen as helping to avoid the distorting bias in the literature towards 'positive' results as studies with 'negative' results may seem less interesting and so less likely to be accepted in prestige journals.) Again, this is not the norm (yet) in most fields.


4 The statistic has a maximum value of 1, which would indicate that the items were all equivalent, so 0.88 seems a high value, till we note that a high value of alpha is a common artefact of including a large number of items.

However, playing Devil's advocate, I might suggest that the high overall value of alpha could suggest that the three scales

  • students' perception of teachers' interest in the teaching of chemistry;
  • students' perception of teachers' attitude towards the teaching of chemistry;
  • students' perception of teachers' mastery of the subject in the teaching of chemistry

are all tapping into a single underlying factor that might be something like

  • my view of whether my chemistry teacher is a good teacher

or even

  • how much I like my chemistry teacher

5 Actually the discrimination made is between male and female students – it is not clear what question students were asked to determine 'gender', and whether other response options were available, or whether students could decline to respond to this item.


6 Our intuition might be that only a small proportion of reported positive results are false positives, because, of course, positive results reflect things unlikely to happen by chance. However if, as is widely believed in many fields, there is a bias to reporting positive results, this can distort the picture.

Imagine someone looking for factors that influence classroom learning. Consider that 50 variables are identified to test, such as teacher eye colour, classroom wall colour, type of classroom window frames, what the teacher has for breakfast, the day of the week that the teacher was born, the number of letters in the teacher's forename, the gender of the student who sits nearest the fire extinguisher, and various other variables which are not theoretically motivated to be considered likely to have an effect. With a confidence level of p[robability] ≤ 0.05 it is likely that there will be a very small number of positive findings JUST BY CHANCE. That is, if you look across enough unlikely events, it is likely some of them will happen. There is unlikely to be a thunderstorm on any particular day. Yet there will likely be a thunderstorm some day in the next year. If a report is written and published which ONLY discusses a positive finding then the true statistical context is missing, and a likely situation is presented as unlikely to be due to chance.
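This arithmetic is easy to illustrate with a small simulation (purely hypothetical, just for illustration): under a true null hypothesis a p-value is, ideally, a uniform random draw from [0, 1], so screening 50 irrelevant classroom variables at p ≤ 0.05 can be modelled by drawing 50 such numbers and counting how many fall below 0.05.

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

# Under a true null hypothesis, a p-value behaves like a uniform
# random draw from [0, 1] - so each of the 50 irrelevant classroom
# variables can be modelled as one such draw.
n_variables = 50
alpha_level = 0.05

p_values = [random.random() for _ in range(n_variables)]
false_positives = sum(1 for p in p_values if p < alpha_level)

print(f"'Significant' results among {n_variables} null variables: {false_positives}")
# On average we expect 50 x 0.05 = 2.5 false positives per screening.
```

Reporting only the two or three 'winning' variables, without mentioning the other 47 tests, is exactly the distortion described above.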


'In my head, son' – mind reading commentators

Keith S. Taber

*

"Tim Howard is a little frustrated with himself that it wasn't a tidier save, because he feels he ought to have done better with the first attempt."

Thus claimed the commentator on the television highlights programme Match of the Day (BBC), discussing the association football (soccer) match Everton vs. Spurs on May 25th 2015.

It was not a claim that was obviously contradicted by the footage being shown, but inevitably my reaction (as someone who teaches research methods to students) was 'how do you know?' The goalkeeper was busy playing a game of football, some distance from the commentator, and there was no obvious conversation between them. The answer of course is that the commentator was a mind reader who knew what someone else was thinking and feeling.

This is not so strange, as we are all mind readers – or at least we commonly make statements about the thoughts, attitudes, feelings, beliefs, etc. of others, based on their past or present behaviour, subtle body language, facial expressions and/or the context of their current predicament.

Of course, that is not strictly mind reading, as minds are not visible. But part of normal human development is acquiring a 'theory of mind' that allows us to draw inferences about the thoughts and feelings of others – the internal subjective experiences of others – drawing upon our own feelings and thoughts as a model. In everyday life, this ability is essential to normal social functioning – even if we do not always get it right. Yet we become so used to relying upon these skills that public commentators (well, a sports commentator here) feel no discomfort in not only interpreting the play, but the feelings and thoughts of the players they are observing.

A large part of the kind of educational research that I tend to be involved in is very similar to this – it involves using available evidence to make inferences about what others think and feel. [There are many examples in the blog posts on this site.]  Sometimes we have very strong evidence (what people tell us about their thoughts and feelings) but even then this is indirect evidence – we can never actually see another mind at work (1). We do not "see the cogs moving", even if we may like to talk as though we do.

In everyday life we forgive the kinds of under-determined claims made by sports commentators, and may not even notice when they draw such inferences, nor question what support their claims have. Sadly this seems to be a human ability that we often take a little too much for granted. A great deal of the research literature in science education is written as though research offers definite results about students' conceptions (and misconceptions) and whether or not they know something or understand it – as though such matters are simple, binary, and readily detected (1). Yet research actually suggests this is far from the case (2).

Research that explores students' thinking and learning is actually very challenging, and is in effect an enterprise to build and test models rather than uncover simple truths. I suspect quite a bit of the disagreement about the nature of student thinking in the science education research literature is down to researchers who forget that even if people are mind readers in everyday life, they must become careful and self-critical model builders when they are seeking to make claims presented as research (1).

References:

(1) Taber, K. S. (2013). Modelling Learners and Learning in Science Education: Developing representations of concepts, conceptual structure and conceptual change to inform teaching and research. Dordrecht: Springer.

(2) Taber, K. S. (2014). Student Thinking and Learning in Science: Perspectives on the nature and development of learners' ideas. New York: Routledge.

* Previously published at http://people.ds.cam.ac.uk/kst24/science-education-research: 25th May 2015

Why write about Cronbach's alpha?

Keith S. Taber

What is Cronbach's alpha?

It is a statistic that is commonly quoted by researchers when reporting the use of scales and questionnaires.

Why carry out a study of the use of this statistic?

I am primarily a qualitative researcher, so do not usually use statistics in my own work. However, I regularly came across references to alpha in manuscripts I was asked to review for journals, and in manuscripts submitted to the journal I was editing myself (i.e., Chemistry Education Research and Practice).

I did not really understand what alpha was, or what it was supposed to demonstrate, or what value was desirable – which made it difficult to evaluate that aspect of a manuscript which was citing the statistic. So, I thought I had better find out more about it.

So, what is Cronbach's alpha?

It is a statistic that tests for internal consistency in scales. It should only be applied to a scale intended to measure a unidimensional factor – something it is assumed can be treated as a single underlying variable (perhaps 'confidence in physics learning', 'enjoyment of school science practicals', or 'attitude to genetic medicine').

If someone developed a set of questionnaire items intended to find out, say, how skeptical a person was regarding scientific claims in the news, and administered the items to a sample of people, then alpha would offer a measure of the similarity of the set of items in terms of the patterns of responses from that sample. As the items are meant to be measuring a single underlying factor, they should all elicit similar responses from any individual respondent. If they do, then alpha would approach 1 (its maximum value).
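For readers who like to see the arithmetic, here is a minimal sketch in Python (the response data are invented for illustration), using the standard formula alpha = [k/(k-1)] x (1 - sum of item variances / variance of total scores):

```python
def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondents' item scores.

    responses: list of lists; responses[p][i] is person p's score on item i.
    """
    n_items = len(responses[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_variances = [variance([person[i] for person in responses])
                      for i in range(n_items)]
    total_variance = variance([sum(person) for person in responses])

    return (n_items / (n_items - 1)) * (1 - sum(item_variances) / total_variance)

# Invented responses from five people to a four-item scale (1-7 agreement):
data = [
    [5, 6, 5, 6],
    [2, 3, 2, 2],
    [7, 7, 6, 7],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(data), 2))  # → 0.98: these made-up responses are deliberately consistent across items
```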

Does alpha not measure reliability?

Often, studies state that alpha is measuring reliability – as internal consistency is sometimes considered a kind of reliability. However, more often in research what we mean by reliability is that repeating the measurements later will give us (much) the same result – and alpha does not tell us about that kind of reliability.

I think there is a kind of metaphorical use of 'reliability' here. The technique derives from an approach used to test equivalence based on dividing the items in a scale into two subsets*, and seeing whether analysis of the two subsets gives comparable results – so one could see if the result from the 'second' measure reliably reproduced that from the 'first' (but of course the ordering of the two calculations is arbitrary, and the two subsets of items were actually administered at the same time as part of a single scale).

* In calculating alpha, all possible splits are taken into account.
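That split-half ancestry can itself be sketched (again with invented responses): correlate scores on two halves of the scale, then apply the Spearman-Brown correction to estimate the reliability of the full-length scale.

```python
from statistics import mean, pvariance

# Invented responses from four people to a four-item scale (1-7 agreement):
data = [
    [6, 5, 6, 6],
    [3, 2, 2, 3],
    [7, 6, 7, 6],
    [4, 5, 4, 4],
]

# Split the items into two halves and total each half per person:
half_a = [p[0] + p[2] for p in data]  # items 1 and 3
half_b = [p[1] + p[3] for p in data]  # items 2 and 4

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pvariance(xs) ** 0.5 * pvariance(ys) ** 0.5)

r = pearson_r(half_a, half_b)
# Spearman-Brown correction: estimated reliability of the full-length scale.
print(round(2 * r / (1 + r), 2))
```

Alpha generalises this by, in effect, averaging over every possible way of making the split.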

Okay, so that's what alpha is – but, still, why carry out a study of the use of this statistic?

Once I understood what alpha was, I was able to see that many of the manuscripts I was reviewing did not seem to be using it appropriately. I got the impression that alpha was not well understood among researchers even though it was commonly used. I felt it would be useful to write a paper that both highlighted the issues and offered guidance on good practice in applying and reporting alpha.

In particular, studies would often cite alpha for broad features like 'understanding of chemistry' where it seems obvious that we would not expect understanding of pH, understanding of resonance in benzene, understanding of oxidation numbers, and understanding of the mass spectrometer, to be the 'same' thing (or if they are, we could save a lot of time and effort by reducing exams to a single question!).

It was also common for studies using instruments with several different scales to not only quote alpha for each scale (which is appropriate), but to also give an overall alpha for the whole instrument even though it was intended to be multidimensional. So imagine a questionnaire which had a section on enjoyment of physics, another on self-confidence in genetics, and another on attitudes to science-fiction elements in popular television programmes: why would a researcher want to claim there was a high level of internal consistency across what are meant to be such distinct scales?

There was also incredible diversity in how different authors describe different values of alpha they might calculate – so the same value of alpha might be 'acceptable' in one study, 'fairly high' in another, and 'excellent' in a third (see figure 1).


Fig. 1 Qualitative descriptors used for values/ranges of values of Cronbach's alpha reported in papers in leading science education journals (The Use of Cronbach's Alpha When Developing and Reporting Research Instruments in Science Education)

Some authors also suggested that a high value of alpha for an instrument implied it was unidimensional – that all the items were measuring the same thing – which is not the case.

But isn't it the number that matters: we want alpha to be as high as possible, and at least 0.7?

Yes, and no. And no, and no.

But the number matters?

Yes of course, but it needs to be interpreted for a reader: not just 'alpha was 0.73'.

But the critical value is 0.7, is that right?

No.

It seems extremely common for authors to assume that they need alpha to reach, or exceed, 0.7 for their scale to be acceptable. But that value seems to be completely arbitrary (and was not what Cronbach was suggesting).

Well, it's a convention, just as p<0.05 is commonly taken as a critical value.

But it is not just like that. Alpha is very sensitive to how many items are included in a scale. If there are only a few items, then a value of, say, 0.6 might well be sensibly judged acceptable. In any case it is nearly always possible to increase alpha by adding more items till you reach 0.7.
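The dependence on the number of items can be seen from the 'standardised' form of alpha, which involves only the number of items, k, and the mean inter-item correlation, r. A quick sketch (assuming, purely for illustration, a modest mean correlation of 0.2):

```python
def standardised_alpha(k, r):
    # Standardised Cronbach's alpha for k items with mean
    # inter-item correlation r.
    return k * r / (1 + (k - 1) * r)

# Holding the (modest) inter-item correlation fixed at 0.2,
# alpha creeps past the supposed 0.7 threshold purely by adding items:
for k in (4, 8, 10, 16):
    print(k, round(standardised_alpha(k, 0.2), 2))
# 4 items: 0.5; 8 items: 0.67; 10 items: 0.71; 16 items: 0.8
```

Nothing about the quality of the items changes between the four-item and sixteen-item versions – only their number.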

But only if the added items genuinely fit for the scale?

Sadly, no.

Adding a few items that are similar to each other, but not really fitting the scale, would usually increase alpha. So adding 'I like Manchester United', 'Manchester United are the best soccer team', and 'Manchester United are great' as items to be responded to in a scale about self-efficacy in science learning would likely increase alpha.

Are you sure: have you tried it?

Well, no. But, as I pointed out above, instruments often contain unrelated scales, and authors would sometimes calculate an overall alpha that they found to be greater than that of each of the component scales – at least, that would seem to be the implication if it were assumed that a larger alpha means a higher internal consistency, without factoring in how alpha tends to be larger the more items are included in the calculation.

But still, it is clear that the bigger alpha the better?

Up to a point.

But consider a scale with five items where everybody responds to each item in exactly the same way (not, that is, that different people respond in the same way as each other; just that whatever response a person gives to one item – e.g., 2 on a scale of 1-7 – they also give to the other items). So alpha should be 1, as high as it can get. But Cronbach would suggest you are wasting researcher and participant effort by having many items if they all elicit the same response. The point of scales having several items is that we assume no one item directly catches perfectly what we are trying to measure. Whether they do or not, there is no point in multiple items that are effectively equivalent.

Was it necessary to survey science education journals to make the point?

I did not originally think so.

My draft manuscript made the argument by drawing on some carefully selected examples of published papers in relation to the different issues I felt needed to be highlighted and discussed. I think the draft manuscript effectively made the point that there were papers getting published in good journals that quoted alpha but seemed to simply assume it demonstrated something (unexplained) to readers, and/or used alpha when their instrument was clearly not meant to be unidimensional, and/or took 0.7 as a definitive cut-off regardless of the number of items concerned, and/or quoted alpha values for overall instruments as well as for the distinct scales as if that added some evidence of instrument quality, or claimed a high value of alpha for an instrument demonstrated it was unidimensional.

So why did you then spend time reviewing examples across four journals over a whole year of publication?

Although I did not think this was necessary, when the paper was reviewed for publication a journal reviewer felt the paper was too anecdotal: that just because a few papers included weak practice, that may not have been especially significant. I think there was also a sense that a paper critiquing a research technique did not fit the usual categories of study published in the journal, but that a study with more empirical content (even if the data were published papers) would better fit the journal.

At that point I could have decided to try and get the paper published elsewhere, but Research in Science Education is a good journal and I wanted the paper in a good science education journal. This took extra work, but satisfied the journal.

I still think the paper would have made a contribution without the survey BUT the extra work did strengthen the paper. In retrospect, I am happy that I responded to review comments in that way – as it did actually show just how frequently alpha is used in science education, and the wide variety of practice in reporting the statistic. Peer review is meant to help authors improve their work, and I think it did here.

Has the work had impact?

I think so, but…

The study has been getting a lot of citations, and it is always good to think someone notices a study, given the work it involves. Perhaps a lot of people have genuinely thought about their use of alpha as a result of reading the paper, and perhaps there are papers out there which do a better job of using and reporting alpha as a result of authors reading my study. (I would like to think so.)

However, I have also noticed that a lot of papers citing this study as an authority for using alpha in the reported research are still doing the very things I was criticising, and sometimes directly justifying poor practice by citing my study! These authors either had not actually read the study (but were just looking for something about alpha to cite) or perhaps did not fully appreciate the points made.

Oh well, I think it was Oscar Wilde who said there is only one thing in academic life worse than being miscited…