Creeping bronzes

Evidence of journalistic creep in 'surprising' Benin bronzes claim


Keith S. Taber


How certain can we be about the origin of metals used in historic artefacts? (Image by Monika from Pixabay)


Science offers reliable knowledge of the natural world – but not absolutely certain knowledge. Conclusions from scientific studies follow from the results, but no research can offer absolutely certain conclusions as there are always provisos.

Read about critical reading of research

Scientists tend to know this, something emphasised for example by Albert Einstein (1940), who described scientific theories (used to interpret research results) as "hypothetical, never completely final, always subject to question and doubt".

When scientists talk to one another within some research programme they may use a shared linguistic code where they can omit the various conditionals ('likely', 'it seems', 'according to our best estimates', 'assuming the underlying theory', 'within experimental error', and the rest) as these are understood, and so may be left unspoken, thus increasing economy of language.

When scientists explain their work to a wider public such conditionals may also be left out to keep the account simple, but they really should be mentioned. A particular trope that annoyed me when I was younger was the high frequency of links in science documentaries that told me "this could only mean…" (Taber, 2007) when honest science is always framed more along the lines of "this would seem to mean…", "this could possibly mean…", "this suggested the possibility"…

Read about scientific certainty in the media

Journalistic creep

By journalistic creep I mean the tendency for some journalists who act as intermediaries between research scientists and the public to keep the story simple by omitting important provisos. Science teachers will appreciate this, as they often have to decide which details can be included in a presentation without losing or confusing the audience. A useful mantra may be:

Simplification may be necessary – but oversimplification can be misleading

A slightly different type of journalistic creep occurs within stories themselves. Sometimes the banner headline and the introduction to a piece report definitive, certain scientific results – but reading on (for those who do!) reveals nuances not acknowledged at the start. Teachers will again appreciate this tactic: offer the overview with the main point, before going back to fill in the more subtle aspects. But then, teachers have (somewhat) more control over whether the audience engages with the full account.

I am not intending to criticise journalists in general here, as scientists themselves have a tendency to do something similar when it comes to finding titles for papers that will attract attention by perhaps suggesting something more certain (or, sometimes, poetic or even controversial) than can be supported by the full report.


An example of a Benin Bronze (a brass artefact from what is now Nigeria) in the British [sic] Museum

(British Museum, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)


Where did the metal for the Benin bronzes come from?

The title of a recent article in the RSC's magazine for teachers, Education in Chemistry, proclaimed a "Surprise origin for Benin bronzes".1 The article started with the claim:

"Geochemists have confirmed that most of the Benin bronzes – sculptured heads, plaques and figurines made by the Edo people in West Africa between the 16th and 19th centuries – are made from brass that originated thousands of miles away in the German Rhineland."

So, this was something that scientists had apparently confirmed as being the case.

Reading on, one finds that

  • it has been "long suspected that metal used for the artworks was melted-down manillas that the Portuguese brought to West Africa"
  • scientists "analysed 67 manillas known to have been used in early Portuguese trade. The manillas were recovered from five shipwrecks in the Atlantic and three land sites in Europe and Africa"
  • they "found strong similarities between the manillas studied and the metal used in more than 700 Benin bronzes with previously published chemical compositions"
  • and "the chemical composition of the copper in the manillas matched copper ores mined in northern Europe"
  • and "suggests that modern-day Germany, specifically the German Rhineland, was the main source of the metal".

So, there is a chain of argument here which seems quite persuasive, but to move from this to it being "confirmed that most of the Benin bronzes…are made from brass that originated …in the German Rhineland" seems an example of journalistic creep.

The reference to "the chemical composition of the copper [sic] in the manillas" is unclear, as according to the original research paper the samples of manillas analysed were:

"chemically different from each other. Although most manillas analysed here …are brasses or leaded brasses, sometimes with small amounts of tin, a few specimens are leaded copper with little or no zinc."

Skowronek, et al., 2023

The key data presented in the paper concerned the ratios of different lead isotopes (205Pb:204Pb; 206Pb:204Pb; 207Pb:204Pb; 208Pb:204Pb {see the reproduced figure below}) in

  • ore from different European locations (according to published sources)
  • sampled Benin bronzes (as reported from earlier research), and
  • sampled recovered manillas

and the ratios of different elements (Ni:As; Sb:As; Bi:As) in previously sampled Benin bronzes and sampled manillas.

The tendency to treat a chain of argument, where each link seems reasonably persuasive, as supporting fairly certain conclusions is logically flawed (it is like concluding, from the knowledge that one's chance of dying on any particular day is very low, that one must be immortal). Yet it seems reflected in something I have noticed with some research students: often their overall confidence in the conclusions of a research paper they have scrutinised is higher than their confidence in some of the distinct component parts of that study.


An example of a student's evaluation of a research study


This is like being told by a mechanic that your cycle brakes have a 20% chance of failing in the next year; the tyres 30%; the chain 20%; and the frame 10%; and concluding from this that there is only about a 20% chance of having any kind of failure in that time!
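To make the arithmetic explicit, here is a minimal sketch (assuming, purely for illustration, that the four risks are independent):

```python
# Combining independent risks of failure: the overall risk comes from the
# chance that nothing fails, not from averaging the separate risks.
p_fail = {"brakes": 0.20, "tyres": 0.30, "chain": 0.20, "frame": 0.10}

p_nothing_fails = 1.0
for p in p_fail.values():
    p_nothing_fails *= (1 - p)      # every component has to survive

p_any_failure = 1 - p_nothing_fails
print(f"Chance of at least one failure: {p_any_failure:.0%}")   # about 60%, not ~20%
```

So even though no single component is particularly likely to fail, some failure is more likely than not.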

A definite identification?

The peer reviewed research paper which reports the study discussed in the Education in Chemistry article informs readers that

"In the current study, documentary sources and geochemical analyses are used to demonstrate that the source of the early Portuguese "tacoais" manillas and, ultimately, the Benin Bronzes was the German Rhineland."

"…this study definitively identifies the Rhineland as the principal source of manillas at the opening of the Portuguese trade…"

Skowronek, et al.,2023

which sounds pretty definitive, but interestingly the study did not rely on chemical analysis alone, but also on 'documentary' evidence. In effect, historical evidence provided another link in the argument, by suggesting the range of possible sources of the alloy that should be considered in any chemical comparisons. This assumes there were no mining and smelting operations providing metal for the trade with Africa which have not been well-documented by historians. That seems a reasonable assumption, but it adds another proviso to the conclusions.

The researchers reported that

Pre-18th century manillas share strong isotopic similarities with Benin's famous artworks. Trace elements such as antimony, arsenic, nickel and bismuth are not as similar as the lead isotope data…. The greater data derivation suggests that manillas were added to older brass or bronze scrap pieces to produce the Benin works, an idea proposed earlier.

and acknowledged that

Millions of these artifacts were sent to West Africa where they likely provided the major, virtually the only, source of brass for West African casters between the 15th and the 18th centuries, including serving as the principal metal source of the Benin Bronzes. However, the difference in trace elemental patterns between manillas and Benin Bronzes does not allow postulating that they have been the only source.

The figure below is taken from the research report.


Part of Figure 2 from the open access paper (© 2023 Skowronek et al. – distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)

The chart shows results from sampled examples of Benin bronzes (blue circles), compared with the values of the same isotope ratios from different copper ore sites (squares) and manillas sampled from different archaeological sites (triangles).


The researchers feel that the pattern of clustering of results (in this, and other similar comparisons between lead isotope ratios) from the Benin bronzes, compared with those from the sampled manillas, and the ore sites, allows them to identify the source of metal re-purposed by the Edo craftspeople to make the bronzes.

It is certainly the case that the blue circles (which refer to the artworks) and the green squares (which refer to copper ore samples from Rhineland) do seem to generally cluster in a similar region of the graph – and that some of the samples taken from the manillas also seem to fit this pattern.

I can see why this might strongly suggest the Rhineland (certainly more so than Wales) as the source of the copper believed to be used in manillas which were traded in Africa and are thought to have been later melted down as part of the composition of alloy used to make the Benin bronzes.

Whether that makes for either

  • definitive identification of the Rhineland as the principal source of manillas (Skowronek paper), or
  • confirmation that most of the Benin bronzes are made from brass that originated thousands of miles away in the German Rhineland (EiC)

seems somewhat less certain. Just as scientific claims should be.


A conclusion for science education

It is both human nature, and often good journalistic or pedagogic practice, to begin with a clear, uncomplicated statement of what is to be communicated. But we also know that what is heard or read first may be better retained in memory than what follows. It also seems that people in general tend to apply the wrong kind of calculus when there are multiple sources of doubt – being more likely to estimate overall doubt as being the mean or modal level of the several discrete sources of doubt, rather than something that accumulates step by step.
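As a minimal illustration (with invented figures): if a conclusion rests on a chain of five steps, and we are 90% confident in each step, the mean doubt per step is only 10%, but the doubt in the overall conclusion is much greater.

```python
# Doubt accumulates along a chain of argument rather than averaging out.
confidence_per_step = 0.9
steps = 5
print(f"Overall confidence: {confidence_per_step ** steps:.0%}")   # about 59%, not 90%
```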

It seems there is a major issue here for science education in training young people in critically questioning claims, looking for the relevant provisos, and understanding how to integrate levels of doubt (or, similarly, risk) that are distributed over a sequence of phases in a process.


All research conclusions (in any empirical study in any discipline) rely on a network of assumptions and interpretations, any one of which could be a weak link in the chain of logic. This is my take on some of the most critical links and assumptions in the Benin bronzes study. One could easily further complicate this scheme (for example, I have ignored the assumptions about the validity of the techniques and calibration of the instrumentation used to find the isotopic composition of metal samples).


Work cited:

Note:

1 It is not clear to me what the surprise was – but perhaps this is meant to suggest the claim may be surprising to readers of the article. The study discussed was premised on the assumption that the Benin Bronzes were made from metal largely re-purposed from manillas traded from Europe, which had originally been cast in one of the known areas in Europe with metal-working traditions. The researchers included the Rhineland as one of the potential regional sites they were considering. So, it was surely a surprise only in the sense that rolling a die and having it land on 4, rather than say 2 or 5, would be a surprise.

But then, would you be just as likely to read an article entitled "Benin bronzes found to have anticipated origin"?


The best science education journal

Where is the best place to publish science education research?


Keith S. Taber



| Outlet | Description | Notes |
| --- | --- | --- |
| International Journal of Science Education | Top-tier general international science education journal | Historically associated with the European Science Education Research Association |
| Science Education | Top-tier general international science education journal | |
| Journal of Research in Science Teaching | Top-tier general international science education journal | Associated with NARST |
| Research in Science Education | Top-tier general international science education journal | Associated with the Australasian Science Education Research Association |
| Studies in Science Education | Leading journal for publishing in-depth reviews of topics in science education | |
| Research in Science and Technological Education | Respected general international science education journal | |
| International Journal of Science and Maths Education | Respected general international science education journal | Founded by the National Science and Technology Council, Taiwan |
| Science Education International | Publishes papers that focus on the teaching and learning of science in school settings ranging from early childhood to university education | Published by the International Council of Associations for Science Education |
| Science & Education | Has foci of historical, philosophical, and sociological perspectives on science education | Associated with the International History, Philosophy, and Science Teaching Group |
| Journal of Science Teacher Education | Concerned with the preparation and development of science teachers | Associated with the Association for Science Teacher Education |
| International Journal of Science Education, Part B – Communication and Public Engagement | Concerned with research into science communication and public engagement / understanding of science | |
| Cultural Studies of Science Education | Concerned with science education as a cultural, cross-age, cross-class, and cross-disciplinary phenomenon | |
| Journal of Science Education and Technology | Concerns the intersection between science education and technology | |
| Disciplinary and Interdisciplinary Science Education Research | Concerned with science education within specific disciplines and between disciplines | Affiliated with the Faculty of Education, Beijing Normal University |
| Journal of Biological Education | For research specifically within biology education | Published for the Royal Society of Biology |
| Journal of Chemical Education | A long-standing journal of chemistry education, which includes a section for Chemistry Education Research papers | Published by the American Chemical Society |
| Chemistry Education Research and Practice | The leading research journal for chemistry education | Published by the Royal Society of Chemistry |
Some of the places to publish research in science education

I was recently asked which was the best journal in which to seek publication of science education research. This was a fair question, given that I had been warning of the large number of low quality journals now diluting the academic literature.

I had been invited to give a seminar talk to the Physics Education and Scholarship Section in the Department of Physics at Durham University. I had been asked to talk on the theme of 'Publishing research in science education'.

The talk considered the usual processes involved in submitting a paper to a research journal and the particular responsibilities involved for authors, editors and reviewers. In the short time available I said a little about ethical issues, including difficulties that can arise when scholars are not fully aware of, or decide to ignore, the proper understanding of academic authorship.1 I also discussed some of the specific issues that can arise when those with research training in the natural sciences undertake educational research without any further preparation (for example, see: Why do natural scientists tend to make poor social scientists?), such as underestimating the challenge of undertaking valid experiments in educational contexts.

I had not intended to offer advice on specific journals for the very good reasons that

  • there are a lot of journals
  • my experience of them is very uneven
  • I have biases!
  • knowledge of journals can quickly become out of date when publishers change policies, or editorial teams change

However, it was pointed out that there does not seem to be anywhere where such advice is readily available, so I made some comments based on my own experience. I later reflected that some such guidance could be useful, especially to those new to research in the area.

I do, in the 'Research methodology' section of the site, offer some advice to the new researcher on 'Publishing research', that includes some general advice on things to consider when thinking about where to send your work:

Read about 'Selecting a research journal: Selecting an outlet for your research articles'

Although I name check some journals there, I did not think I should offer strong guidance for the reasons I give above. However, taking on board the comment about the lack of guidance readily available, I thought I would make some suggestions here, with the full acknowledgement that this is a personal perspective, and that the comments facility below will allow other views and potential correctives to my biases! If I have missed an important journal, or seem to have made a misjudgement, then please tell me and (more importantly) other readers who may be looking for guidance.

Publishing in English?

My focus here is on English language journals. There are many important journals that publish in other languages such as Spanish. However, English is often seen as the international language for reporting academic research, and most of the journals with the greatest international reach work in the English language.

These journals publish work from all around the world, which therefore includes research into contexts where the language of instruction is NOT English, and where data is collected, and often analysed, in the local language. In these cases, reporting research in English requires translating material (curriculum materials, questions posed to participants, quotations from learners etc.) into English. That is perfectly acceptable, but translation is a skilled and nuanced activity, and needs to be acknowledged and reported, and some assurance of the quality of translation offered (Taber, 2018).

Read about guidelines for good practice regarding translation in reporting research

Science research journal or science education journal?

Sometimes science research journals will publish work on science education. However, not all science journals will consider this, and even for those that do, this tends to be an occasional event.

With the advent of open-access, internet-accessible publishing, some academic publishers are offering journals with very wide scope (presumably because, in the digital age, it is considered easier to find research without it needing to be in a specialist journal). However, authors should be wary of journals that have titles implying a specialist scientific focus but which seem to accept material from a wide range of fields, as this is one common indicator of predatory journals – that is, journals which do not use robust peer review (despite what they may claim) and have low quality standards.

Read about predatory journals

There are some scientific journals with an interdisciplinary flavour which are not education journals per se, but are open to suitable submissions on educational topics. The one I am most familiar with (disclosure of interest: I am on the Editorial Board) is Foundations of Chemistry (published by Springer).



Science Education Journal or Education Journal?

Then, there is the question of whether to publish work in specialist science education journals or one of the many more general education journals. (There are too many to discuss them here.) General education journals will sometimes publish work from within science education, as long as they feel it is of high enough general interest to their readership. This may in part be a matter of presentation – if the paper is written so it is only understandable to subject specialists, and only makes recommendations for specialists in science education, it is unlikely to seem suitable for a more general journal.

On the other hand, just because research has been undertaken in a science teaching and learning context, this may not make it of particular interest to science educators if the research aims, conceptualisation, conclusions and recommendations concern general educational issues, and anything that may be specific to science teaching and learning is ignored in the research – that is, if a science classroom was chosen just as a matter of convenience, but the work could have been just as well undertaken in a different curriculum context (Taber, 2013).

Research Journal or Professional Journal?

Another general question is whether it is best to send one's work to an academic research journal (offering more kudos for the author{s} if published) or a journal widely read by practitioners (but usually considered less prestigious when a scholar's academic record is examined for appointment and promotion). These different types of output usually have different expectations about the tone and balance of articles:

Read about Research journals and practitioner journals

Some work is highly theoretical, or is focussed on moving forward a research field – and is unlikely to be seen as suitable for a teacher's journal. Other useful work may have developed and evaluated new educational resources, but without critically exploring any educational questions in any depth. Information about this project would likely be of great interest to teachers, but is unlikely to meet the criteria to be accepted for publication in a research journal.

But what about a genuine piece of research that would be of interest to other researchers in the field, but also leads to strong recommendations for policy and practice? Here you do not have to choose one or other option. Although you cannot publish the same article in different journals, a research report sent to an academic journal and an article for teachers would be sufficiently different, with different emphases and weightings. For example, a professional journal does not usually want a critical literature review and discussion of details of data analysis, or long lists of references. But it may value vignettes that teachers can directly relate to, as well as exemplification of how recommendation might be followed through – information that would not fit in the research report.

Ideally, the research report would be completed and published first, and the article for the professional audience would refer to (and cite) this, so that anyone who does want to know more about the theoretical background and technical details can follow up.

Some examples of periodicals aimed at teachers (and welcoming work written by classroom teachers) include the School Science Review, (published by the Association for Science Education), Physics Education (published by the Institute of Physics) and the Royal Society of Chemistry's magazine Education in Chemistry. Globally, there are many publications of this kind, often with a national focus serving teachers working in a particular curriculum context by offering articles directly relevant to the specifics of the local education contexts.

The top science education research journals

Having established our work does fit in science education as a field, and would be considered academic research, we might consider sending it to one of these journals

  • International Journal of Science Education (IJSE)
  • Science Education (SE)
  • Journal of Research in Science Teaching (JRST)
  • Research in Science Education (RiSE)


To my mind these are the top general research journals in the field.

IJSE is the journal I have most worked with, having published quite a few papers in the journal, and having reviewed a great many. I have been on the Editorial Board for about 20 years, so I may be biased here.2 IJSE started as the European Journal of Science Education and has long had an association with the European Science Education Research Association (ESERA – not to be confused with ASERA).

Strictly this journal is now known as IJSE Part A, as there is also a Part B which has a particular focus on 'Communication and Public Engagement' (see below). IJSE is published by Taylor and Francis / Routledge.

SE is published by Wiley.

JRST is also published by Wiley, and is associated with NARST.

RISE is published by Springer, and is associated with the Australasian Science Education Research Association (ASERA – not to be confused with ESERA)

N.A.R.S.T. originally stood for the National Association for Research in Science Teaching, where the nation referred to was the USA. However, having re-branded itself as "a global organization for improving science teaching and learning through research" it is now simply known as NARST. In a similar way ESERA describes itself as "an European organisation focusing on research in science education with worldwide membership" and ASERA claims it "draws together researchers in science education from Australia, New Zealand and more broadly".


The top science education reviews journal

Another 'global' journal I hold in high esteem is Studies in Science Education (published by Taylor & Francis / Routledge).3

This journal, originally established at the University of Leeds and associated with the world famous Centre for Studies in Science Education 4, is the main reviews journal in science education. It publishes substantive, critical reviews of areas of science education, and some of the most influential articles in the field have been published here.

Studies in Science Education also has a tradition of publishing detailed scholarly book reviews.


In my view, getting your work published in any of these five journals is something to be proud of. I think people in many parts of the world tend to know IJSE best, but I believe that in the USA it is often considered to be less prestigious than JRST and SE. At one time RISE seemed to have a somewhat parochial focus, and (my impression is) attracted less work from outside Australasia and its region – but that has changed now. 'Studies' seems to be better known in some contexts than others, but it is the only high status general science education journal that publishes full-length reviews (both systematic, and thematic perspectives), with many of its contributions exceeding the normal word-length limits of other top science education journals. This is the place to send an article based on that literature review chapter that thesis examiners praised for its originality and insight!



There are other well-established general journals of merit, for example Research in Science and Technological Education (published by Taylor & Francis / Routledge, and originally based at the University of Hull) and the International Journal of Science and Maths Education (published by Springer, and founded by the National Science and Technology Council, Taiwan). The International Council of Associations for Science Education publishes Science Education International.

There are also journals with particular foci within the field of science education.

More specialist titles

There are also a number of well-regarded international research journals in science education with particular specialisms or flavours.


Science & Education (published by Springer) is associated with the International History, Philosophy, and Science Teaching Group 5, which as the name might suggest focuses on science education informed by the nature of science, and "publishes research using historical, philosophical, and sociological approaches in order to improve teaching, learning, and curricula in science and mathematics".


The Journal of Science Teacher Education (published by Taylor & Francis / Routledge), as the name suggests is concerned with the preparation and development of science teachers. The journal is associated with the USA based Association for Science Teacher Education.


As suggested above, IJSE has a companion journal (also published by Taylor & Francis / Routledge), International Journal of Science Education, Part B – Communication and Public Engagement


Cultural Studies of Science Education (published by Springer) has a particular focus on  science education "as a cultural, cross-age, cross-class, and cross-disciplinary phenomenon".


The Journal of Science Education and Technology (published by Springer) has a focus on the intersection between science education and technology.


Disciplinary and Interdisciplinary Science Education Research has a particular focus on science taught within and across disciplines. 6 Whereas most of the journals described here are now hybrid (which means articles will usually be behind a subscription/pay-wall, unless the author pays a publication fee), DISER is an open-access journal, with publication costs paid on behalf of authors by the sponsoring organisation: the Faculty of Education, Beijing Normal University.

This relatively new journal reflects the increasing awareness of the importance of cross-disciplinary, interdisciplinary and transdisciplinary research in science itself. This is also reflected in notions of whether (or to what extent) science education should be considered part of a broader STEM education, and there are now journals styled as STEM education journals.


Science as part of STEM?

Read about STEM in the curriculum


Research within teaching and learning disciplines

Whilst both the Institute of Physics and the American Institute of Physics publish physics education journals (Physics Education and The Physics Teacher, respectively) neither publishes full length research reports of the kind included in research journals. The American Physical Society does publish Physical Review Physics Education Research as part of its set of Physical Review Journals. This is an on-line journal that is Open Access, so authors have to pay a publication fee.


The Journal of Biological Education (published by Taylor and Francis/Routledge) is the education journal of the Royal Society of Biology.


The Journal of Chemical Education is a long-established journal published by the American Chemical Society. It is not purely a research journal, but it does have a section for educational research and has published many important articles in the field. 7


Chemistry Education Research and Practice (published by the Royal Society of Chemistry, RSC) is purely a research journal, and can be considered the top international journal for research specifically in chemistry education. (Perhaps this is why there is a predatory journal knowingly called the Journal of Chemistry Education Research and Practice)

As CERP is sponsored by the RSC (which as a charity looks to use income to support educational and other valuable work), all articles in CERP are accessible for free on-line, but there are no publication charges for authors.


Not an exhaustive list!

These are the journals I am most familiar with, which focus on science education (or a science discipline education), publish serious peer-reviewed research papers, and can be considered international journals.

I know there are other discipline-based journals (e.g., biochemistry education, geology education) and indeed I expect there are many worthwhile places to publish that have slipped my mind or about which I am ignorant. Many regional or national journals have high standards and publish much good work. However, when it comes to research papers (rather than articles aimed primarily at teachers) academics usually get more credit when they publish in higher status international journals. It is these outlets that can best attract highly qualified editors and reviewers, and so peer review feedback tends to be most helpful,8 and the general standard of published work tends to be of a decent quality – both in terms of technical aspects, and its significance and originality.

There is no reason why work published in English is more important than work published in other languages, but the wide convention of publishing research for an international audience in English means that work published in English language journals probably gets wider attention globally. I have published a small number of pieces in other languages, but am primarily limited by my own restricted competence to only one language. This reflects my personal failings more than the global state of science education publishing!

A personal take – other viewpoints are welcome

So, this is my personal (belated) response to the question about where one should seek to publish research in science education. I have tried to give a fair account, but it is no doubt biased by my own experiences (and recollections), and so inadvertently subject to distortions and omissions.

I welcome any comments (below) to expand upon, or seek to correct, my suggested list, which might indeed make this a more useful listing for readers who are new to publishing their work. If you have had good (or bad) experiences with science education journals included in, or omitted from, my list, please share…


Sources cited:

Notes

1 Academic authorship is understood differently to how the term 'author' is usually used: in most contexts, the author is the person who prepared (wrote, typed, dictated) a text. In academic research, the authors of the research paper are those who made a substantial direct intellectual contribution to the work being reported. That is, an author need not contribute to the writing-up phase (though all authors should approve the text) as long as they have made a proper contribution to the substance of the work. Most journals have clear expectations that all deserving authors, and only those people, should be named as authors.

Read about academic authorship


2 For many years the journal was edited by the late Prof. John Gilbert, whom I first met sometime in the 1984-5 academic year when I applied to join the University of Surrey/Roehampton Institute part-time teachers' programme in the Practice of Science Education, and he – as one of the course directors – interviewed me. I was later privileged to work with John on some projects – so this might be considered as a 'declaration of interest'.


3 Again, I must declare an interest. For some years I acted as the Book Reviews editor for the journal.


4 The centre was the base for the highly influential Children's Learning in Science Project which undertook much research and publication in the field under the Direction of the late Prof. Ros Driver.


5 Another declaration of interest: at the time of writing I am on the IHPST Advisory Board for the journal.


6 Declaration of interest: I am a member of the DISER's Editorial Board


7 I have recently shown some surprise at one research article published in JChemEd where major problems seem to have been missed in peer review. This is perhaps simply an aberration, or may reflect the challenge of including peer-reviewed academic research in a hybrid publication that also publishes a range of other kinds of articles.


8 Peer-review evaluates the quality of submissions, in part to inform publication decisions, but also to provide feedback to authors on areas where they can improve a manuscript prior to publication.

Read about peer review




Experimental pot calls the research kettle black

Do not enquire as I do, enquire as I tell you


Keith S. Taber


Sotakova, Ganajova and Babincakova (2020) rightly criticised experiments into enquiry-based science teaching on the grounds that such studies often used control groups where the teaching methods had "not been clearly defined".

So, how did they respond to this challenge?

Consider a school science experiment where students report comparing the rates of reaction of 1 cm strips of magnesium ribbon dropped into:
(a) 100 ml of hydrochloric acid of 0.2 mol/dm3 concentration at a temperature of 28 ˚C; and
(b) some unspecified liquid.


This is a bit like someone who wants to check they are not diabetic, but – being worried they are – dips the test strip in a glass of tap water rather than their urine sample.


Basic premises of scientific enquiry and reporting are that

  • when carrying out an experiment one should carefully manage the conditions (which is easier in laboratory research than in educational enquiry) and
  • one should offer detailed reports of the work carried out.

In science there is an ideal that a research report should be detailed enough to allow other competent researchers to repeat the original study and verify the results reported. That repeating and checking of existing work is referred to as replication.

Replication in science

In practice, replication is more problematic for both principled and pragmatic reasons.

It is difficult to communicate tacit knowledge

It has been found that when a researcher develops some new technique, the official report in the literature is often inadequate to allow researchers elsewhere to repeat the work based only on the published account. The sociologist of science, Harry Collins (1992) has explored how there may be minor (but critical) details about the setting-up of apparatus or laboratory procedures that the original researchers did not feel were significant enough to report – or even that the researchers had not been explicitly aware of. Replication may require scientists to physically visit each others' laboratories to learn new techniques.

This should not be surprising, as the chemist and philosopher Michael Polanyi (1962/1969) long ago argued that science relied on tacit knowledge (sometimes known as implicit knowledge) – a kind of green fingers of the laboratory where people learn ways of doing things more as a kind of 'muscle memory' than formal procedural rules.

Novel knowledge claims are valued

The other problem with replication is that there is little to be gained for scientists by repeating other people's work if they believe it is sound, as journals put a premium on research papers that claim to report original work. Even if it proves possible to publish a true replication (at best, in a less prestigious journal), the replication study will just be an 'also ran' in the scientific race.


Copies need not apply!

Scientific kudos and rewards go to those who produce novel work: originality is a common criterion used when evaluating reports submitted to research journals

(Image by Tom from Pixabay)


Historical studies (Shapin & Schaffer, 2011) show that what actually tends to happen is that scientists – deliberately – do not exactly replicate published studies, but rather make adjustments to produce a modified version of the reported experiment. A scientist's mindset is not to confirm, but to seek a new, publishable, result:

  • they say it works for tin, so let's try manganese?
  • they did it in frogs, let's see if it works in toads?
  • will we still get that effect closer to the boiling point?
  • the outcome in broad spectrum light has been reported, but might monochromatic light of some particular frequency be more efficient?
  • they used glucose, we can try fructose

This extends (or finds the limits of) the range of application of scientific ideas, and allows the researchers to seek publication of new claims.

I have argued that the same logic is needed in experimental studies of teaching approaches, but this requires researchers detailing the context of their studies rather better than many do (e.g., not just 'twelve year olds in a public school in country X'),

"When there is a series of studies testing the same innovation, it is most useful if collectively they sample in a way that offers maximum information about the potential range of effectiveness of the innovation. There are clearly many factors that may be relevant. It may be useful for replication studies of effective innovations to take place with groups of different socio-economic status, or in different countries with different curriculum contexts, or indeed in countries with different cultural norms (and perhaps very different class sizes; different access to laboratory facilities) and languages of instruction …It may be useful to test the range of effectiveness of some innovations in terms of the ages of students, or across a range of quite different science topics. Such decisions should be based on theoretical considerations.

…If all existing studies report positive outcomes, then it is most useful to select new samples that are as different as possible from those already tested…When existing studies suggest the innovation is effective in some contexts but not others, then the characteristics of samples/context of published studies can be used to guide the selection of new samples/contexts (perhaps those judged as offering intermediate cases) that can help illuminate the boundaries of the range of effectiveness of the innovation."

Taber, 2019, pp.104-105

When scientists do relish replication

The exception, that tests the 'scientists do not simply replicate' rule, is when it is suspected that a research finding is wrong. Then, an attempt at replication might be used to show a published account is flawed.

For example, when 'cold fusion' was announced with much fanfare (ahead of the peer reviewed publications reporting the research) many scientists simply thought it was highly unlikely that atomic energy generation was going to be possible in fairly standard glassware (not that unlike the beakers and flasks used in school science) at room temperature, and so that there was a challenge to find out what the original researchers had got wrong.

"When it was claimed that power could be generated by 'cold fusion', scientists did not simply accept this, but went about trying it for themselves…Over a period of time, a (near) consensus developed that, when sufficient precautions were made to measure energy inputs and outputs accurately, there was no basis for considering a new revolutionary means of power generation had been discovered.

Taber, 2020, p.18

Of course, one failed replication might just mean the second team did not quite do the experiment correctly, so it may take a series of failed replications to make the point. In this situation, being the first failed replication of many (so being first to correct the record in the literature) may bring prestige – but this also invites the risk of being the only failed replication (so, perhaps, being judged a poorly executed replication) if subsequently other researchers confirm the findings of the original study!

So, a single attempt at replication is neither enough to definitively verify nor to reject a published result. What all this does show is that the simple notion that there are crucial or critical experiments in science which once reported immediately 'prove' something for all time is a naïve oversimplification of how science works.

Experiments in education

Experiments are often the best way to test ideas about natural phenomena. They tend to be much less useful in education as there are often many potentially relevant variables that usually cannot be measured, let alone controlled, even if they can be identified.

  • Without proper control, you do not have a meaningful experiment.
  • Without a detailed account of the different treatments, and so how the comparison condition is different from the experimental condition, you do not have a useful scientific report, but little more than an anecdote.
Challenges of experimental work in classrooms

Despite this, the research literature includes a vast number of educational studies claiming to be experiments to test this innovation or that (Taber, 2019). Some are very informative. But many are so flawed in design or execution that their conclusions rely more on the researchers' expectations than a logical chain of argument from robust evidence. They often use poorly managed experimental conditions to find differences in learning outcomes between groups of students that are initially not equivalent. 1 (Poorly managed?: because there are severe – practical and ethical – limits on the variables you can control in a school or college classroom.)

Read about expectancy effects in research

Statistical tests are then used which would be informative had there been a genuinely controlled experiment with identical starting points and only the variable of interest being different in the two conditions. Results are claimed by ignoring the inconvenient fact that studies use statistical tests that, strictly, do not apply in the actual conditions studied! Worse than this, occasionally the researchers think they should have got a positive result and so claim one even when the statistical tests suggest otherwise (e.g., read 'Falsifying research conclusions')! In order to try and force a result, a supposed innovation may be compared with control conditions that have been deliberately framed to ensure the learners in that condition are not taught well!

Read about unethical control conditions

A common problem is that it is not possible to randomise students to conditions, so classes, rather than individual learners, are assigned to treatments randomly. As there are usually only a few classes in each condition (indeed, often only one class in each condition) there are not enough 'units of analysis' to validly use statistical tests. A common solution to this common problem is…to do the tests anyway, as if there had been randomisation of learners. 2 The computer that crunches the numbers follows a programme that has been written on the assumption researchers will not cheat, so it churns out statistical results and (often) reports significant outcomes due to a misuse of the tests. 3

This is a bit like someone who wants to check they are not diabetic, but being worried they are, dips the test strip in a glass of tap water rather than their urine sample. They cannot blame the technology for getting it wrong if they do not follow the proper procedures.

I have been trying to make a fuss about these issues for some time, because a lot of the results presented in the educational literature are based upon experimental studies that, at best, do not report the research in enough detail, and often, when there is enough detail to be scrutinised, fall well short of valid experiments.

I have a hunch that many people with scientific training are so convinced of the superiority of the experimental method, that they tacitly assume it is better to do invalid experiments into teaching, than adopt other approaches which (whilst not as inherently convincing as a well-designed and executed experiment) can actually offer useful insights in the complex and messy context of classrooms. 4

Read: why do natural scientists tend to make poor social scientists?

So, it is uplifting when I read work which seems to reflect my concerns about the reliance on experiments in those situations where good experiments are not feasible. In that regard, I was reading a paper reporting a study into enquiry-based teaching (Sotakova, Ganajova & Babincakova, 2020) where the authors made the very valid criticism:

"The ambiguous results of research comparing IBSE [enquiry-based science education] with other teaching methods may result from the fact that often, [sic] teaching methods used in the control groups have not been clearly defined, merely referred to as "traditional teaching methods" with no further specification, or there has been no control group at all."

Sotakova, Ganajova & Babincakova, 2020, p.500

Quite right!


The pot calling the kettle black

idiom "that means people should not criticise someone else for a fault that they have themselves" 5 (https://dictionary.cambridge.org/dictionary/english/pot-calling-the-kettle-black)

(Images by OpenClipart-Vectors from Pixabay)


Now, I do not want to appear to be the pot calling the kettle black myself, so before proceeding I should acknowledge that I was part of a major funded research project exploring a teaching innovation in lower secondary science and maths teaching. Despite a large grant, the need to enrol a sufficient number of classes to randomise to treatments to allow statistical testing meant that we had very limited opportunities to observe, and so detail, the teaching in the control condition, which was basically the teachers doing their normal teaching, whilst the teachers of the experimental classes were asked to follow a particular scheme of work.


Results from a randomised trial showing the range of within-condition outcomes (After Figure 5, Taber, 2019)

In the event, the electricity module I was working on produced almost identical mean outcomes to the control condition (see the figure). The spread of outcomes was large in both sets of conditions – so, clearly, there were significant differences between individual classes that influenced learning: but these differences were even more extreme in the condition where the teachers were supposed to be teaching the same content, in the same order, with the same materials and activities, than in the control condition where teachers were free to do whatever they thought best!

The main thing I learned from this experience is that experiments into teaching are highly problematic.

Anyway, Sotakova, Ganajova and Babincakova were quite right to point out that experiments with poorly defined control conditions are inadequate. Consider a school science experiment designed by students who report comparing the rates of reaction of 1 cm strips of magnesium ribbon dropped into

  • (a) 100 ml of hydrochloric acid of 0.2 mol/dm3 concentration at a temperature of 28 ˚C; and
  • (b) some unspecified liquid.

A science teacher might be disappointed with the students concerned, given the limited informativeness of such an experiment – yet highly qualified science education researchers often report analogous experiments where some highly specified teaching is compared with instruction that is not detailed at all.

The pot decides to follow the example of the kettle

So, what did Sotakova and colleagues do?

"Pre-test and post-test two-group design was employed in the research…Within a specified period of time, an experimental intervention was performed within the experimental group while the control group remained unaffected. The teaching method as an independent variable was manipulated to identify its effect on the dependent variable (in this case, knowledge and skills). Both groups were tested using the same methods before and after the experiment…both groups proceeded to revise the 'Changes in chemical reactions' thematic unit in the course of 10 lessons"

Sotakova, Ganajova & Babincakova, 2020, pp.501, 505.

In the experimental condition, enquiry-based methods were used in five distinct activities as a revision approach (an example activity is detailed in the paper). What about the control conditions?

"…in the control group IBSE was not used at all…In the control group, teachers revised the topic using methods of their choice, e.g. questions & answers, oral and written revision, textbook studying, demonstration experiments, laboratory work."

Sotakova, Ganajova & Babincakova, 2020, pp.502, 505

So, the 'control' condition involved the particular teachers in that condition doing as they wished. The only control seems to be that they were asked not to use enquiry. Otherwise, anything went – and that anything was not necessarily typical of what other teachers might have done. 6

This might have involved any of a number of different activities, such as

  • questions and answers
  • oral and written revision
  • textbook studying
  • demonstration experiments
  • laboratory work

or combinations of them. Call me picky (or a blackened pot), but did these authors not complain that

"The ambiguous results of research comparing IBSE [enquiry-based science education] with other teaching methods may result from the fact that often…teaching methods used in the control groups have not been clearly defined…"

Sotakova, Ganajova & Babincakova, 2020, p.500

Hm.


Work cited

Notes:

1 A very common approach is to use a pre-test to check for significant differences between classes before the intervention. Where differences between groups do not reach the usual criterion for being statistically significant (probability, p<0.05) the groups are declared 'equivalent'. That is, a negative result in a test for unlikely differences is treated inappropriately as an indicator of equivalence (Taber, 2019).

Read about testing for initial equivalence
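As a minimal illustration of the point (using invented test scores, not data from any actual study), a pre-test comparison of two small classes can easily fail to reach statistical significance even when the class means are quite different:

```python
# 'Not significantly different' is not the same as 'equivalent': with small
# groups a test like this has little power to detect real differences.
from scipy import stats

class_a = [55, 60, 65, 70, 75]   # mean 65 (invented scores)
class_b = [62, 67, 72, 77, 82]   # mean 72 - seven marks higher

t, p = stats.ttest_ind(class_a, class_b)
print(f"t = {t:.2f}, p = {p:.2f}")   # p is roughly 0.2: 'no significant difference',
                                     # yet the classes are hardly equivalent
```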


2 So, for example, a valid procedure may be to enter the mean class scores on some instrument as data, but what are actually entered are the individual students' scores, as though the students can be treated as independent units rather than members of a treatment class.

Some statistical tests lead to a number (the statistic) which is then compared with the critical value that reaches statistical significance as listed in a table. The number in the table selected depends on the number of 'degrees of freedom' in the experimental design. Often that should be determined by the number of classes involved in the experiment – but if instead the number of learners is used, a much smaller value of the calculated statistic will seem to reach significance.
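To illustrate why this matters, here is a minimal simulation sketch (with invented values for the between-class and within-class variation, not drawn from any real study). Two treatments are simulated as equally effective, yet analysing individual pupils as though they had been independently randomised produces spurious 'significant' differences far more often than the nominal 5%:

```python
# Units-of-analysis problem: classes, not pupils, are assigned to conditions,
# but the test below treats every pupil as an independent unit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_condition(n_classes=3, pupils_per_class=30):
    # each class has its own effect (teacher, intake, timetable...) on top of pupil variation
    return [60 + rng.normal(0, 5) + rng.normal(0, 10, pupils_per_class)
            for _ in range(n_classes)]

trials, false_positives = 1000, 0
for _ in range(trials):
    experimental, control = one_condition(), one_condition()   # no real treatment effect
    _, p = stats.ttest_ind(np.concatenate(experimental), np.concatenate(control))
    false_positives += (p < 0.05)

print(f"'Significant' results in {false_positives / trials:.0%} of trials")   # well above 5%
```

An analysis using the class means as the units (three per condition) avoids this inflation, but then has so few degrees of freedom that it cannot support strong statistical claims – which is precisely the problem with small-scale experimental designs.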


3 Some of these studies would surely have given positive outcomes even if they had been able to randomise students to conditions or used a robust test for initial equivalence – but we cannot use that as a justification for ignoring the flaws in the experiment. That would be like claiming a laboratory result was obtained with dilute acid when actually concentrated acid was used – and then justifying the claim by arguing that the same result might have occurred with dilute acid.


4 Consider, for example, a case study that involves researchers in observing teaching, interviewing students and teachers, documenting classroom activities, recording classroom dialogue, collecting samples of student work, etc. This type of enquiry can offer a good deal of insight into the quality of teaching and learning in the class and the processes at work during instruction (and so whether specific outcomes seem to be causally linked to features of the innovation being tested).

Critics of so-called qualitative methods quite rightly point out that such approaches cannot actually show any one approach is better than others – only experiments can do that. Ideally, we need both types of study as they complement each other offering different kinds of information.

The problem with many experiments reported in the education literature is that because of the inherent challenges of setting up genuinely fair testing in educational contexts they are not comparing like with like, and often it is not even clear what the comparison is with! Probably this can only be avoided in very large scale (and so expensive) studies where enough different classrooms can be randomly assigned to each condition to allow statistics to be used.

Why do researchers keep undertaking small scale experimental studies that often lack proper initial equivalence between conditions, and that often have inadequate control of variables? I suggest they will continue to do so as long as research journals continue to publish the studies (and allow them to claim definitive conclusions) regardless of their problems.


5 At a time when cooking was done on open fires, using wood that produced much smoke, the idiom was likely easily understood. In an age of ceramic hobs and electric kettles the saying has become anachronistic.

From the perspective of thermal physics, black cooking pots (rather than shiny reflective surfaces) may be a sensible choice.


6 So, the experimental treatment was being compared with the current standard practice of the teachers assigned to the control condition. It would not matter so much that this varies between teachers, nor that we do not know what that practice is, if we could be confident that the teachers in the control condition were (or were very probably) a representative sample of the wider population of teachers – such as a sufficiently large number of teachers randomly chosen from the wider population (Taber, 2019). Then we would at least know whether the enquiry based approach was an improvement on current common practice.

All we actually know is how the experimental condition fared in comparison with the unknown practices of a small number of teachers who may or may not have been representative of the wider population.

Falsifying research conclusions

You do not need to falsify your results if you are happy to draw conclusions contrary to the outcome of your data analysis.


Keith S. Taber


Li and colleagues claim that their innovation is successful in improving teaching quality and student learning: but their own data analysis does not support this.

I recently read a research study evaluating a teaching innovation where the authors

  • presented their results,
  • reported the statistical test they had used to analyse their results,
  • acknowledged that the outcome of their experiment was negative (not statistically significant), then
  • stated their findings as having obtained a positive outcome, and
  • concluded their paper by arguing they had demonstrated their teaching innovation was effective.

Li, Ouyang, Xu and Zhang's (2022) paper in the Journal of Chemical Education contravenes the scientific norm that your conclusions should be consistent with the outcome of your data analysis.

And this was not in a paper in one of those predatory journals that I have criticised so often here – this was a study in a well regarded journal published by a learned scientific society!

The legal analogy

I have suggested (Taber, 2013) that writing up research can be understood in terms of a number of metaphoric roles: researchers need to

  • tell the story of their research;
  • teach readers about the unfamiliar aspects of their work;
  • make a case for the knowledge claims they make.

Three metaphors for writing-up research

All three aspects are important in making a paper accessible and useful to readers, but arguably the most important aspect is the 'legal' analogy: a research paper is an argument to make a claim for new public knowledge. A paper that does not make its case does not add anything of substance to the literature.

Imagine a criminal case where the prosecution seeks to make its argument at a pre-trial hearing:

"The police found fingerprints and D.N.A. evidence at the scene, which they believe were from the accused."

"Were these traces sent for forensic analysis?"

"Of course. The laboratory undertook the standard tests to identify who left these traces."

"And what did these analyses reveal?"

"Well according to the current standards that are widely accepted in the field, the laboratory was unable to find a definite match between the material collected at the scene, and fingerprints and a D.N.A. sample provided by the defendant."

"And what did the police conclude from these findings?"

"The police concluded that the fingerprints and D.N.A. evidence show that the accused was at the scene of the crime."

It seems unlikely that such a scenario has ever played out, at least in any democratic country where there is an independent judiciary, as the prosecution would be open to ridicule and it is quite likely the judge would have some comments about wasting court time. What would seem even more remarkable, however, would be if the judge decided on the basis of this presentation that there was a prima facie case to answer that should proceed to a full jury trial.

Yet in educational research, it seems parallel logic can be persuasive enough to get a paper published in a good peer-reviewed journal.

Testing an educational innovation

The paper was entitled 'Implementation of the Student-Centered Team-Based Learning Teaching Method in a Medicinal Chemistry Curriculum' (Li, Ouyang, Xu & Zhang, 2022), and it was published in the Journal of Chemical Education. 'J.Chem.Ed.' is a well-established, highly respected periodical that takes peer review seriously. It is published by a learned scientific society – the American Chemical Society.

That a study published in such a prestigious outlet should have such a serious and obvious flaw is worrying. Of course, no matter how good editorial and peer review standards are, it is inevitable that sometimes work with serious flaws will get published, and it is easy to pick out the odd problematic paper while ignoring the vast majority of quality work being published. But, I did think this was a blatant problem that should have been spotted.

Indeed, because I have a lot of respect for the Journal of Chemical Education I decided not to blog about it ("but that is what you are doing…?"; yes, but stick with me) and to take time to write a detailed letter to the journal setting out the problem in the hope this would be acknowledged and the published paper would not stand unchallenged in the literature. The journal declined to publish my letter although the referees seemed to generally accept the critique. This suggests to me that this was not just an isolated case of something slipping through – but a failure to appreciate the need for robust scientific standards in publishing educational research.

Read the letter submitted to the Journal of Chemical Education

A flawed paper does not imply worthless research

I am certainly not suggesting that there is no merit in Li, Ouyang, Xu and Zhang's work. Nor am I arguing that their work was not worth publishing in the journal. My argument is that Li and colleagues' paper draws an invalid conclusion, and makes misleading statements inconsistent with the research data presented, and that it should not have been published in this form. These problems are pretty obvious, and should (I felt) have been spotted in peer review. The authors should have been asked to address these issues, and to follow normal scientific standards and norms such that their conclusions follow from, rather than contradict, their results.

That is my take. Please read my reasoning below (and the original study if you have access to J.Chem.Ed.) and make up your own mind.

Li, Ouyang, Xu and Zhang report an innovation in a university course. They consider this to have been a successful innovation, and it may well have great merits. The core problem is that Li and colleagues claim that their innovation is successful in improving teaching quality and student learning, when their own data analysis does not support this.

The evidence for a successful innovation

There is much material in the paper on the nature of the innovation, and there is evidence about student responses to it. Here, I am only concerned with the failure of the paper to offer a logical chain of argument to support their knowledge claim that the teaching innovation improved student achievement.

There are (to my reading – please judge for yourself if you can access the paper) some slight ambiguities in some parts of the description of the collection and analysis of achievement data (see note 5 below), but the key indicator relied on by Li, Ouyang, Xu and Zhang is the average score achieved by students in four teaching groups, three of which experienced the teaching innovation (these are denoted collectively as 'the experimental group') and one which did not (denoted as 'the control group', although there is no control of variables in the study 1). Each class comprised 40 students.

The study is not published open access, so I cannot reproduce the copyright figures from the paper here, but below I have drawn a graph of these key data:


Key results from Li et al, 2022: these data were the basis for claiming an effective teaching innovation.


It is on the basis of this set of results that Li and colleagues claim that "the average score showed a constant upward trend, and a steady increase was found". Surely, anyone interrogating these data might have pause to wonder if that is the most authentic description of the pattern of scores year on year.

Does anyone teaching in a university really think that assessment methods are good enough to produce average class scores that are meaningful to 3 or 4 significant figures? To a more reasonable level of precision, the nearest percentage point (which is presumably what these numbers are – that is not made explicit), the results were:


Cohort    Average class score
2017      80
2018      80
2019      80
2020      80

Average class scores (2 s.f.) year on year

When presented to a realistic level of precision, the obvious pattern is…no substantive change year on year!

A truncated graph

In their paper, Li and colleagues do present a graph to compare the average results in 2017 with (not 2018, but) 2019 and 2020, somewhat similar to the one I have reproduced here, which should have made it very clear how little the scores varied between cohorts. However, Li and colleagues did not include on their axis the full range of possible scores, but rather only a small portion of that range – from 79.4 to 80.4.

This is a perfectly valid procedure often used in science, and it is quite explicitly done (the x-axis is clearly marked), but it does give a visual impression of a large spread of scores, which could be quite misleading. In effect, their Figure 4b includes just a sliver of my graph above, as shown below. If one takes the portion of the image below that is not greyed out, and stretches it to cover the full extent of the x-axis of a graph, that is what is presented in the published account.


In the paper in J.Chem.Ed., Li and colleagues (2022) truncate the scale on their average score axis to expand 1% of the full range (approximated above in the area not shaded over) into a whole graph as their Figure 4b. This gives a visual impression of widely varying scores (to anyone who does not read the axis labels).
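A small plotting sketch makes the point: the four cohort averages below are those reported in the paper (the 2018-2020 values are derived from the increments quoted by the authors), but the layout is mine and is not a reproduction of their Figure 4b.

    # Plotting the same four cohort averages against the full range of possible
    # scores and against a truncated range. Values are taken from Li et al. (2022);
    # the figure itself is illustrative, not a reproduction of their Figure 4b.
    import matplotlib.pyplot as plt

    cohorts = ["2017", "2018", "2019", "2020"]
    scores = [79.8, 79.91, 80.12, 80.34]

    fig, (ax_full, ax_zoom) = plt.subplots(1, 2, figsize=(8, 3))

    ax_full.bar(cohorts, scores)
    ax_full.set_ylim(0, 100)          # full range of possible scores
    ax_full.set_title("Full scale")

    ax_zoom.bar(cohorts, scores)
    ax_zoom.set_ylim(79.4, 80.4)      # truncated range, as in the published figure
    ax_zoom.set_title("Truncated scale")

    for ax in (ax_full, ax_zoom):
        ax.set_ylabel("Average class score (%)")

    plt.tight_layout()
    plt.show()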


What might have caused those small variations?

If anyone does think that differences of a few tenths of a percent in average class scores are notable, and that this demonstrates increasing student achievement, then we might ask what causes this?

Li and colleagues seem to be convinced that the change in teaching approach caused the (very modest) increase in scores year on year. That would be possible. (Indeed, Li et al seem to be arguing that the very, very modest shift from 2017 to subsequent years was due to the change of teaching approach; but the not-quite-so-modest shifts from 2018 to 2019 to 2020 are due to developing teacher competence!) However, drawing that conclusion requires making a ceteris paribus assumption: that all other things are equal. That is, that any other relevant variables have been controlled.

Read about confounding variables

Another possibility however is simply that each year the teaching team are more familiar with the science, and have had more experience teaching it to groups at this level. That is quite reasonable and could explain why there might be a modest increase in student outcomes on a course year on year.

Non-equivalent groups of students?

However, a big assumption here is that each of the year groups can be considered to be intrinsically the same at the start of the course (and to have equivalent relevant experiences outside the focal course during the programme). Often in quasi-experimental studies (where randomisation to conditions is not possible 1) a pre-test is used to check for equivalence prior to the innovation: after all, if students are starting from different levels of background knowledge and understanding then they are likely to score differently at the end of a course – and no further explanation of any measured differences in course achievement need be sought.

Read about testing for initial equivalence

In experiments, you randomly assign the units of analysis (e.g., students) to the conditions, which gives some basis for at least comparing any differences in outcomes with the variations likely by chance. But this was not a true experiment as there was no randomisation – the comparisons are between successive year groups.
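For contrast, here is a minimal sketch (with hypothetical student identifiers) of what random assignment to two conditions would look like; it is precisely this step that a comparison of successive intact year groups cannot include.

    # A minimal sketch, with hypothetical identifiers, of randomly assigning
    # students to two conditions - the feature that makes a design a true experiment.
    import random

    students = [f"student_{i:02d}" for i in range(1, 81)]  # 80 hypothetical students
    random.shuffle(students)

    experimental = students[:40]  # 40 randomly assigned to the innovation
    control = students[40:]       # 40 randomly assigned to existing practice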

In Li and colleagues' study, the 40 students taking the class in 2017 are implicitly assumed equivalent to the 40 students taking the class in each of the years 2018-2020: but no evidence is presented to support this assumption. 3

Yet anyone who has taught the same course over a period of time knows that even when a course is unchanged and the entrance requirements stable, there are naturally variations from one year to the next. That is one of the challenges of educational research (Taber, 2019): you never can "take two identical students…two identical classes…two identical teachers…two identical institutions".

Novelty or expectation effects?

We would also have to ignore any difference introduced by the general effect of there being an innovation beyond the nature of the specific innovation (Taber, 2019). That is, students might be more attentive and motivated simply because this course does things differently to their other current courses and past courses. (Perhaps not, but it cannot be ruled out.)

The researchers are likely enthusiastic for, and had high expectations for, the innovation (so high that it seems to have biased their interpretation of the data and blinded them to the obvious problems with their argument) and much research shows that high expectation, in its own right, often influences outcomes.

Read about expectancy effects in studies

Equivalent examination questions and marking?

We also have to assume the assessment was entirely equivalent across the four years. 4 The scores were based on aggregating a number of components:

"The course score was calculated on a percentage basis: attendance (5%), preclass preview (10%), in-class group presentation (10%), postclass mind map (5%), unit tests (10%), midterm examination (20%), and final examination (40%)."

Li, et al, 2022, p.1858

This raises questions about the marking and the examinations:

  • Are the same test and examination questions used each year (that is not usually the case as students can acquire copies of past papers)?
  • If not, how were these instruments standardised to ensure they were not more difficult in some years than others?
  • How reliable is the marking? (Reliable meaning the same scores/mark would be assigned to the same work on a different occasion.)

These various issues do not appear to have been considered.

Change of assessment methodology?

The description above of how the students' course scores were calculated raises another problem. The 2017 cohort were taught by "direct instruction". This is not explained, as the authors presumably think we all know exactly what that is: I imagine lectures. By comparison, in the innovation (2018-2020 cohorts):

"The preclass stage of the SCTBL strategy is the distribution of the group preview task; each student in the group is responsible for a task point. The completion of the preview task stimulates students' learning motivation. The in-class stage is a team presentation (typically PowerPoint (PPT)), which promotes students' understanding of knowledge points. The postclass stage is the assignment of team homework and consolidation of knowledge points using a mind map. Mind maps allow an orderly sorting and summarization of the knowledge gathered in the class; they are conducive to connecting knowledge systems and play an important role in consolidating class knowledge."

Li, et al, 2022, p.1856, emphasis added.

Now the assessment of the preview tasks, the in-class group presentations, and the mind maps all contributed to the overall student scores (10%, 10%, 5% respectively). But these are parts of the innovative teaching strategy – they are (presumably) not part of 'direct instruction'. So, the description of how the student class scores were derived only applies to 2018-2020, and the methodology used in 2017 must have been different. (This is not discussed in the paper.) 5

A quarter of the score for the 'experimental' groups came from assessment components that could not have been part of the assessment regime applied to the 2017 cohort. At the very least, the tests and examinations must have been more heavily weighted in the 'control' group students' overall scores. This makes it very unlikely the scores can be meaningfully directly compared from 2017 to subsequent years: if the authors think otherwise they should have presented persuasive evidence of equivalence.


Li and colleagues want to convince us that variations in average course scores can be assumed to be due to a change in teaching approach – even though there are other confounding variables.

So, groups that we cannot assume are equivalent are assessed in ways that we cannot assume to be equivalent and obtain nearly identical average levels of achievement. Despite that, Li and colleagues want to persuade us that the very modest differences in average scores between the 'control' and 'experimental' groups (which is actually larger between different 'experimental group' cohorts than between the 'control' group and the successive 'experimental' cohort) are large enough to be significant and demonstrate their teaching innovation improves student achievement.

Statistical inference

So, even if we thought shifts of less than a 1% average in class achievement were telling, there are no good reasons to assume they are down to the innovation rather than some other factor. But Li and colleagues use statistical tests to tell them whether differences between the 'control' and 'experimental' conditions are significant. They find – just what anyone looking at the graph above would expect – "there is no significant difference in average score" (p.1860).
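Setting the t statistics quoted in the paper (see note 5 below) against the critical value the authors themselves cite makes the point starkly; in the sketch below, the pairing of each statistic with a particular cohort comparison is my assumption, as the paper does not spell it out.

    # The t statistics and critical value (t_alpha = 2.024, two-tailed, p < 0.05) are
    # those reported by Li et al. (2022); the labels pairing each statistic with a
    # particular cohort comparison are an assumption for illustration.
    reported_t = {"2017 vs 2018": 0.0663, "2017 vs 2019": 0.1930, "2017 vs 2020": 0.3279}
    t_alpha = 2.024

    for comparison, t in reported_t.items():
        print(f"{comparison}: t = {t}, significant at p < 0.05? {abs(t) > t_alpha}")
    # All three fall far short of the critical value: no significant difference.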

The scientific convention in using such tests is that the choice of test, and confidence level (e.g., a probability of p<0.05 to be taken as significant), is determined in advance, and the researchers accept the outcomes of the analysis. There is a kind of contract involved – a decision to use a statistical test (chosen in advance as being a valid way of deciding the outcome of an experiment) is seen as a commitment to accept its outcomes. 2 This is a form of honesty in scientific work. Just as it is not acceptable to fabricate data, nor is it acceptable to ignore experimental outcomes when drawing conclusions from research.

Special pleading is allowed in mitigation (e.g., "although our results were non-significant, we think this was due to the small sample sizes, and suggest that further research should be undertaken with larger groups {and we are happy to do this if someone gives us a grant}"), but the scientist is not allowed to simply set aside the results of the analysis.


Li and colleagues found no significant difference between the two conditions, yet that did not stop them claiming, and the Journal of Chemical Education publishing, a conclusion that the new teaching approach improved student achievement!

Yet setting aside the results of their analysis is what Li and colleagues do. They carry out an analysis, then simply ignore the findings, and conclude the opposite:

"To conclude, our results suggest that the SCTBL method is an effective way to improve teaching quality and student achievement."

Li, et al, 2022, p.1861

It was this complete disregard of scientific values, rather than the more common failure to appreciate that they were not comparing like with like, that I found really shocking – and led to me writing a formal letter to the journal. Not so much surprise that researchers might do this (I know how intoxicating research can be, and how easy it is to become convinced in one's ideas) but that the peer reviewers for the Journal of Chemical Education did not make the firmest recommendation to the editor that this manuscript could NOT be published until it was corrected so that the conclusion was consistent with the findings.

This seems a very stark failure of peer review, and allows a paper to appear in the literature that presents a conclusion totally unsupported by the evidence available and the analysis undertaken. This also means that Li, Ouyang, Xu and Zhang now have a publication on their academic records that any careful reader can see is critically flawed – something that could have been avoided had peer reviewers:

  • used their common sense to appreciate that variations in class average scores from year to year between 79.8 and 80.3 could not possibly be seen as sufficient to indicate a difference in the effectiveness of teaching approaches;
  • recommended that the authors follow the usual scientific norms and adopt the reasonable scholarly value position that the conclusion of your research should follow from, and not contradict, the results of your data analysis.


Work cited:

Notes

1 Strictly the 2017 cohort has the role of a comparison group, but NOT a control group as there was no randomisation or control of variables, so this was not a true experiment (but a 'quasi-experiment'). However, for clarity, I am here using the original authors' term 'control group'.

Read about experimental research design


2 Some journals are now asking researchers to submit their research designs and protocols to peer review BEFORE starting the research. This prevents wasted effort on work that is flawed in design. Journals will publish a report of the research carried out according to an accepted design – as long as the researchers have kept to their research plans (or only made changes deemed necessary and acceptable by the journal). This prevents researchers seeking to change features of the research because it is not giving the expected findings and means that negative results as well as positive results do get published.


3 'Implicitly' assumed as nowhere do the authors state that they think the classes all start as equivalent – but if they do not assume this then their argument has no logic.

Without this assumption, their argument is like claiming that growing conditions for tree development are better at the front of a house than at the back because on average the trees at the front are taller – even though fast-growing mature trees were planted at the front and slow-growing saplings at the back.


4 From my days working with new teachers, a common rookie mistake was assuming that one could tell a teaching innovation was successful because students achieved an average score of 63% on the (say, acids) module taught by the new method when the same class only averaged 46% on the previous (say, electromagnetism) module. Graduate scientists would look at me with genuine surprise when I asked how they knew the two tests were of comparable difficulty!

Read about why natural scientists tend to make poor social scientists


5 In my (rejected) letter to the Journal of Chemical Education I acknowledged some ambiguity in the paper's discussion of the results. Li and colleagues write:

"The average scores of undergraduates majoring in pharmaceutical engineering in the control group and the experimental group were calculated, and the results are shown in Figure 4b. Statistical significance testing was conducted on the exam scores year to year. The average score for the pharmaceutical engineering class was 79.8 points in 2017 (control group). When SCTBL was implemented for the first time in 2018, there was a slight improvement in the average score (i.e., an increase of 0.11 points, not shown in Figure 4b). However, by 2019 and 2020, the average score increased by 0.32 points and 0.54 points, respectively, with an obvious improvement trend. We used a t test to test whether the SCTBL method can create any significant difference in grades among control groups and the experimental group. The calculation results are shown as follows: t1 = 0.0663, t2 = 0.1930, t3 =0.3279 (t1 <t2 <t3 <t𝛼, t𝛼 =2.024, p>0.05), indicating that there is no significant difference in average score. After three years of continuous implementation of SCTBL, the average score showed a constant upward trend, and a steady increase was found. The SCTBL method brought about improvement in the class average, which provides evidence for its effectiveness in medicinal chemistry."

Li, et al, 2022, p.1858-1860, emphasis added

This appears to refer to three distinct measures:

  • average scores (produced by weighted summations of various assessment components as discussed above)
  • exam scores (perhaps just the "midterm examination…and final examination", or perhaps just the final examination?)
  • grades

Formal grades are not discussed in the paper (the word is only used in this one place), although the authors do refer to categorising students into descriptive classes ('levels') according to scores on 'assessments', and may see these as grades:

"Assessments have been divided into five levels: disqualified (below 60), qualified (60-69), medium (70-79), good (80-89), and excellent (90 and above)."

Li, et al, 2022, p.1856, emphasis added

In the longer extract above, the reference to testing difference in "grades" is followed by reporting the outcome of the test for "average score":

"We used a t test to test …grades …The calculation results … there is no significant difference in average score"

As Student's t-test was used, it seems unlikely that the assignment of students to grades could have been tested. That would surely have needed something like the Chi-squared statistic to test categorical data – looking for an association between (i) the distributions of the number of students in the different cells 'disqualified', 'qualified', 'medium', 'good' and 'excellent'; and (ii) treatment group.
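For illustration only, a test of that kind might look something like the sketch below; the counts are invented, and only the grade bands come from the paper.

    # A hypothetical sketch of testing whether grade-band distribution is associated
    # with cohort. The counts are invented for illustration (each row sums to a class
    # of 40); only the band labels come from Li et al. (2022).
    from scipy.stats import chi2_contingency

    # Columns: disqualified, qualified, medium, good, excellent
    counts = [
        [2, 6, 14, 12, 6],   # 'control' cohort (invented)
        [1, 5, 15, 13, 6],   # an 'experimental' cohort (invented)
    ]

    chi2, p, dof, expected = chi2_contingency(counts)
    print(chi2, p, dof)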

Presumably, then, the statistical testing was applied to the average course scores shown in the graph above. This also makes sense because the classification into descriptive classes loses some of the detail in the data, and there is no obvious reason why the researchers would deliberately choose to test 'reduced' data rather than the full data set with the greatest resolution.


Delusions of educational impact

A 'peer-reviewed' study claims to improve academic performance by purifying the souls of students suffering from hallucinations


Keith S. Taber


The research design is completely inadequate…the whole paper is confused…the methodology seems incongruous…there is an inconsistency…nowhere is the population of interest actually identified…No explanation of the discrepancy is provided…results of this analysis are not reported…the 'interview' technique used in the study is highly inadequate…There is a conceptual problem here…neither the validity nor reliability can be judged…the statistic could not apply…the result is not reported…approach is completely inappropriate…these tables are not consistent…the evidence is inconclusive…no evidence to demonstrate the assumed mechanism…totally unsupported claims…confusion of recommendations with findings…unwarranted generalisation…the analysis that is provided is useless…the research design is simply inadequate…no control condition…such a conclusion is irresponsible

Some issues missed in peer review for a paper in the European Journal of Education and Pedagogy

An invitation to publish without regard to quality?

I received an email from an open-access journal called the European Journal of Education and Pedagogy, with the subject heading 'Publish Fast and Pay Less' which immediately triggered the thought "another predatory journal?" Predatory journals publish submissions for a fee, but do not offer the editorial and production standards expected of serious research journals. In particular, they publish material which clearly falls short of rigorous research despite usually claiming to engage in peer review.

A peer reviewed journal?

Checking out the website, I found the usual assurances that the journal used rigorous peer review:

"The process of reviewing is considered critical to establishing a reliable body of research and knowledge. The review process aims to make authors meet the standards of their discipline, and of science in general.

We use a double-blind system for peer-reviewing; both reviewers and authors' identities remain anonymous to each other. The paper will be peer-reviewed by two or three experts; one is an editorial staff and the other two are external reviewers."

https://www.ej-edu.org/index.php/ejedu/about

Peer review is critical to the scientific process. Work is only published in (serious) research journals when it has been scrutinised by experts in the relevant field, and any issues raised responded to in terms of revisions sufficient to satisfy the editor.

I could not find who the editor(-in-chief) was, but the 'editorial team' of European Journal of Education and Pedagogy were listed as

  • Bea Tomsic Amon, University of Ljubljana, Slovenia
  • Chunfang Zhou, University of Southern Denmark, Denmark
  • Gabriel Julien, University of Sheffield, UK
  • Intakhab Khan, King Abdulaziz University, Saudi Arabia
  • Mustafa Kayıhan Erbaş, Aksaray University, Turkey
  • Panagiotis J. Stamatis, University of the Aegean, Greece

I decided to look up the editor based in England, where I am also based, but could not find a web presence for him at the University of Sheffield. Using the ORCID (Open Researcher and Contributor ID) provided on the journal website, I found that his ORCID biography places him at the University of the West Indies and makes no mention of Sheffield.

If the European Journal of Education and Pedagogy is organised like a serious research journal, then each submission is handled by one of this editorial team. However the reference to "editorial staff" might well imply that, like some other predatory journals I have been approached by (e.g., Are you still with us, Doctor Wu?), the editorial work is actually carried out by office staff, not qualified experts in the field.

That would certainly help explain the publication, in this 'peer-reviewed research journal', of the first paper that piqued my interest enough to motivate me to access and read the text.


The Effects of Using the Tazkiyatun Nafs Module on the Academic Achievement of Students with Hallucinations

The abstract of the paper published in what claims to be a peer-reviewed research journal

The paper initially attracted my attention because it seemed to be about the treatment of a medical condition, so I wondered what it was doing in an education journal. Yet, the paper seemed to also be about an intervention to improve academic performance. As I read the paper, I found a number of flaws and issues (some very obvious, some quite serious) that should have been spotted by any qualified reviewer or editor, and which should have indicated that possible publication be deferred until these matters were satisfactorily addressed.

This is especially worrying as this paper makes claims relating to the effective treatment of a symptom of potentially serious, even critical, medical conditions through religious education ("a spiritual approach", p.50): claims that might encourage sufferers to defer seeking medical diagnosis and treatment. Moreover, these are claims that are not supported by any evidence presented in this paper that the editor of the European Journal of Education and Pedagogy decided was suitable for publication.


An overview of what is demonstrated, and what is claimed, in the study.

Limitations of peer review

Peer review is not a perfect process: it relies on busy human beings spending time on additional (unpaid) work, and it is only effective if suitable experts can be found that fit with, and are prepared to review, a submission. It is also generally more challenging in the social sciences than in the natural sciences. 1

That said, one sometimes finds papers published in predatory journals where one would expect any intelligent person with a basic education to notice problems without needing any specialist knowledge at all. The study I discuss here is a case in point.

Purpose of the study

Under the heading 'research objectives', the reader is told,

"In general, this journal [article?] attempts to review the construction and testing of Tazkiyatun Nafs [a Soul Purification intervention] to overcome the problem of hallucinatory disorders in student learning in secondary schools. The general objective of this study is to identify the symptoms of hallucinations caused by subtle beings such as jinn and devils among students who are the cause of disruption in learning as well as find solutions to these problems.

Meanwhile, the specific objective of this study is to determine the effect of the use of Tazkiyatun Nafs module on the academic achievement of students with hallucinations.

To achieve the aims and objectives of the study, the researcher will get answers to the following research questions [sic]:

Is it possible to determine the effect of the use of the Tazkiyatun Nafs module on the academic achievement of students with hallucinations?"

Awang, 2022, p.42

I think I can save readers a lot of time regarding the research question by suggesting that, in this study at least, the answer is no – if only because the research design is completely inadequate to answer the research question. (I should point out that the author comes to the opposite conclusion: e.g., "the approach taken in this study using the Tazkiyatun Nafs module is very suitable for overcoming the problem of this hallucinatory disorder", p.49.)

Indeed, the whole paper is confused in terms of what it is setting out to do, what it actually reports, and what might be concluded. As one example, the general objective of identifying "the symptoms of hallucinations caused by subtle beings such as jinn and devils" (but surely, the hallucinations are the symptoms here?) seems to have been forgotten, or, at least, does not seem to be addressed in the paper. 2


The study assumes that hallucinations are caused by subtle beings such as jinn and devils possessing the students.
(Image by Tünde from Pixabay)

Methodology

So, this seems to be an intervention study.

  • Some students suffer from hallucinations.
  • This is detrimental to their education.
  • It is hypothesised that the hallucinations are caused by supernatural spirits ("subtle beings that lead to hallucinations"), so, a soul purification module might counter this detriment;
  • if so, sufferers engaging with the soul purification module should improve their academic performance;
  • and so the effect of the module is being tested in the study.

Thus we have a kind of experimental study?

No, not according to the author. Indeed, the study only reports data from a small number of unrepresentative individuals with no controls,

"The study design is a case study design that is a qualitative study in nature. This study uses a case study design that is a study that will apply treatment to the study subject to determine the effectiveness of the use of the planned modules and study variables measured many times to obtain accurate and original study results. This study was conducted on hallucination disorders [students suffering from hallucination disorders?] to determine the effectiveness of the Tazkiyatun Nafs module in terms of aspects of student academic achievement."

Awang, 2022, p.42

Case study?

So, the author sees this as a case study. Research methodologies are better understood as clusters of similar approaches rather than unitary categories – but case study is generally seen as naturalistic, rather than involving an intervention by an external researcher. So, case study seems incongruous here. Case study involves the detailed exploration of an instance (of something of interest – a lesson, a school, a course of study, a textbook, …) reported with 'thick description'.

Read about the characteristics of case study research

The case is usually a complex phenomenon which is embedded within a context from which it cannot readily be untangled (for example, a lesson always takes place within a wider context of a teacher working over time with a class on a course of study, within a curricular, and institutional, and wider cultural, context, all of which influence the nature of the specific lesson). So, due to the complex and embedded nature of cases, they are all unique.

"a case study is a study that is full of thoroughness and complex to know and understand an issue or case studied…this case study is used to gain a deep understanding of an issue or situation in depth and to understand the situation of the people who experience it"

Awang, 2022, p.42

A case is usually selected either because that case is of special importance to the researcher (an intrinsic case study – e.g., I studied this school because it is the one I was working in) or because we hope this (unique) case can tell us something about similar (but certainly not identical) other (also unique) cases. In the latter case [sic], an instrumental case study, we are always limited by the extent we might expect to be able to generalise beyond the case.

This limited generalisation might suggest we should not work with a single case, but rather look for a suitably representative sample of all cases: but we sometimes choose case study because the complexity of the phenomena suggests we need to use extensive, detailed data collection and analyses to understand the complexity and subtlety of any case. That is (i.e., the compromise we choose is), we decide we will look at one case in depth because that will at least give us insight into the case, whereas a survey of many cases will inevitably be too superficial to offer any useful insights.

So how does Awang select the case for this case study?

"This study is a case study of hallucinatory disorders. Therefore, the technique of purposive sampling (purposive sampling [sic]) is chosen so that the selection of the sample can really give a true picture of the information to be explored ….

Among the important steps in a research study is the identification of populations and samples. The large group in which the sample is selected is termed the population. A sample is a small number of the population identified and made the respondents of the study. A case or sample of n = 1 was once used to define a patient with a disease, an object or concept, a jury decision, a community, or a country, a case study involves the collection of data from only one research participant…"

Awang, 2022, p.42

Of course, a case study of "a community, or a country" – or of a school, or a lesson, or a professional development programme, or a school leadership team, or a homework policy, or an enrichment activity, or … – would almost certainly be inadequate if it was limited to "the collection of data from only one research participant"!

I do not think this study actually is "a case study of hallucinatory disorders [sic]". Leaving aside the shift from singular ("a case study") to plural ("disorders"), the research does not investigate a/some hallucinatory disorders, but the effect of a soul purification module on academic performance. (Actually, spoiler alert  😉, it does not actually investigate the effect of a soul purification module on academic performance either, but the author seems to think it does.)

If this is a case study, there should be the selection of a case, not a sample. Sometimes we do sample within a case in case study, but only from those identified as part of the case. (For example, if the case was a year group in a school, we may not have resources to interact in depth with several hundred different students.) Perhaps this is pedantry, as the reader likely knows what Awang meant by 'sample' in the paper – but semantics is important in research writing (a sample is chosen to represent a population, whereas the choice of case study is an acknowledgement that generalisation back to a population is not being claimed).

However, if "among the important steps in a research study is the identification of populations" then it is odd that nowhere in the paper is the population of interest actually specified!

Things slip our minds. Perhaps Awang intended to define the population, forgot, and then missed this when checking the text – but, hey, that is just the kind of thing the reviewers and editor are meant to notice! Otherwise this looks very like including material from standard research texts to pay lip-service to the idea that research design needs to be principled, but without really appreciating what the phrases used actually mean. This impression is also given by the descriptions of how data (for example, from interviews) were analysed – but which are not reflected at all in the results section of the paper. (I am not accusing Awang of this, but because of the poor standard of peer review not raising the question, the author is left vulnerable to such an evaluation.)

The only one research participant?

So, what do we know about the "case or sample of n = 1 ", the "only one research participant" in this study?

"The actual respondents in this case study related to hallucinatory disorders were five high school students. The supportive respondents in the case study related to hallucination disorders were five counseling teachers and five parents or guardians of students who were the actual respondents."

Awang, 2022, p.42

It is certainly not impossible that a case could comprise a group of five people – as long as those five make up a naturally bounded group, that is, a group that a reasonable person would recognise as existing as a coherent entity because they clearly had something in common (they were in the same school class, for example; they were attending the same group therapy session, perhaps; they were a friendship group; they were members of the same extended family diagnosed with hallucinatory disorders…something!) There is no indication here of how these five make up a case.

The identification of the participants as a case might have made sense had the participants collectively undertaken the module as a group, but the reader is told: "This study is in the form of a case study. Each practice and activity in the module are done individually" (p.50). Another justification could have been if the module had been offered in one school, and these five participants were the students enrolled in the programme at that time but as "analysis of  the  respondents'  academic  performance  was conducted  after  the  academic  data  of  all  respondents  were obtained  from  the  respective  respondent's  school" (p.45) it seems they did not attend a single school.

The results tables and reports in the text refer to "respondent 1" to "respondent 4". In case study, an approach which recognises the individuality and inherent value of the particular case, we would usually assign assumed names to research participants, not numbers. But if we are going to use numbers, should there not be a respondent 5?

The other one research participant?

It seems that there is something odd here.

Both the passage above and the abstract refer to five respondents. The results report on four. So what is going on? No explanation of the discrepancy is provided. Perhaps:

  • There only ever were four participants, and the author made a mistake in counting.
  • There only ever were four participants, and the author made a typographical mistake (well, strictly, six typographical mistakes) in drafting the paper, and then missed this in checking the manuscript.
  • There were five respondents and the author forgot to include data on respondent 5 purely by accident.
  • There were five respondents, but the author decided not to report on the fifth deliberately for a reason that is not revealed (perhaps the results did not fit with the desired outcome?)

The significant point is not that there is an inconsistency but that this error was missed by peer reviewers and the editor – if there ever was any genuine peer review. This is the kind of mistake that a school child could spot – so, how is it possible that 'expert reviewers' and 'editorial staff' either did not notice it, or did not think it important enough to query?

Research instruments

Another section of the paper reports the instrumentation used in the study.

"The research instruments for this study were Takziyatun Nafs modules, interview questions, and academic document analysis. All these instruments were prepared by the researcher and tested for validity and reliability before being administered to the selected study sample [sic, case?]."

Awang, 2022, p.42

Of course, it is important to test instruments for validity and reliability (or perhaps authenticity and trustworthiness when collecting qualitative data). But it is also important

  • to tell the reader how you did this
  • to report the outcomes

which seems to be missing (apart from in regard to part of the implemented module – see below). That is, the reader of a research study wants evidence not simply promises. Simply telling readers you did this is a bit like meeting a stranger who tells you that you can trust them because they (i.e., say that they) are honest.

Later the reader is told that

"Semi- structured interview questions will be [sic, not 'were'?] developed and validated for the purpose of identifying the causes and effects of hallucinations among these secondary school students…

…this interview process will be [sic, not 'was'] conducted continuously [sic!] with respondents to get a clear and specific picture of the problem of hallucinations and to find the best solution to overcome this disorder using Islamic medical approaches that have been planned in this study

Awang, 2022, pp.43-44

At the very least, this seems to confuse the plan for the research with a report of what was done. (But again, apparently, the reviewers and editorial staff did not think this needed addressing.) This is also confusing as it is not clear how this aspect of the study relates to the intervention. Were the interviews carried out before the intervention to help inform the design of the modules? (Presumably not, as the modules had already been "tested for validity and reliability before being administered to the selected study sample".) Perhaps there are clear and simple answers to such questions – but the reader will not know, because the reviewers and editor did not seem to feel they needed to be posed.

If "Interviews are the main research instrument in this study" (p.43), then one would expect to see examples of the interview schedules – but these are not presented. The paper reports a complex process for analysing interview data, but this is not reflected in the findings reported. The readers is told that the six stage process leads to the identifications and refinement of main and sub-categories. Yet, these categories are not reported in the paper. (But, again, peer reviewers and the editor did not apparently raise this as something to be corrected.) More generally "data  analysis  used  thematic  analysis  methods" (p.44), so why is there no analysis presented in terms of themes? The results of this analysis are simply not reported.

The reader is told that

"This  interview  method…aims to determine the respondents' perspectives, as well as look  at  the  respondents'  thoughts  on  their  views  on  the issues studied in this study."

Awang, 2022, p.44

But there is no discussion of participants' perspectives and views in the findings of the study. 2 Did the peer reviewers and editor not think this needed addressing before publication?

Even more significantly, in a qualitative study where interviews are supposedly the main research instrument, one would expect to see extracts from the interviews presented as part of the findings to support and exemplify claims being made: yet, there are none. (Did this not strike the peer reviewers and editor as odd: presumably they are familiar with the norms of qualitative research?)

The only quotation from the qualitative data (in this 'qualitative' study) I can find appears in the implications section of the paper:

"Are you aware of the importance of education to you? Realize. Is that lesson really important? Important. The success of the student depends on the lessons in school right or not? That's right"

Respondent 3: Awang, 2022, p.49

This seems a little bizarre, if we accept this is, as reported, an utterance from one of the students, Respondent 3. It becomes more sensible if this is actually condensed dialogue:

"Are you aware of the importance of education to you?"

"Realize."

"Is that lesson really important?"

"Important."

"The success of the student depends on the lessons in school right or not?"

"That's right"

It seems the peer review process did not lead to suggesting that the material should be formatted according to the norms for presenting dialogue in scholarly texts by indicating turns. In any case, if that is typical of the 'interview' technique used in the study then it is highly inadequate, as clearly the interviewer is leading the respondent, and this is more an example of indoctrination than open-ended enquiry.

Random sampling of data

Completely incongruous with the description of the purposeful selection of the participants for a case study is the account of how the assessment data was selected for analysis:

"The  process  of  analysis  of  student  achievement documents is carried out randomly by taking the results of current  examinations  that  have  passed  such  as the  initial examination of the current year or the year before which is closest  to  the  time  of  the  study."

Awang, 2022, p.44

Did the peer reviewers or editor not question the use of the term random here? It is unclear what is meant by 'random' here, but clearly if the analysis was based on randomly selected data that would undermine the results.

Validating the soul purification module

There is also a conceptual problem here. The Takziyatun Nafs modules are the intervention materials (part of what is being studied) – so they cannot also be research instruments (used to study them). Surely, if the Takziyatun Nafs modules had been shown to be valid and reliable before carrying out the reported study, as suggested here, then the study would not be needed to evaluate their effectiveness. But, presumably, expert peer reviewers (if there really were any) did not see an issue here.

The reliability of the intervention module

The Takziyatun Nafs modules had three components, and the author reports the second of the three was subjected to tests of validity and reliability. It seems that Awang thinks that this demonstrates the validity and reliability of the complete intervention,

"The second part of this module will go through [sic] the process of obtaining the validity and reliability of the module. Proses [sic] to obtain this validity, a questionnaire was constructed to test the validity of this module. The appointed specialists are psychologists, modern physicians (psychiatrists), religious specialists, and alternative medicine specialists. The validity of the module is identified from the aspects of content, sessions, and activities of the Tazkiyatun Nafs module. While to obtain the value of the reliability coefficient, Cronbach's alpha coefficient method was used. To obtain this Cronbach's alpha coefficient, a pilot test was conducted on 50 students who were randomly selected to test the reliability of this module to be conducted."

Awang, 2022, pp.43-44

Now to unpack this, it may be helpful to briefly outline what the intervention involved (as the paper is open access, anyone can access and read the full details in the report).


From the MGM film 'A Night at the Opera' (1935): "The introduction of the module will elaborate on the introduction, rationale, and objectives of this module introduced"

The description does not start off very helpfully ("The introduction of the module will elaborate on the introduction, rationale, and objectives of this module introduced" (p.43) put me in mind of the Marx brothers: "The party of the first part shall be known in this contract as the party of the first part"), but some key points are,

"the Tazkiyatun Nafs module was constructed to purify the heart of each respondent leading to the healing of hallucinatory disorders. This liver purification process is done in stages…

"the process of cleansing the patient's soul will be done …all the subtle beings in the patient will be expelled and cleaned and the remnants of the subtle beings in the patient will be removed and washed…

The second process is the process of strengthening and the process of purification of the soul or heart of the patient …All the mazmumah (evil qualities) that are in the heart must be discarded…

The third process is the process of enrichment and the process of distillation of the heart and the practices performed. In this process, there will be an evaluation of the practices performed by the patient as well as the process to ensure that the patient is always clean from all the disturbances and disturbances [sic] of subtle beings to ensure that students will always be healthy and clean from such disturbances…

Awang, 2022, p.45, p.43

Quite how this process of exorcising and distilling and cleansing will occur is not entirely clear (and if the soul is equated with the heart, how is the liver involved?), but it seems to involve reflection and prayer and contemplation of scripture – certainly a very personal and therapeutic process.

And yet its validity and reliability were tested by giving a questionnaire to 50 students randomly selected (from the unspecified population, presumably)? No information is given on how a random selection was made (Taber, 2013) – which allows a reader to be very sceptical that this actually was a random sample from the (un?)identified population, and not just an arbitrary sample of 50 students. (So, that is twice the word 'random' is used in the paper when it seems inappropriate.)

It hardly matters here, as clearly neither the validity nor the reliability of a spiritual therapy can be judged from a questionnaire (especially when administered to people who have never undertaken the therapy). In any case, the "reliability coefficient" obtained from an administration of a questionnaire ONLY applies to that sample on that occasion. So, the statistic could not apply to the four participants in the study. And, in any case, the result is not reported, so the reader has no idea what the value of Cronbach's alpha was (but then, this was described as a qualitative study!)

Moreover, Cronbach's alpha only indicates the internal coherence of the items on a scale (Taber, 2019): so, it only indicates whether the set of questions included in the questionnaire seem to be accessing the same underlying construct in motivating the responses of those surveyed across the set of items. It gives no information about the reliability of the instrument (i.e., whether it would give the same results on another occasion).
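For readers unfamiliar with the statistic, here is a minimal sketch (with invented item responses) of how Cronbach's alpha is usually computed; it characterises one set of items answered on one occasion by one sample, not the stability of an instrument over time.

    # A minimal sketch, with invented responses, of computing Cronbach's alpha:
    # an index of internal consistency for one administration of a set of items.
    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: rows = respondents, columns = items."""
        scores = np.asarray(item_scores, dtype=float)
        k = scores.shape[1]
        sum_item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

    # Invented Likert-type responses: six respondents, four items.
    responses = [
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
        [3, 2, 3, 3],
    ]
    print(round(cronbach_alpha(responses), 2))  # about 0.92 for this invented data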

This approach to testing validity and reliability is then completely inappropriate and unhelpful. So, even if the outcomes of the testing had been reported (and they are not) they would not offer any relevant evidence. Yet it seems that peer reviewers and editor did not think to question why this section was included in the paper.

Ethical issues

A study of this kind raises ethical issues. It may well be that the research was carried out in an entirely proper and ethical manner, but it is usual in studies with human participants ('human subjects') to make this clear in the published report (Taber, 2014b). A standard issue is whether the participants gave voluntary, informed, consent. This would mean that they were given sufficient information about the study at the outset to be able to decide if they wished to participate, and were under no undue pressure to do so. The 'respondents' were school students: if they were considered minors in the research context (and oddly for a 'case study' such basic details as age and gender are not reported) then parental permission would also be needed, again subject to sufficient briefing and no duress.

However, in this specific research there are also further issues due to the nature of the study. The participants were subject to medical disorders, so how did the researcher obtain information about, and access to, the students without medical confidentiality being broken? Who were the 'gatekeepers' who provided access to the children and their personal data? The researcher also obtained assessment data "from  the  class  teacher  or  from  the  Student Affairs section of the student's school" (p.44), so it is important to know that students (and parents/guardians) consented to this. Again, peer review does not seem to have identified this as an issue to address before publication.

There is also the major underlying question about the ethics of a study when recognising that these students were (or could be, as details are not provided) suffering from serious medical conditions, but employing religious education as a treatment ("This method of treatment is to help respondents who suffer from hallucinations caused by demons or subtle beings", p.44). Part of the theoretical framework underpinning the study is the assumption that what is being addressed is "the problem of hallucinations caused by the presence of ethereal beings…" (p.43), yet it is also acknowledged that,

"Hallucinatory disorders in learning that will be emphasized in this study are due to several problems that have been identified in several schools in Malaysia. Such disorders are psychological, environmental, cultural, and sociological disorders. Psychological disorders such as hallucinatory disorders can lead to a more critical effect of bringing a person prone to Schizophrenia. Psychological disorders such as emotional disorders and psychiatric disorders. …Among the causes of emotional disorders among students are the school environment, events in the family, family influence, peer influence, teacher actions, and others."

Awang, 2022, p.41

There seem to be three ways of understanding this apparent discrepancy, which I might gloss:

  1. there are many causes of conditions that involve hallucinations, including, but not only, possession by evil or mischievous spirits;
  2. the conditions that lead to young people having hallucinations may be understood at two complementary levels, at a spiritual level in terms of a need for inner cleansing and exorcising of subtle beings, and in terms of organic disease or conditions triggered by, for example, social and psychological factors;
  3. in the introduction the author has relied on various academic sources to discuss the nature of the phenomenon of students having hallucinations, but he actually has a working assumption that is completely different: hallucinations are due to the presence of jinn or other spirits.

I do not think it is clear which of these positions is being taken by the study's author.

  1. In the first case it would be necessary to identify which causes are present in potential respondents and only recruit those suffering possession for this study (which does not seem to have been done);
  2. In the second case, spiritual treatment would need to complement medical intervention (which would completely undermine the validity of the study, as any cessation of hallucinations would more likely be due to the medical treatment of the underlying causes than to the tested intervention);
  3. The third position is clearly problematic in terms of academic scholarship as it is either completely incompetent or deliberately disregards academic norms that require the design of a study to reflect the conceptual framework set out to motivate it.

So, was this tested intervention implemented instead of or alongside formal medical intervention?

  • If it was alongside medical treatment, then that raises a major confound for the study.
  • Yet it would clearly be unacceptable to deny sufferers indicated medical treatment in order to test an educational intervention that is in effect a form of exorcism.

Again, it may be that there are simple and adequate responses to these questions (although here I really cannot see what they might be), but unfortunately it seems the journal referees and editor did not think to ask for them.

Findings


Results tables presented in Awang, 2022 (p.45) [Published with a creative commons licence allowing reproduction]: "Based on the findings stated in Table I show that serial respondents experienced a decline in academic achievement while they face the problem of hallucinations. In contrast to Table II which shows an improvement in students' academic achievement after hallucinatory disorders can be resolved." If we assume that columns in the second table have been mislabelled, then it seems the school performance of these four students suffered while they were suffering hallucinations, but improved once they recovered. From this, we can infer…?

The key findings presented concern academic performance at school. Core results are presented in tables I and II. Unfortunately, these tables are not consistent: they report contradictory results for the students' academic performance before and during the periods when they had hallucinations.

They can be made consistent if the reader assumes that two of the columns in table II are mislabelled. If the reader assumes that the column labelled 'before disruption' actually reports the performance 'during disruption' and that the column actually labelled 'during disruption' is something else, then they become consistent. For the results to tell a coherent story and agree with the author's interpretation this 'something else' presumably should be 'after disruption'.

This is a very unfortunate error – and moreover one that is obvious to any careful reader. (So, why was it not obvious to the referees and editor?)

As well as looking at these overall scores, other assessment data is presented separately for each of respondent 1 – respondent 4. These sections comprise presentations of information about grades and class positions, mixed with claims about the effects of the intervention. These claims are not based on any evidence and in many cases are conclusions about 'respondents' in general although they are placed in sections considering the academic assessment data of individual respondents. So, there are a number of problems with these claims:

  • they are of the nature of conclusions, but appear in the section presenting the findings;
  • they are about the specific effects of the intervention that the author assumes has influenced academic performance, not the data analysed in these sections;
  • they are completely unsubstantiated as no data or analysis is offered to support them;
  • often they make claims about 'respondents' in general, although as part of the consideration of data from individual learners.

Despite this, the paper passed peer-review and editorial scrutiny.

Rhetorical research?

This paper seems to be an example of a kind of 'rhetorical research' where a researcher is so convinced about their pre-existing theoretical commitments that they simply assume they have demonstrated them. Here the assumptions seem to be:

  1. Recovering from suffering hallucinations will increase student performance
  2. Hallucinations are caused by jinn and devils
  3. A spiritual intervention will expel jinn and devils
  4. So, a spiritual intervention will cure hallucinations
  5. So, a spiritual intervention will increase student performance

The researcher provided a spiritual intervention, and the student performance increased, so it is assumed that the scheme is demonstrated. The data presented is certainly consistent with this scheme, but does not in itself demonstrate it. Awang provides evidence that student performance improved in four individuals after they had received the intervention – but there is no evidence offered to demonstrate the assumed mechanism.

A gardener might think that complimenting seedlings will cause them to grow. Perhaps she praises her seedlings every day, and they do indeed grow. Are we persuaded about the efficacy of her method, or might we suspect another cause at work? Would the peer-reviewers and editor of the European Journal of Education and Pedagogy be persuaded this demonstrated that compliments cause plant growth? On the evidence of this paper, perhaps they would.

This is what Awang tells readers about the analysis undertaken:

"Each student respondent involved in this study [sic, presumably not, rather the researcher] will use the analysis of the respondent's performance to determine the effect of hallucination disorders on student achievement in secondary school is accurate.

The elements compared in this analysis are as follows: a) difference in mean percentage of achievement by subject, b) difference in grade achievement by subject and c) difference in the grade of overall student achievement. All academic results of the respondents will be analyzed as well as get the mean of the difference between the performance before, during, and after the respondents experience hallucinations.

These results will be used as research material to determine the accuracy of the use of the Tazkiyatun Nafs Module in solving the problem of hallucinations in school and can improve student achievement in academic school."

Awang, 2022, p.45

There is clearly a large jump between the analysis outlined in the second paragraph here, and testing the study hypotheses as set out in the final paragraph. But the author does not seem to notice this (and more worryingly, nor do the journal's reviewers and editor).

So, interleaved into the account of findings discussing "mean percentage of achievement by subject… difference in grade achievement by subject… difference in the grade of overall student achievement" are totally unsupported claims. Here is an example for Respondent 1:

"Based on the findings of the respondent's achievement in the  grade  for  Respondent  1  while  facing  the  problem  of hallucinations  shows  that  there  is  not  much  decrease  or deterioration  of  the  respondent's  grade.  There  were  only  4 subjects who experienced a decline in grade between before and  during  hallucination  disorder.  The  subjects  that experienced  decline  were  English,  Geography,  CBC, and Civics.  Yet  there  is  one  subject  that  shows  a  very  critical grade change the Civics subject. The decline occurred from grade A to grade E. This shows that Civics education needs to be given serious attention in overcoming this problem of decline. Subjects experiencing this grade drop were subjects involving  emotion,  language,  as  well  as  psychomotor fitness.  In  the  context  of  psychology,  unstable  emotional development  leads  to  a  decline  in the psychomotor  and emotional development of respondents.

After  the  use  of  the  Tazkiyatun  Nafs  module  in overcoming  this  problem,  hallucinatory  disorders  can  be overcome.  This  situation  indicates  the  development  of  the respondents  during  and  after  experiencing  hallucinations after  practicing  the  Tazkiyatun  Nafs  module.  The  process that takes place in the Tzkiyatun Nafs module can help the respondent  to  stabilize  his  emotions  and  psyche  for  the better. From the above findings there were 5 subjects who experienced excellent improvement in grades. The increase occurred in English, Malay, Geography, and Civics subjects. The best improvement is in the subject of Civic education from grade E to grade B. The improvement in this language subject  shows  that  the  respondents'  emotions  have stabilized.  This  situation  is  very  positive  and  needs  to  be continued for other subjects so that respondents continue to excel in academic achievement in school.""

Awang, 2022, p.45 (emphasis added)

The material which I show here as underlined is interjected completely gratuitously. It does not logically fit in the sequence. It is not part of the analysis of school performance. It is not based on any evidence presented in this section. Indeed, nor is it based on any evidence presented anywhere else in the paper!

This pattern is repeated in discussing other aspects of respondents' school performance. Although there is mention of other factors which seem especially pertinent to the dip in school grades ("this was due to the absence of the respondents to school during the day the test was conducted", p.46; "it was an increase from before with no marks due to non-attendance at school", p.46) the discussion of grades is interspersed with (repetitive) claims about the effects of the intervention for which no evidence is offered.


§: Differences in Respondents' Grade Achievement by Subject

  • Respondent 1: "After the use of the Tazkiyatun Nafs module in overcoming this problem, hallucinatory disorders can be overcome. This situation indicates the development of the respondents during and after experiencing hallucinations after practicing the Tazkiyatun Nafs module. The process that takes place in the Tzkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better." (p.45)
  • Respondent 2: "After the use of the Tazkiyatun Nafs module as a soul purification module, showing the development of the respondents during and after experiencing hallucination disorders is very good. The process that takes place in the Tzkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better." (p.46)
  • Respondent 3: "The process that takes place in the Tazkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better" (p.46)
  • Respondent 4: "The process that takes place in the Tazkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better." (p.46)

§: Differences in Respondent Grades according to Overall Academic Achievement

  • Respondent 1: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module… In general, the use of Tazkiyatun Nafs module successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (pp.46-7)
  • Respondent 2: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module. … This excellence also shows that the respondents have recovered from hallucinations after practicing the methods found in the Tazkiayatun Nafs module that has been introduced. In general, the use of the Tazkiyatun Nafs module successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (p.47)
  • Respondent 3: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module… In general, the use of the Tazkiyatun Nafs module successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (p.47)
  • Respondent 4: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module… In general, the use of the Tazkiyatun Nafs module has successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (p.47)
Unsupported claims made within findings sections reporting analyses of individual student academic grades: note (a) how these statements, included in the analysis of individual school performance data from four separate participants (in a case study – a methodology that recognises and values diversity and individuality), are very similar across the participants; and (b) how claims about 'respondents' (plural) are included in the reports of findings from individual students.

Awang summarises what he claims the analysis of 'differences in respondents' grade achievement by subject' shows:

"The use of the Tazkiyatun Nafs module in this study helped the students improve their respective achievement grades. Therefore, this soul purification module should be practiced by every student to help them in stabilizing their soul and emotions and stay away from all the disturbances of the subtle beings that lead to hallucinations"

Awang, 2022, p.46

And, on the next page, Awang summarises what he claims the analysis of 'differences in respondent grades according to overall academic achievement' shows:

"The use of the Tazkiyatun Nafs module in this study helped the students improve their respective overall academic achievement. Therefore, this soul purification module should be practiced by every student to help them in stabilizing the soul and emotions as well as to stay away from all the disturbances of the subtle beings that lead to hallucination disorder."

Awang, 2022, p.47

So, the analysis of grades is said to demonstrate the value of the intervention, and indeed Awang considers this is reason to extend the intervention beyond the four participants, not just to others suffering hallucinations, but to "every student". The peer review process seems not to have raised queries about

  • the unsupported claims,
  • the confusion of recommendations with findings (it is normal to keep to results in a findings section), nor
  • the unwarranted generalisation from four hallucination sufferers to all students whether healthy or not.

Interpreting the results

There seem to be two stories that can be told about the results:

When the four students suffered hallucinations, this led to a deterioration in their school performance. Later, once they had recovered from the episodes of hallucinations, their school performance improved.  

Narrative 1

Now narrative 1 relies on a very substantial implied assumption – which is that the numbers presented as school performance are comparable over time. So, a control would be useful: such as what happened to the performance scores of other students in the same classes over the same time period. It seems likely they would not have shown the same dip – unless the dip was related to something other than hallucinations – such as the well-recognised dip after long school holidays, or some cultural distraction (a major sports tournament; fasting during Ramadan; political unrest; a pandemic…). Without such a control the evidence is suggestive (after all, being ill, and missing school as a result, is likely to lead to a dip in school performance, so the findings are not surprising), but inconclusive.

Intriguingly, the author tells readers that "student achievement statistics from the beginning of the year to the middle of the current [sic, published in 2022] year in secondary schools in Northern Peninsular Malaysia that have been surveyed by researchers show a decline (Sabri, 2015 [sic])" (p.42), but this is not considered in relation to the findings of the study.

When the four students suffered hallucinations, this led to a deterioration in their school performance. Later, as a result of undergoing the soul purification module, their school performance improved.  

Narrative 2

Clearly narrative 2 suffers from the same limitation as narrative 1. However, it also demands an extra step in making an inference. I could re-write this narrative:

When the four students suffered hallucinations, this led to a deterioration in their school performance. Later, once they had recovered from the episodes of hallucinations, their school performance improved. 
AND
the recovery was due to engagement with the soul purification module.

Narrative 2'.

That is, even if we accept narrative 1 as likely, to accept narrative 2 we would also need to be convinced that:

  • a) sufferers from medical conditions leading to hallucinations do not suffer periodic attacks with periods of remission in between; or
  • b) episodes of hallucinations cannot be due to one-off events (emotional trauma, T.I.A. {transient ischaemic attack or mini-strokes},…) that resolve naturally in time; or
  • c) sufferers from medical conditions leading to hallucinations do not find they resolve due to maturation; or
  • d) the four participants in this study did not undertake any change in lifestyle (getting more sleep, ceasing eating strange fungi found in the woods) unrelated to the intervention that might have influenced the onset of hallucinations; or
  • e) the four participants in this study did not receive any medical treatment independent of the intervention (e.g., prescribed medication to treat migraine episodes) that might have influenced the onset of hallucinations

Despite this study being supposedly a case study (where the expectation is there should be 'thick description' of the case and its context), there is no information to help us exclude such options. We do not know the medical diagnoses of the conditions causing the participants' hallucinations, or anything about their lives or any medical treatment that may have been administered. Without such information, the analysis that is provided is useless for answering the research question.

In effect, regardless of all the other issues raised, the key problem is that the research design is simply inadequate to test the research question. But it seems the referees and editor did not notice this shortcoming.

Alleged implications of the research

After presenting his results Awang draws various implications, and makes a number of claims about what had been found in the study:

  • "After the students went through the treatment session by using the Tazkiayatun Nafsmodule to treat hallucinations, it showed a positive effect on the student respondents. All this was certified by the expert, the student's parents as well as the  counselor's  teacher." (p.48)
  • "Based on these findings, shows that hallucinations are very disturbing to humans and the appropriate method for now to solve this problem is to use the Tazkiyatun Nafs Module." (p.48)
  • "…the use of the Tazkiyatun Nafs module while the  respondent  is  suffering  from  hallucination  disorder  is very  appropriate…is very helpful to the respondents in restoring their minds and psyche to be calmer and healthier. These changes allow  students  to  focus  on  their  studies  as  well  as  allow them to improve their academic performance better." (p.48)
  • "The use of the Tazkiyatun Nafs Module in this study has led to very positive changes there are attitudes and traits of students  who  face  hallucinations  before.  All  the  negative traits  like  irritability, loneliness,  depression,etc.  can  be overcome  completely." (p.49)
  • "The personality development of students is getting better and perfect with the implementation of the Tazkiaytun Nafs module in their lives." (p.49)
  • "Results  indicate that  students  who  suffer  from  this hallucination  disorder are in  a  state  of  high  depression, inactivity, fatigue, weakness and pain,and insufficient sleep." (p.49)
  • "According  to  the  findings  of  this study,  the  history  of  this  hallucination  disorder  started in primary  school  and  when  a  person  is  in  adolescence,  then this  disorder  becomes  stronger  and  can  cause  various diseases  and  have  various  effects  on  a  person who  is disturbed." (p.50)

Given the range of interview data that Awang claims to have collected and analysed, at least some of the claims here are possibly supported by the data. However, none of this data and analysis is available to the reader. 2 These claims are not supported by any evidence presented in the paper. Yet the peer reviewers and the editor who read the manuscript seem to have felt it entirely acceptable to publish such claims in a research paper without presenting any evidence whatsoever.

Summing up

In summary: as far as these four students were concerned (but not, perhaps, the fifth participant?), there did seem to be a relationship between periods of experiencing hallucinations and lower school performance – perhaps explained by such factors as "absenteeism to school during the day the test was conducted" (p.46):

"the performance shown by students who face chronic hallucinations is also declining and  declining.  This  is  all  due  to  the  actions  of  students leaving the teacher's learning and teaching sessions as well as  not  attending  school  when  this  hallucinatory  disorder strikes.  This  illness or  disorder  comes  to  the  student suddenly  and  periodically.  Each  time  this  hallucination  disease strikes the student causes the student to have to take school  holidays  for  a  few  days  due  to  pain  or  depression"

Awang, 2022, p.42

However,

  • these four students do not represent any wider population;
  • there is no information about the specific nature, frequency, intensity, etcetera, of the hallucinations or diagnoses in these individuals;
  • there was no statistical test of significance of changes; and
  • there was no control condition to see if performance dips were experienced by others not experiencing hallucinations at the same time.

Once they had recovered from the hallucinations (and it is not clear on what basis that judgement was made) their scores improved.

The author would like us to believe that the relief from the hallucinations was due to the intervention, but this seems to be (quite literally) an act of faith 3, as no actual research evidence is offered to show that the soul purification module actually had any effect. It is, of course, possible the module did have an effect – whether for the conjectured reasons or others, such as simply offering troubled children some extra study time and special attention in a calm and safe environment, or an expectancy effect if the students were told by trusted authority figures that the intervention would lead to the purification of their hearts and the healing of their hallucinatory disorder – but the study, as reported, offers no strong grounds to assume that it did.

An irresponsible journal

As hallucinations can be symptoms of organic disease – for example, a tumour growing in the brain, or a disruption of the blood supply to the brain – there is a major question of whether treating the condition by religious instruction is ethically sound. Yet, if the module was only a complement to proper medical attention, a reader may reasonably suspect that any improvement in the condition (and consequent increased engagement in academic work) was entirely unrelated to the module being evaluated.

Indeed, a published research study that claims that soul purification is a suitable treatment for medical conditions presenting with hallucinations is potentially dangerous as it could lead to serious organic disease going untreated. If Awang's recommendations were widely taken up in Malaysia such that students with serious organic conditions were only treated for their hallucinations by soul purification rather than with medication or by surgery it would likely lead to preventable deaths. For a research journal to publish a paper with such a conclusion, where any qualified reviewer or editor could easily see the conclusion is not warranted, is irresponsible.

As the journal website points out,

"The process of reviewing is considered critical to establishing a reliable body of research and knowledge. The review process aims to make authors meet the standards of their discipline, and of science in general."

https://www.ej-edu.org/index.php/ejedu/about

So, why did the European Journal of Education and Pedagogy not subject this submission to meaningful review to help the author of this study meet the standards of the discipline, and of science in general?


Work cited:

Notes:

1 In mature fields in the natural sciences there are recognised traditions ('paradigms', 'disciplinary matrices') in any active field at any time. In general (and of course, there will be exceptions):

  • at any historical time, there is a common theoretical perspective underpinning work in a research programme, aligned with specific ontological and epistemological commitments;
  • at any historical time, there is a strong alignment between the active theories in a research programme and the acceptable instrumentation, methodology and analytical conventions.

Put more succinctly, in a mature research field there is generally broad agreement on how a phenomenon is to be understood, how to go about investigating it, and how to interpret data as research evidence.

This is generally not the case in educational research – due, at least in part, to the complexity, and so the multi-layered nature, of the phenomena studied (Taber, 2014a): phenomena such as classroom teaching. So, in reviewing educational papers, it is sometimes necessary to find different experts to look at the theoretical and the methodological aspects of the same submission.


2 The paper is very strange in that the introductory sections and the conclusions and implications sections have a very broad scope, but the actual research results are restricted to a very limited focus: analysis of school test scores and grades.

It is as if (and could well be that) a dissertation with a number of evidential strands has been reduced to a paper drawing upon only one aspect of the research evidence, but with material from other sections of the dissertation left unchanged from the original broader study.


3 Readers are told that

"All  these  acts depend on the sincerity of the medical researcher or fortune-teller seeking the help of Allah S.W.T to ensure that these methods and means are successful. All success is obtained by the permission of Allah alone"

Awang, 2022, p.43


Acute abstracts correcting Copernicus

Setting the history of science right


Keith S. Taber


I recently read a book of essays by Edward Rosen (1995), described by his publisher as "the editor and translator of Copernicus' complete works" and "the leading authority on this most celebrated of Renaissance scientists". Copernicus is indeed, rightly, highly celebrated (for reasons I summarise below *).

The book was edited by Rosen's collaborator, Erna Hilfstein 1, and although the book was an anthology of reprinted journal articles, none of the chapters (articles) had abstracts. This reflects different disciplinary norms. In the natural and social sciences most journals require abstracts – and some even offer a menu of what should be included – but abstracts are not always expected in humanities disciplines.

Read about the abstract in academic articles

A collection of published papers from various journals – all lacking abstracts

It is not unusual for an academic book to be a compilation of published articles – especially when anthologising a single scholar's work. I was a little surprised to find the different chapters in the same book having different formats and typefaces – it had been decided to reproduce the articles as they had originally appeared in a range of journals (perhaps for authenticity – or perhaps to avoid the costs of new typesetting?)

But it was the absence of article abstracts that felt most odd. The potential reader is given a title, but otherwise little idea of the scope of an article before reading. Perhaps it was my awareness of this 'omission' that led me to think that for a number of the chapters it would be possible to offer a very minimal abstract (an acute abstract?) that would do the job! Certainly, for some of these chapters, I thought a sentence each might do.

That is not to dismiss the scholarship that has gone into developing the arguments, but Rosen often wrote on a very specific historical point, set out pertinent ideas from previous scholarship, and then argued for a clear position contrary to some earlier scholars.

So, here are my suggestions for 'acute' abstracts:

Six summary encapsulations

Chapter 6: on the priest question

Abstract:

Copernicus has often been described as a priest, but Copernicus was never ordained a priest.

Copernicus was a canon in the Roman Catholic church, but this made him an administrator (and he also acted as physician), but he never became a monk or a priest.


Chapter 7: on the notary question

Abstract:

Copernicus has been described as a 'happy notary' but Copernicus was not a notary.

Although Copernicus had various roles as an administrator, even as something of a diplomat, he never took on the role of a legal notary.


Chapter 8: on the disdain question

Abstract:

Copernicus is sometimes said to have had a dismissive attitude to the common people, but there is no evidence that this was so.

A comment of Copernicus on not being concerned with the views of certain philosophers seems to have been misinterpreted.


Chapter 11: on the axioms question

Abstract:

It has been claimed that Copernicus misused the term axioms in his work, but his use was perfectly in line with authorities.

Today axioms are usually expected to be the self-evident starting points for developing a deductive argument, but Aristotle's definition of axioms did not require them to seem self-evident.


Chapter 16: on the papal question

Abstract:

It has been claimed that Copernicus' 'Revolutions' was approved by the pope before publication, but the manuscript was never shown to the pope.

This seems to be a confusion regarding an anecdote concerning a completely different scholar.


Chapter 17: on the Calvin question

Abstract:

It has been suggested that Calvin was highly critical of Copernicus, but it seems unlikely Calvin had ever heard of him.

While Calvin's writing strongly suggest he was committed to a stationary earth and a sun that moved around the earth, there is no evidence he had specifically come across Copernicus.


A manifold chapter

Having noticed how so many of Rosen's articles took one claim or historically contentious idea and developed it in the light of various sources to come to a position, I was a little surprised when I reached Chapter 20, 'Galileo's misstatements about Copernicus', to find that Rosen was dealing with 5 distinct (if related) points at once – several of which he had elsewhere made the unitary focus of an article.

Rather than write my own abstract, I could here suggest a couplet of sentences from the text might have done the job,

"According to Galileo, (1) Copernicus was a priest; (2) he was called to Rome; (3) he wrote the Revolutions by order of the pope; (4) his book was never adversely criticised; (5) it was the basis of the Gregorian calendar. Actually, Copernicus was not a priest; he was not called to Rome; he did not write the Revolutions by order of the pope; the book received much adverse criticism, particularly on the ground that it contradicted the Bible; it was not the basis of the Gregorian calendar."

Rosen, 1958/1995, pp.203-204

I noticed that this was the earliest of Rosen's writings that had been included in the compilation – perhaps he had decided to dispense his ideas more sparingly after this paper?

Actually, there's a lot to be said for abstracts that pithily précis the key point of an article, a kind of tag-line perhaps, acting for a reader as an aide-mémoire (useful at least for readers like me who commonly stare at rows of books thinking 'I read something interesting about this, somewhere here…'). I have also read a lot of abstracts in research journals that would benefit from their own (further) abstracts, so perhaps such acute abstraction might catch on?


* Appendix: A scientific giant

Copernicus is indeed 'celebrated', being seen as one of the scientific greats who helped establish modern ways of thinking about the world – part of what is often perceived as a chain that goes Copernicus – Kepler – Galileo – Newton.

Copernicus is most famous for his book known in English as 'On the Revolutions of the Heavenly Spheres', or just 'Revolutions'. The key point of note is that at a time when it was almost universally agreed that the earth was stationary at the centre of 'the world', i.e., the cosmos, and that everything else revolved around the earth, Copernicus proposed a system that put the sun at the centre and had the earth moving around the sun.


The geocentric model of the cosmos was widely accepted for many centuries
(Image by OpenClipart-Vectors from Pixabay)

From our modern worldview, it is difficult to imagine just how, well yes, revolutionary, that move was (even if Copernicus only moved the centre of the universe from earth to the sun, so our solar system still had a very special status in his system). This is clear from how long it took the new view to become the accepted position, and the opposition it attracted. Newton later realised that strictly the centre of revolution was the centre of mass of the solar system not the sun per se. 2

One problem was that there was no absolute observational test to distinguish between the two models, and there were well-established reasons to accept the conventional geocentric model (e.g., we do not feel the earth move, or a great wind as it spins beneath its atmosphere; as the most dense element, earth would naturally fall to the centre of the world, beneath water, air, fire, and the ether that filled the heavens {although the Earth was not considered a pure form of the element earth, it was earthy, considered mostly earth in composition 3}; and scriptures, if given a literal interpretation, seemed to suggest the earth was fixed and the sun moved.)

Copernicus' model certainly had some advantages. If the earth is still, the distant sphere with all the fixed stars must be moving about it at an incredible rate of rotation; but if the earth spins on its axis, this stellar motion is just an illusion. 4 Moreover, if everything revolves around the earth, some of the planets behave very oddly, first moving one way, then slowing down to reverse direction ('retrograde' motion), before again heading off in their original sense. But, if the planets are orbiting the sun along with the earth (now itself seen as a planet), but at different rates, then this motion can be explained as an optical illusion – "these phenomena…happen on account of the single motion of the earth" – the planets only seem to loop because of the motion of the earth.
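As an aside (not, of course, something Copernicus himself could have calculated this way), the geometry of that illusion is easy to check numerically. The short sketch below is purely illustrative: it assumes two circular, coplanar orbits traversed at constant speed, with rough Mars-like values for the outer planet, and simply reports when the planet's apparent direction, as seen from the moving earth, reverses against the fixed stars.

import math

# Purely illustrative, rough Mars-like values (circular, coplanar orbits assumed):
# (orbital radius in au, orbital period in years)
EARTH = (1.0, 1.0)
OUTER = (1.52, 1.88)


def position(radius, period, t):
    """(x, y) position in au of a body on a circular orbit at time t (in years)."""
    angle = 2 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)


def apparent_longitude(t):
    """Direction (degrees) of the outer planet as seen from the earth, against the fixed stars."""
    ex, ey = position(*EARTH, t)
    px, py = position(*OUTER, t)
    return math.degrees(math.atan2(py - ey, px - ex))


# Step through 2.5 simulated years and report whenever the apparent motion changes sense.
dt = 0.005
previous = apparent_longitude(0.0)
sense = None
t = dt
while t < 2.5:
    current = apparent_longitude(t)
    change = (current - previous + 180.0) % 360.0 - 180.0  # unwrap the jump at +/-180 degrees
    new_sense = "prograde" if change > 0 else "retrograde"
    if new_sense != sense:
        print(f"t = {t:4.2f} yr: apparent motion is {new_sense}")
        sense = new_sense
    previous = current
    t += dt

Run over a couple of simulated years, this reports the apparent motion briefly turning retrograde around the times when the faster-moving earth overtakes and passes the outer planet – just the looping behaviour that geocentric schemes needed extra machinery to reproduce.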

Despite this clear improvement, Copernicus' model did not entirely simplify the system, as Copernicus retained the consensus view that the planets moved in circles: the planets' "motions are circular or compounded of several circles,…since only the circle can bring back the past". With such an assumption the observational data can only be made to fit (either to the heliocentric model or its geocentric alternative) by having a complex series of circles rather than one circle per planet. Today when we call the night sky 'the heavens', we are using the term without implying any supernatural association – but the space beyond the moon was once literally considered as heaven. In heaven everything is perfect, and the perfect shape is a circle.

It was only when Kepler later struggled to match the best observational data available (from his employer Tycho Brahe's observatory) to the Copernican model that, after a number of false starts, he decided to see if ellipses would fit – and he discovered how the system could be described in terms of planets each following a single elliptical path that repeated almost indefinitely.

A well-known story is how by the time Copernicus had finished his work and decided to get it printed he was near the end of his life, and he was supposedly only shown a printed copy brought from the printer as he lay on his deathbed (in 1543). In the printed copy of the book an anonymous foreword/preface 5 had been inserted to the effect that readers should consider the model proposed as a useful calculating system for following the paths of heavenly bodies, and not as a proposal for how the world actually was.

Despite this, the book was later added to the Roman Catholic Church's index of banned works awaiting correction. This only occurred much later – in 1616, after Galileo taught that Copernicus' system did describe the actual 'world system'. But, in the text itself Copernicus is clear that he is suggesting a model for how the world is – "to the best of my ability I have discussed the earth's revolution around the sun" – not just a scheme for calculating purposes. Indeed, he goes so far as to suggest that where he uses language implying the sun moves this is only to be taken as adopting the everyday way of talking reflecting appearances (we say 'the sun rises'). For Copernicus, it was the earth, not the sun, that moved.


Sources cited:
  • Copernicus, N. (1543/1978). On the Revolutions of the Heavenly Spheres (E. Rosen, Trans.). Prometheus Books.
  • Rosen, E. (1995). Copernicus and his successors (E. Hilfstein, Ed.). The Hambledon Press.

Notes

1 I discovered from some 'internet research' (i.e., Googling) that Erna was a holocaust survivor, "[husband] Max and Erna, along with their families, were sent first to Płaszów, a slave-labor camp, and then on a death march to Auschwitz".

An article in the Jewish Standard reports how Erna's daughter undertook a charity bike ride "from Auschwitz-Birkenau, the Nazi-run death camp in the verdant Polish countryside, to the" Jewish Community Centre of Krakow (the town where her parents lived before being deported by the Nazis).


2 Newton also wrote as if the solar system was the centre of the cosmos, but of course the solar system is itself moving around the galaxy, which is moving away from most other galaxies…


3 These are not the chemical elements recognised today, of course, but were considered the elements for many centuries. Even today, people sometimes refer to the air and water as 'the elements.'


4 Traditionally, the 'heavenly spheres' were not the bodies such as planets, moons and stars but a set of eight conjectured concentric crystalline spheres that supposedly rotated around the earth, carrying (from the outermost inwards) the distant stars, Saturn, Jupiter, Mars, the Sun, Venus, Mercury, and the Moon.


5 A preface is written by the author of a book. A foreword is written by someone else for the author (perhaps saying how wonderful the author and the work are). Technically then this was a foreword, BUT because it was not signed, it would appear to be a preface – something written by Copernicus himself. Perhaps the foreword did actually protect the book from being banned as, until Galileo made it a matter of very public debate, it is likely only other astronomers had actually scrutinised the long and very technical text in any detail!