Methodological and procedural flaws in published study

A letter to the editor of the Journal of Chemical Education

The authors draw a conclusion which is contrary to the results of their data analysis and so is invalid and misleading

I have copied below the text of a letter I wrote to the editor of the Journal of Chemical Education, to express my concern about the report of a study published in that journal. I was invited to formally submit the letter for consideration for publication. I did. Following peer review it was rejected.

Often when I see apparent problems in published research, I discuss them here. Usually, the journals concerned are predatory, and do not seem to take peer review seriously. That does not apply here. The Journal of Chemical Education is a long-established, well-respected periodical published by a national learned scientific society: the American Chemical Society. Serious scientific journals often do publish comments from readers about published articles and even exchanges between correspondents and the original authors of the work commented on. I therefore thought it was more appropriate to express my concerns directly to the journal. 𝛂 On this occasion, after peer review, the editor decided my letter was not suitable for publication. 𝛃

I am aware of the irony – I am complaining about an article which passed peer review, in a posting which publishes a letter that was itself submitted to, and rejected after, peer review. Readers should bear that in mind. The editor will have carefully considered the submission and the referee recommendations and reports, and decided to decline publication based on journal policy and the evaluation of my submission.

However, having read the peer reviewers' comments (which were largely positive about the submission and tended to agree with my critique 𝜸), I saw no reason to change my mind. If such work is allowed to stand in the literature without comment, it provides a questionable example for other researchers, and, as the abstracts and conclusions from research papers are often considered in isolation (so, here, without being aware that the conclusions contradicted the results), it distorts the research literature.

To my reading, the published study sets aside accepted scientific standards and values – though I very much suspect inadvertently. Perhaps the authors' enthusiasm for their teaching innovation affected their judgement and dulled their critical faculties. We are all prone to that: but one would normally expect such a major problem to have been spotted in peer review, allowing the authors the opportunity to put this right before publication.

Read about falsifying research conclusions


Methodological and procedural flaws in published study

Abstract

A recent study reported in the journal is presented as an experimental test of a teaching innovation. Yet the research design does not meet the conditions for an experiment as there is insufficient control of variables and no random assignment to conditions. The study design used does not allow a comparison of student scores in the 'experimental' and 'control' conditions to provide a valid test of the innovation. Moreover, the authors draw a conclusion which is contrary to the results of their data analysis and so is invalid and misleading. While the authors may well feel justified in putting aside the outcome of their statistical analysis, this goes against good scientific norms and practice.

Dear Editor

I am writing regarding a recent article published in J.Chem.Ed. 1, as I feel the reporting of this study, as published, is contrary to good scientific practice. The article, 'Implementation of the Student-Centered Team-Based Learning Teaching Method in a Medicinal Chemistry Curriculum' reports an innovation in pedagogy, and as such is likely to be of wide interest to readers of the journal. I welcome both this kind of work in developing pedagogy and its reporting to inform others; however, I think the report contravenes normal scientific standards.

Although the authors do not specify the type of research methodology they use, they do present their analysis in terms of 'experimental' and 'control' groups (e.g., p.1856), so it is reasonable to consider they see this as a kind of experimental research. There are many serious challenges when applying experimental method to social research, and it is not always feasible to address all such challenges in educational research designs 2 – but perhaps any report of educational experimental research should acknowledge relevant limitations.

A true experiment requires units of analysis (e.g., students) to be assigned to conditions randomly, as this can avoid (or, strictly, reduce the likelihood of) systematic differences between groups. Here the comparison is across different cohorts. These may be largely similar, but that cannot just be assumed. (Strictly, the comparison group should not be labelled as a 'control' group.2 ) There is clearly a risk of conflating variables.

  • Perhaps admission standards are changing over time?
  • Perhaps the teaching team has been acquiring teaching experience and expertise over time regardless of the innovation?

Moreover, if I have correctly interpreted the information on p.1858 about how student course scores after the introduction of the innovation derived in part from the novel activities in the new approach, then there is no reason to assume that the methodology of assigning scores is equivalent to that used in the 'control' (comparison) condition. The authors seem simply to assume the change in scoring methodology will not of itself change the score profile. Without evidence that assessment is equivalent across cohorts, this is an unsupportable assumption.

As it is not possible to 'blind' teachers and students to conditions there is a very real risk of expectancy effects which have been shown to often operate when researchers are positive about an innovation – when introducing the investigated innovation, teachers

  • may have a new burst of enthusiasm,
  • perhaps focus more than usual on this aspect of their work,
  • be more sensitive to students' responses to teaching, and so forth.

(None of this needs to be deliberate to potentially influence outcomes.) Although (indeed, perhaps because) there is often little that can be done in a teaching situation to address these challenges to experimental designs, it seems appropriate for suitable caveats to be included in a published report. I would have expected to have seen such caveats here.

However, a specific point that I feel must be challenged is in the presentation of results on p.1860. When designing an experiment, it is important to specify before collecting data how one will know what to conclude from the results. The adoption of inferential statistics is surely a commitment to accepting the outcomes of the analysis undertaken. Li and colleagues tell readers that "We used a t test to test whether the SCTBL method can create any significant difference in grades among control groups and the experimental group" and that "there is no significant difference in average score". This is despite the new approach requiring an "increased number of study tasks, and longer preclass preview time" (pp.1860-1).

I would not suggest this is necessarily a good enough reason for Li and colleagues to give up on their innovation, as they have lived experience of how it is working, and that may well offer good grounds for continuing to implement, refine, and evaluate it. As the authors themselves note, evaluation "should not only consider scores" (p.1858).

However, from a scientific point of view, this is a negative result. That certainly should not exclude publication (it is recognised that there is a bias against publishing negative results which distorts the literature in many fields) but it suggests, at the very least, that more work is needed before a positive conclusion can be drawn.

Therefore, I feel it is scientifically invalid for the authors to argue that as "the average score showed a constant [i.e., non-significant] upward trend, and a steady [i.e., non-significant] increase was found" they can claim their teaching "method brought about improvement in the class average, which provides evidence for its effectiveness in medicinal chemistry". Figure 4 reiterates this: a superficially impressive graphic, even if it omits the 2018 data, actually shows just how little scores changed once it is noticed that the x-axis spans only 79.4-80.4 (%, presumably). The size of the variation across four cohorts (<1%, "an obvious improvement trend"?) is not only found to be non-significant, but can also be set against the fact that 25% of student scores apparently derived from different types of assessment in the different conditions. 3

To reiterate, this is an interesting study, reporting valuable work. There might be very good reasons to continue the new pedagogic approach even if it does not increase student scores. However, I would argue that it is simply scientifically inadmissible to design an experiment where data will be analysed by statistical tests, and then to offer a conclusion contrary to the results of those tests. A reader who skipped to the end of the paper would find "To conclude, our results suggest that the SCTBL method is an effective way to improve teaching quality and student achievement" (p.1861) but that is to put aside the results of the analysis undertaken.


Keith S. Taber

Emeritus Professor of Science Education, University of Cambridge

References

1 Li, W., Ouyang, Y., Xu, J., & Zhang, P. (2022). Implementation of the Student-Centered Team-Based Learning Teaching Method in a Medicinal Chemistry Curriculum. Journal of Chemical Education, 99(5), 1855-1862. https://doi.org/10.1021/acs.jchemed.1c00978

2 Taber, K. S. (2019). Experimental research into teaching innovations: responding to methodological and ethical challenges. Studies in Science Education, 55(1), 69-119. https://doi.org/10.1080/03057267.2019.1658058

3 I felt there was some ambiguity regarding what figures 4a and 4b actually represent. The labels suggest they refer to "Assessment levels of pharmaceutical engineering classes [sic] in 2017-2020" and "Average scores of the medicinal chemistry course in the control group and the experimental group" (which might, by inspection, suggest that achievement on the medicinal chemistry course is falling behind shifts across the wider programme), but the references in the main text suggest that both figures refer only to the medicinal chemistry course, not the wider pharmaceutical engineering programme. Similarly, although the label for (b) refers to 'average scores' for the course, the text suggests the statistical tests were only applied to 'exam scores' (p.1858) which would only amount to 60% of the marks comprising the course scores (at least in 2018-2020; the information on how course scores were calculated for the 2017 cohort does not seem to be provided but clearly could not follow the methodology reported for the 2018-2020 cohorts). So, given that (a) and (b) do not seem consistent, it may be that the 'average scores' in (b) refers only to examination scores and not overall course scores. If so, that would at least suggest the general assessment methodology was comparable, as long as the setting and marking of examinations are equivalent across different years. However, even then, a reader would take a lot of persuasion that examination papers and marking are so consistent over time that changes of a third or half a percentage point between cohorts exceeds likely measurement error.


Read: Falsifying research conclusions. You do not need to falsify your results if you are happy to draw conclusions contrary to the outcome of your data analysis.


Notes:

𝛂 This is the approach I have taken previously. For example, a couple of years ago a paper was published in the Royal Society of Chemistry's educational research journal, Chemistry Education Research and Practice, which to my reading had similar issues, including claiming "that an educational innovation was effective despite outcomes not reaching statistical significance" (Taber, 2020).

Taber, K. S. (2020). Comment on "Increasing chemistry students' knowledge, confidence, and conceptual understanding of pH using a collaborative computer pH simulation" by S. W. Watson, A. V. Dubrovskiy and M. L. Peters, Chem. Educ. Res. Pract., 2020, 21, 528. Chemistry Education Research and Practice. doi:10.1039/D0RP00131G


𝛃 I wrote directly to the editor, Prof. Tom Holme on 12th July 2022. I received a reply the next day, inviting me to submit my letter through the journal's manuscript submission system. I did this on the 14th.

I received the decision letter on 15th September. (The "manuscript is not suitable for publication in the Journal of Chemical Education in its present form.") The editor offered to consider a resubmission of "a thoroughly rewritten manuscript, with substantial modification, incorporating the reviewers' points and including any additional data they recommended". I decided that, although I am sure the letter could have been improved in some senses, any new manuscript sufficiently different to count as a "thoroughly rewritten manuscript, with substantial modification" would not so clearly make the important points I felt needed to be made.


𝜸 There were four reviewers. The editor informed me that the initial reviews led to a 'split' perspective, so a fourth referee was invited.

  • Referee 1 recommended that the letter was published as submitted.
  • Referee 2 recommended that the letter was published as submitted.
  • Referee 3 recommended major revisions should be undertaken.
  • Referee 4 recommended rejection.

Read more about peer review and editorial decisions

Quasi-experiment or crazy experiment?

Trustworthy research findings are conditional on getting a lot of things right


Keith S. Taber


A good many experimental educational research studies that compare treatments across two classes or two schools are subject to potentially conflating variables that invalidate study findings and make any consequent conclusions and recommendations untrustworthy.

I was looking for research into the effectiveness of P-O-E (predict-observe-explain) pedagogy, a teaching technique that is believed to help challenge learners' alternative conceptions and support conceptual change.

Read about the predict-observe-explain approach



One of the papers I came across reported identifying, and then using P-O-E to respond to, students' alternative conceptions. The authors reported that

The pre-test revealed a number of misconceptions held by learners in both groups: learners believed that salts 'disappear' when dissolved in water (37% of the responses in the 80% from the pre-test) and that salt 'melts' when dissolved in water (27% of the responses in the 80% from the pre-test).

Kibirige, Osodo & Tlala, 2014, p.302

The references to "in the 80%" did not seem to be explained anywhere. Perhaps only 80% of students responded to the open-ended questions included as part of the assessment instrument (discussed below), so the authors gave the incidence as a proportion of those responding? Ideally, research reports are explicit about such matters, avoiding the need for readers to speculate.

The authors concluded from their research that

"This study revealed that the use of POE strategy has a positive effect on learners' misconceptions about dissolved salts. As a result of this strategy, learners were able to overcome their initial misconceptions and improved on their performance….The implication of these results is that science educators, curriculum developers, and textbook writers should work together to include elements of POE in the curriculum as a model for conceptual change in teaching science in schools."

Kibirige, Osodo & Tlala, 2014, p.305

This seemed pretty positive. As P-O-E is an approach which is consistent with 'constructivist' thinking that recognises the importance of engaging with learners' existing thinking I am probably biased towards accepting such conclusions. I would expect techniques such as P-O-E, when applied carefully in suitable curriculum contexts, to be effective.

Read about constructivist pedagogy

Yet I also have a background in teaching research methods and in acting as a journal editor and reviewer – so I am not going to trust the conclusion of a research study without having a look at the research design.


All research findings are subject to caveats and provisos: good practice in research writing is for the authors to discuss them – but often they are left unmentioned for readers to spot. (Read about drawing conclusions from studies)


Kibirige and colleagues describe their study as a quasi-experiment.

Experimental research into teaching approaches

If one wants to see if a teaching approach is effective, then it seems obvious that one needs to do an experiment. If we can experimentally compare different teaching approaches we can find out which are more effective.

An experiment allows us to make a fair comparison by 'control of variables'.

Read about experimental research

Put very simply, the approach might be:

  • Identify a representative sample of an identified population
  • Randomly assign learners in the sample to either an experimental condition or a control condition
  • Set up two conditions that are alike in all relevant ways, apart from the independent variable of interest
  • After the treatments, apply a valid instrument to measure learning outcomes
  • Use inferential statistics to see if any difference in outcomes across the two conditions reaches statistical significance
  • If it does, conclude that
    • the effect is likely to be due to the difference in treatments
    • and will apply, on average, to the population that has been sampled

Now, I expect anyone reading this who has worked in schools, and certainly anyone with experience in social research (such as research into teaching and learning), will immediately recognise that in practice it is very difficult to actually set up an experiment into teaching which fits this description.

Nearly always (if indeed not always!) experiments to test teaching approaches fall short of this ideal model to some extent. This does not mean such studies cannot be useful – especially where there are many of them, with compensatory strengths and weaknesses, offering similar findings (Taber, 2019a) – but one needs to ask how closely published studies fit the ideal of a good experiment. Work in high quality journals is often expected to offer readers guidance on this, but readers should check for themselves to see if they find a study convincing.

So, how convincing do I find this study by Kibirige and colleagues?

The sample and the population

If one wishes a study to be informative about a population (say, chemistry teachers in the UK; or 11-12 year-olds in state schools in Western Australia; or pharmacy undergraduates in the EU; or whatever) then it is important to either include the full population in the study (which is usually only feasible when the population is a very limited one, such as graduate students in a single university department) or to ensure the sample is representative.

Read about populations of interest in research

Read about sampling a population

Kibirige and colleagues refer to their participants as a sample

"The sample consisted of 93 Grade 10 Physical Sciences learners from two neighbouring schools (coded as A and B) in a rural setting in Moutse West circuit in Limpopo Province, South Africa. The ages of the learners ranged from 16 to 20 years…The learners were purposively sampled."

Kibirige, Osodo & Tlala, 2014, p.302

Purposive sampling means selecting participants according to some specific criteria, rather than sampling a population randomly. It is not entirely clear precisely what the authors mean by this here – which characteristics they selected for. Also, there is no statement of the population being sampled – so the reader is left to guess what population the sample is a sample of. Perhaps "Grade 10 Physical Sciences" students – but, if so, universally, or in South Africa, or just within Limpopo Province, or indeed just the Moutse West circuit? Strictly the notion of a sample is meaningless without reference to the population being sampled.

A quasi-experiment

A key notion in experimental research is the unit of analysis

"An experiment may, for example, be comparing outcomes between different learners, different classes, different year groups, or different schools…It is important at the outset of an experimental study to clarify what the unit of analysis is, and this should be explicit in research reports so that readers are aware what is being compared."

Taber, 2019a, p.72

In a true experiment the 'units of analysis' (which in different studies may be learners, teachers, classes, schools, exam. papers, lessons, textbook chapters, etc.) are randomly assigned to conditions. Random assignment allows inferential statistics to be used to directly compare measures made in the different conditions to determine whether outcomes are statistically significant. Random assignment is a way of making systematic differences between groups unlikely (and so allows the use of inferential statistics to draw meaningful conclusions).

Random assignment is sometimes possible in educational research, but often researchers are only able to work with existing groupings.

Kibirige, Osodo & Tlala describe their approach as using a quasi-experimental design, as they could not assign learners to groups, but only compare between learners in two schools. This is important, as it means that the 'units of analysis' are not the individual learners, but the groups: in this study one group of students in one school (n=1) is being compared with another group of students in a different school (n=1).

The authors do not make it clear whether they assigned the schools to the two teaching conditions randomly – or whether some other criterion was used. For example, if they chose school A to be the experimental school because they knew the chemistry teacher in the school was highly skilled, always looking to improve her teaching, and open to new approaches; whereas the chemistry teacher in school B had a reputation for wishing to avoid doing more than was needed to be judged competent – that would immediately invalidate the study.

Compensating for not using random assignment

When it is not possible to randomly assign learners to treatments, researchers can (a) use statistics that take into account measurements on each group made before, as well as after, the treatments (that is, a pre-test – post-test design); and/or (b) offer evidence to persuade readers that the groups are equivalent before the experiment. Kibirige, Osodo and Tlala seek to use both of these steps.

Do the groups start as equivalent?

Kibirige, Osodo and Tlala present evidence from the pre-test to suggest that the learners in the two groups are starting at about the same level. In practice, pre-tests seldom lead to identical outcomes for different groups. It is therefore common to use inferential statistics to test whether there is a statistically significant difference between pre-test scores in the groups. That could be reasonable, if there were an agreed criterion for deciding just how close scores should be to be seen as equivalent. In practice, many researchers only check that the differences do not reach statistical significance at the level of probability <0.05: that is, they look to see if there are strong differences, and, if not, declare this is (or implicitly treat this as) equivalence!

This is clearly an inadequate measure of equivalence as it will only filter out cases where there is a difference so large it is found to be very unlikely to be a chance effect.
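There are, in fact, principled criteria for claiming equivalence. One common approach in other fields is 'two one-sided tests' (TOST): the researcher specifies in advance a margin within which groups will count as equivalent, and must then show the observed difference is significantly inside that margin. A minimal sketch – my own illustration, with made-up numbers and a normal approximation, not anything from the study under discussion:

```python
def tost_equivalent(mean_a, mean_b, se_diff, margin):
    """Two one-sided tests (TOST) for equivalence, normal approximation.

    Equivalence is declared only if the difference between group means is
    significantly above -margin AND significantly below +margin (each a
    one-sided test at the 5% level).
    """
    z_crit = 1.645  # one-sided 5% critical value of the normal distribution
    diff = mean_a - mean_b
    z_lower = (diff + margin) / se_diff  # test against H0: diff <= -margin
    z_upper = (diff - margin) / se_diff  # test against H0: diff >= +margin
    return z_lower > z_crit and z_upper < -z_crit

# Two hypothetical pre-test means 0.8 marks apart, equivalence margin of 2 marks:
print(tost_equivalent(20.1, 20.9, se_diff=2.5, margin=2.0))  # False: data too imprecise to claim equivalence
print(tost_equivalent(20.1, 20.9, se_diff=0.3, margin=2.0))  # True: precise data, equivalence shown
```

The contrast with the logic criticised above is the key point: with a standard error of 2.5 the same 0.8-mark gap would also fail to reach a significant *difference* – yet, as the first line shows, that is nowhere near demonstrating *equivalence*.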


If we want to make sure groups start as 'equivalent', we cannot simply look to exclude the most blatant differences. (Original image by mcmurryjulie from Pixabay)

See 'Testing for initial equivalence'


We can see this in Kibirige and colleagues' study, where the researchers list mean scores and standard deviations for each question on the pre-test. They report that:

"The results (Table 1) reveal that there was no significant difference between the pre-test achievement scores of the CG [control group] and EG [experimental group] for questions (Appendix 2). The p value for these questions was greater than 0.05."

Kibirige, Osodo & Tlala, 2014, p.302

Now this paper is published "licensed under Creative Commons Attribution 3.0 License" which means I am free to copy from it here.



According to the results table, several of the items (1.2, 1.4, 2.6) did lead to statistically significantly different response patterns in the two groups.

Most of these questions (1.1-1.4; 2.1-2.8; discussed below) are objective questions, so although no marking scheme was included in the paper, it seems they were marked as correct or incorrect.

So, let's take as an example question 2.5 where readers are told that there was no statistically significant difference in the responses of the two groups. The mean score in the control group was 0.41, and in the experimental group was 0.27. Now, the paper reports that:

"Forty nine (49) learners (31 males and 18 females) were from school A and acted as the experimental group (EG) whereas the control group (CG) consisted of 44 learners (18 males and 26 females) from school B."

Kibirige, Osodo & Tlala, 2014, p.302

So, according to my maths,


                         Correct responses    Incorrect responses
School A (49 students)   (0.27 ➾) 13          36
School B (44 students)   (0.41 ➾) 18          26

"The achievement of the EG and CG from pre-test results were not significantly different which suggest that the two groups had similar understanding of concepts" (p.305).

Pre-test results for an item with no statistically significant difference between groups (offered as evidence of 'similar' levels of initial understanding in the two groups)

While, technically, there may have been no statistically significant difference here, I think inspection is sufficient to suggest this does not mean the two groups were initially equivalent in terms of performance on this item.
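Readers can check the arithmetic for themselves. Taking the counts reconstructed above from the reported means, a two-proportion pooled z-test (my own calculation, sketched in Python – a more appropriate test for right/wrong data than the authors' t-test, though closely related to a chi-squared test on the 2×2 table) does indeed come out above the 0.05 threshold, even though the groups differ by some 14 percentage points on this item:

```python
import math

def two_proportion_z(correct_a, n_a, correct_b, n_b):
    """Two-sided pooled z-test for a difference between two proportions."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail area
    return z, p_value

# Counts reconstructed from the reported means for question 2.5
eg_correct = round(0.27 * 49)  # 13 of 49 in school A (EG)
cg_correct = round(0.41 * 44)  # 18 of 44 in school B (CG)
z, p = two_proportion_z(eg_correct, 49, cg_correct, 44)
print(f"EG {eg_correct}/49 vs CG {cg_correct}/44: z = {z:.2f}, p = {p:.2f}")
```

So 'no significant difference' here simply reflects the limited power of a test on 93 students: a gap of fourteen percentage points goes unflagged, which falls well short of showing the groups were 'equivalent'.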


Data that is normally distributed falls on a 'bell-shaped' curve

(Image by mcmurryjulie from Pixabay)


Inspection of this graphic also highlights something else. Student's t-test (used by the authors to produce the results in their table 1) is a parametric test. That means it can only be used when the data fit certain criteria: the data sample should be randomly selected (not true here) and normally distributed. A normal distribution means data is distributed in a bell-shaped Gaussian curve (as in the image in the blue circle above). If Kibirige, Osodo & Tlala were applying the t-test to data distributed as in my graphic above (a binary distribution, where answers were either right or wrong), then the test was invalid.

So, to summarise, the authors suggest there "was no significant difference between the pre-test achievement scores of the CG and EG for questions", although sometimes there was (according to their table); and they used the wrong test to check for this; and in any case lack of statistical significance is not a sufficient test for equivalence.

I should note that the journal does claim to use peer review to evaluate submissions to see if they are ready for publication!

Comparing learning gains between the two groups

At one level equivalence might not be so important, as the authors used an ANCOVA (Analysis of Covariance) test, which tests for a difference at post-test taking the pre-test into account. Yet this test also has assumptions that need to be tested for and met – but here these seem simply to have been assumed.

However, to return to an even more substantive point I made earlier: as the learners were not randomly assigned to the two different conditions/treatments, what should be compared are the two school-based groups (i.e., the unit of analysis should be the school group), but that (i.e., a sample of 1 class, rather than 40+ learners, in each condition) would not facilitate using inferential statistics to make a comparison. So, although the authors conclude

"that the achievement of the EG [taking n=49] after treatment (mean 34.07 ± 15.12 SD) was higher than the CG [taking n=44] (mean 20.87 ± 12.31 SD). These means were significantly different"

Kibirige, Osodo & Tlala, 2014, p.303

the statistics are testing the outcomes as if 49 units independently experienced one teaching approach and 44 independently experienced another. Now, I do not claim to be a statistics expert, and I am aware that most researchers only have a limited appreciation of how and why stats. tests work. For most readers, then, a more convincing argument may be made by focussing on the control of variables.

Controlling variables in educational experiments

The ability to control variables is a key feature of laboratory science, and is critical to experimental tests. Control of variables, even identification of relevant variables, is much more challenging outside of a laboratory in social contexts – such as schools.

In the case of Kibirige, Osodo & Tlala's study, we can set out the overall experimental design as follows


Independent variable: Teaching approach – predict-observe-explain (experimental) versus lectures (comparison condition)

Dependent variable: Learning gains

Controlled variable(s): Anything other than teaching approach which might make a difference to student learning

Variables in Kibirige, Osodo & Tlala's study

The researchers set up the two teaching conditions, measure learning gains, and need to make sure that any other factors which might have an effect on learning outcomes – so-called confounding variables – are controlled, so that they are the same in both conditions.

Read about confounding variables in research

Of course, we cannot be sure what might act as a confounding variable, so in practice we may miss something which we do not recognise is having an effect. Here are some possibilities based on my own (now dimly recalled) experience of teaching in school.

The room may make a difference. Some rooms are

  • spacious,
  • airy,
  • well illuminated,
  • well equipped,
  • away from noisy distractions
  • arranged so everyone can see the front, and the teacher can easily move around the room

Some rooms have

  • comfortable seating,
  • a well positioned board,
  • good acoustics

Others, not so.

The timetable might make a difference. Anyone who has ever taught the same class of students at different times in the week might (will?) have noticed that a Tuesday morning lesson and a Friday afternoon lesson are not always equally productive.

Class size may make a difference (here 49 versus 44).

Could gender composition make a difference? Perhaps it was just me, but I seem to recall that classes of mainly female adolescents had a different nature than classes of mainly male adolescents. (And perhaps the way I experienced those classes would have been different if I had been a female teacher?) Kibirige, Osodo and Tlala report the sex of the students, but assuming that can be taken as a proxy for gender, the gender ratios were somewhat different in the two classes.


The gender make up of the classes was quite different: might that influence learning?

School differences

A potentially major conflating variable is school. In this study the researchers report that the schools were "neighbouring" and that

Having been drawn from the same geographical set up, the learners were of the same socio-cultural practices.

Kibirige, Osodo & Tlala, 2014, p.302

That clearly makes more sense than choosing two schools from different places with different demographics. But anyone who has worked in schools will know that two neighbouring schools serving much the same community can still be very different. Different ethos, different norms, and often different levels of outcome. Schools A and B may be very similar (but the reader has no way to know), but when comparing between groups in different schools it is clear that school could be a key factor in group outcome.

The teacher effect

Similar points can be made about teachers – they are all different! Does ANY teacher really believe that one can swap one teacher for another without making a difference? Kibirige, Osodo and Tlala do not tell readers anything about the teachers, but as students were taught in their own schools the default assumption must be that they were taught by their assigned class teachers.

Teachers vary in terms of

  • skill,
  • experience,
  • confidence,
  • enthusiasm,
  • subject knowledge,
  • empathy levels,
  • insight into their students,
  • rapport with classes,
  • beliefs about teaching and learning,
  • teaching style,
  • disciplinary approach
  • expectations of students

The same teacher may perform at different levels with different classes (preferring to work with different grade levels, or simply getting on, or not getting on, with particular classes). Teachers may perform unevenly across topics. Teachers engage with, and excel in, different teaching approaches to different degrees. (Even if the same teacher had taught both groups, we could not assume they were equally skilful in both teaching conditions.)

The teacher variable is likely to be a major difference between the groups.

Meta-effects

Another confounding factor is the very fact of the research itself. Students may welcome a different approach because it is novel and a change from the usual diet (or, alternatively, they may be nervous about things being done differently) – but such 'novelty' effects would disappear once the new way of doing things became established as normal. In that case, it would be an effect of the research itself and not of what is being researched.

Perhaps even more powerful are expectancy effects. If researchers expect an innovation to improve matters, then these expectations get communicated to those involved in the research and can themselves have an effect. Expectancy effects are so well demonstrated that in medical research double-blind protocols are used, so that neither the patients nor the health professionals they directly engage with in the study know who is getting which treatment.

Read about expectancy effects in research

So, we might revise the table above:


Independent variable
  Teaching approach: predict-observe-explain (experimental) versus lectures (comparison condition)

Dependent variable
  Learning gains

Potentially confounding variables
  • School effect
  • Teacher effect
  • Class size
  • Gender composition of teaching groups
  • Relative novelty of the two teaching approaches

Variables in Kibirige, Osodo & Tlala's study

Now, of course, these problems are not unique to this particular study. The only way to respond to teacher and school effects of this kind is to do large scale studies, and randomly assign a large enough number of schools and teachers to the different conditions so that it becomes very unlikely there will be systematic differences between treatment groups.

A good many experimental educational research studies that compare treatments across two classes or two schools are subject to potentially confounding variables that invalidate study findings and make any consequent conclusions and recommendations untrustworthy (Taber, 2019a). Strangely, this often does not seem to preclude publication in research journals. 1

Advice on controls in scientific investigations:

I can probably do no better than to share some advice given to both researchers, and readers of research papers, in an immunology textbook from 1910:

"I cannot impress upon you strongly enough never to operate without the necessary controls. You will thus protect yourself against grave errors and faulty diagnoses, to which even the most competent investigator may be liable if he [or she] fails to carry out adequate controls. This applies above all when you perform independent scientific investigations or seek to assess them. Work done without the controls necessary to eliminate all possible errors, even unlikely ones, permits no scientific conclusions.

I have made it a rule, and would advise you to do the same, to look at the controls listed before you read any new scientific papers… If the controls are inadequate, the value of the work will be very poor, irrespective of its substance, because none of the data, although they may be correct, are necessarily so."

Julius Citron

The comparison condition

It seems clear that in this study there is no strict 'control' of variables, and the 'control' group is better considered just a comparison group. The authors tell us that:

"the control group (CG) taught using traditional methods…

the CG used the traditional lecture method"

Kibirige, Osodo & Tlala, 2014, pp.300, 302

This is not further explained, but if this really was teaching by 'lecturing' then that is not a suitable approach for teaching school age learners.

This raises two issues.

There is a lot of evidence that a range of active learning approaches (discussion work, laboratory work, various kinds of group work) engages and motivates students more than whole lessons spent listening to a teacher. Therefore any approach which basically involves a mixture of students doing things, discussing things, engaging with manipulatives and resources as well as listening to a teacher, tends to be superior to just being lectured. Good science teaching normally involves lessons sequenced into a series of connected episodes involving different types of student activity (Taber, 2019b). Teacher presentations of the target scientific account are very important, but tend to be effective when embedded in a dialogic approach that allows students to explore their own thinking and takes into account their starting points.

So, comparing P-O-E with lectures (if they really were lectures) may not tell researchers much about P-O-E specifically, as a teaching approach. A better test would compare P-O-E with some other approach known to be engaging.

"Many published studies argue that the innovation being tested has the potential to be more effective than current standard teaching practice, and seek to demonstrate this by comparing an innovative treatment with existing practice that is not seen as especially effective. This seems logical where the likely effectiveness of the innovation being tested is genuinely uncertain, and the 'standard' provision is the only available comparison. However, often these studies are carried out in contexts where the advantages of a range of innovative approaches have already been well demonstrated, in which case it would be more informative to test the innovation that is the focus of the study against some other approach already shown to be effective."

Taber, 2019a, p.93

The second issue is more ethical than methodological. Sometimes in published studies (and I am not claiming I know this happened here, as the paper says so little about the comparison condition) researchers seem to deliberately set up a comparison condition they have good reason to expect is not effective: such as asking a teacher to lecture and not include practical work or discussion work or use of digital learning technologies and so forth. Potentially the researchers are asking the teacher of the 'control' group to teach less effectively than normally to bias the experiment towards their preferred outcome (Taber, 2019a).

This is not only a failure to do good science, but also an abuse of those learners being deliberately subjected to poor teaching. Perhaps in this study the class in School B was habitually taught by being lectured at, so the comparison condition was just what would have occurred in the absence of the research; but this is always a worry when studies report comparison conditions that seem deliberately to disadvantage students. (This paper does not seem to report anything about obtaining voluntary informed consent from participants, nor indeed about how access to the schools was negotiated.)

"In most educational research experiments of the type discussed in this article, potential harm is likely to be limited to subjecting students (and teachers) to conditions where teaching may be less effective, and perhaps demotivating…It can also potentially occur in control conditions if students are subjected to teaching inputs of low effectiveness when better alternatives were available. This may be judged only a modest level of harm, but – given that the whole purpose of experiments to test teaching innovations is to facilitate improvements in teaching effectiveness – this possibility should be taken seriously."

Taber, 2019a, p.94

Validity of measurements

Even leaving aside all the concerns expressed above, the results of a study of this kind depend upon valid measurements. Assessment items must test what they claim to test, and their analysis should be subject to quality control (and preferably carried out blind to which condition a script being analysed derives from). Kibirige, Osodo and Tlala append the test they used in the study (Appendix 2, pp.309-310), which is very helpful in allowing readers to judge at least its face validity. Unfortunately, they do not include a mark/analysis scheme to show what responses they considered worthy of credit.

"The [Achievement Test] consisted of three questions. Question one consisted of five statements which learners had to classify as either true or false. Question two consisted of nine [sic, actually eight] multiple questions which were used as a diagnostic tool in the design of the teaching and learning materials in addressing misconceptions based on prior knowledge. Question three had two open-ended questions to reveal learners' views on how salts dissolve in water (Appendix 1 [sic, 2])."

Kibirige, Osodo & Tlala, 2014, p.302

"Question one consisted of five statements which learners had to classify as either true or false."

Question 1 is fairly straightforward.

1.2: Strictly, all salts do dissolve in water to some extent. I expect that students were taught that some salts are insoluble. Often in teaching we start with simple dichotomous models (metal-non-metal; ionic-covalent; soluble-insoluble; reversible-irreversible) and then develop these into more continuous accounts that recognise differences of degree. It is possible, then, that a student who had learnt that all salts are soluble to some extent might have been disadvantaged by giving the 'wrong' ('True') response…

…although, actually, there is perhaps no excuse for answering 'True' ('All salts can dissolve in water') here, as a later question begins: "3.2. Some salts does [sic] not dissolve in water. In your own view what happens when a salt do [sic] not dissolve in water".

Despite the test actually telling students the answer to this item, it seems only 55% of the experimental group, and 23% of the control group, obtained the correct answer on the post-test – precisely the same proportions as on the pre-test!



1.4: Seems to be 'False' as the ions exist in the salt and are not formed when it goes into solution. However, I am not sure if that nuance of wording is intended in the question.

Question 2 gets more interesting.


"Question two consisted of nine multiple questions" (seven shown here)

I immediately got stuck on question 2.2 which asked which formula (singular, not 'formula/formulae', note) represented a salt. Surely, they are all salts?

I had the same problem on 2.4 which seemed to offer three salts that could be formed by reacting acid with base. Were students allowed to give multiple responses? Did they have to give all the correct options to score?

Again, 2.5 offered three salts which could all be made by direct reaction of 'some substances'. (As a student I might have answered A assuming the teacher meant to ask about direct combination of the elements?)

At least in 2.6 there only seemed to be two correct responses to choose between.

Any student unsure of the correct answer in 2.7 might have taken guidance from the charges as shown in the equation given in question 2.8 (although indicated as 2.9).

How I wished they had provided the mark scheme.



The final question in this section asked students to select one of three diagrams to show what happens when a 'mixture' of H2O and NaCl in a closed container 'react'. (In chemistry, we do not usually consider salt dissolving as a reaction.)

Diagram B seemed to show ion pairs in solution (but why the different form of representation?). Option C did not look convincing, as the chloride ions had vanished from the scene altogether and sodium seemed to have formed multiple bonds with oxygens and hydrogens.

So, by a process of elimination, the answer is surely A.

  • But components seem to be labelled Na and Cl (not as ions).
  • And the image does not seem to represent a solution as there is much too much space between the species present.
  • And in salt solution there are many water molecules between solvated ions – missing here.
  • And the figure seems to show two water molecules have broken up, not to give hydrogen and hydroxide ions, but lone oxygen (atoms, ions?)
  • And why is the chlorine shown to be so much larger in solution than it was in the salt? (If this is meant to be an atom, it should be smaller than the ion, not larger. The real mystery is why the chloride ions are shown so much smaller than the (actually smaller) sodium ions before solvation occurs, when chloride ions have about double the radius of sodium ions.)

So diagram A is incredible, but still not quite as crazy an option as B and C.

This is all despite

"For face validity, three Physical Sciences experts (two Physical Sciences educators and one researcher) examined the instruments with specific reference to Mpofu's (2006) criteria: suitability of the language used to the targeted group; structure and clarity of the questions; and checked if the content was relevant to what would be measured. For reliability, the instruments were piloted over a period of two weeks. Grade 10 learners of a school which was not part of the sample was used. Any questions that were not clear were changed to reduce ambiguity."

Kibirige, Osodo & Tlala, 2014, p.302

One wonders what the less clear, more ambiguous, versions of the test items were.

Reducing 'misconceptions'

The final question was (or, perhaps better, questions were) open-ended.



I assume (again, it would be good for authors of research reports to make such things explicit) these were the questions that led to claims about the identified alternative conceptions at pre-test.

"The pre-test revealed a number of misconceptions held by learners in both groups: learners believed that salts 'disappear' when dissolved in water (37% of the responses in the 80% from the pre-test) and that salt 'melts' when dissolved in water (27% of the responses in the 80% from the pre-test)."

Kibirige, Osodo & Tlala, 2014, p.302

As the first two (sets of) questions only admit objective scoring, it seems that these data can only have come from responses to Q3. This means that the authors cannot be sure how students were using terms. 'Melt' is often used in an everyday, metaphorical, sense of 'melting away'. This use of language should be addressed, but it may not (for at least some of these learners) be a conceptual error so much as poor use of terminology.

To say that salts disappear when they dissolve does not seem to me a misconception: they do. To disappear means to no longer be visible, and that's a fair description of the phenomenon of salt dissolving. The authors may assume that if learners use the term 'disappear' they mean the salt is no longer present, but literally they are only claiming it is not directly visible.

Unfortunately, the authors tell us nothing about how they analysed the data collected from their test, so the reader has no basis for knowing how they interpreted student responses to arrive at their findings. The authors do tell us, however, that:

"the intervention had a positive effect on the understanding of concepts dealing with dissolving of salts. This improved achievement was due to the impact of POE strategy which reduced learners' misconceptions regarding dissolving of salts"

Kibirige, Osodo & Tlala, 2014, p.305

Yet, oddly, they offer no specific basis for this claim – no figures to show the level at which "learners believed that salts 'disappear' when dissolved in water …and that salt 'melts' when dissolved in water" in either group at the post-test.


                                 'disappear' misconception        'melt' misconception
pre-test: experimental group     not reported                     not reported
pre-test: comparison group       not reported                     not reported
pre-test: total                  (0.37 x 0.8 x 93 =) 27.5 (!?)    (0.27 x 0.8 x 93 =) 20
post-test: experimental group    not reported                     not reported
post-test: comparison group      not reported                     not reported
post-test: total                 not reported                     not reported
Data presented about the numbers of learners considered to hold specific misconceptions said to have been 'reduced' in the experimental condition
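The parenthetical estimates in the table can be reconstructed with a quick back-of-envelope calculation (my own reconstruction; the paper does not show its working). A sketch in Python:

```python
# Back-of-envelope check of the reported misconception rates
# (my reconstruction -- not the authors' own calculation).
# The two classes held 49 + 44 = 93 learners, and the rates were
# reported as percentages "of the responses in the 80%".
n_learners = 49 + 44   # class sizes reported in the paper
subset = 0.80          # the "80%" of responses referred to

disappear = 0.37 * subset * n_learners   # 'salts disappear' responses
melt = 0.27 * subset * n_learners        # 'salt melts' responses

# Fractional 'numbers of learners' are a warning sign that the reported
# percentages cannot be reconstructed into whole-number counts.
print(round(disappear, 1), round(melt, 1))  # → 27.5 20.1
```

Whatever rounding is applied, the products do not come out as whole numbers of learners, which is one more reason to want the underlying counts reported.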

It seems the journal referees and the editor did not feel that important information was missing here which should have been added before publication.

In conclusion

Experiments require control of variables. Experiments require random assignment to conditions. Quasi-experiments, where random assignment is not possible, are inherently weaker studies than true experiments.

Control of variables in educational contexts is often almost impossible.

Studies that compare different teaching approaches using two different classes each taught by a different teacher (and perhaps not even in the same school) can never be considered fair comparisons able to offer generalisable conclusions about the relative merits of the approaches. Such 'experiments' have no value as research studies. 1

Such 'experiments' are like comparing the solubility of two salts by (a) dropping a solid lump of 10g of one salt into some cold water, and (b) stirring a finely powdered 35g sample of the other salt into hot propanol; and watching to see which seems to dissolve better.

Only large scale studies that encompass a wide range of different teachers/schools/classrooms in each condition are likely to produce results that are generalisable.

The use of inferential statistical tests is only worthwhile when the conditions for those tests are met. Sometimes tests are said to be robust to modest deviations from such requirements as normality. But applying tests to data that do not come close to fitting the conditions of the test is pointless.
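To illustrate the point (with invented scores, not data from the study): even the familiar two-sample t-test comes in variants with different conditions attached. Welch's version, sketched below, drops the equal-variance assumption of Student's test, though approximate normality and independence of observations are still required.

```python
# Illustrative sketch with invented scores -- not figures from the study.
# Welch's t statistic relaxes one condition (equal variances) that the
# ordinary Student's t-test assumes; normality and independence remain.
from statistics import mean, stdev

group_a = [12, 15, 14, 10, 13, 16, 11, 14]  # hypothetical class A scores
group_b = [9, 11, 10, 12, 8, 10, 11, 9]     # hypothetical class B scores

def welch_t(a, b):
    """Welch's t statistic: mean difference over the unpooled standard error."""
    se_squared = stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / se_squared ** 0.5

print(round(welch_t(group_a, group_b), 2))  # → 3.66
```

A large t value only licenses an inference if the test's conditions plausibly hold; with the small, intact classes typical of studies like this one, that is exactly what cannot be taken for granted.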

Any research is only as trustworthy as the validity of its measurements. If one does not trust the measuring instrument or the analysis of measurement data then one cannot trust the findings and conclusions.


The results of a research study depend on an extended chain of argumentation, where any broken link invalidates the whole chain. (From 'Critical reading of research')

So, although the website for the Mediterranean Journal of Social Sciences claims "All articles submitted …undergo to a rigorous double blinded peer review process", I think the peer reviewers for this article were either very generous, very ignorant, or simply very lazy. That may seem harsh, but peer review is meant to help authors improve submissions until they are worthy of appearing in the literature. Here peer review has failed, and the authors (and the readers of the journal) have been let down by the reviewers and by the editor who ultimately decided this study was publishable in this form.

If I asked a graduate student (or indeed an undergraduate student) to evaluate this paper, I would expect to see a response something along these sorts of lines:


Applying the 'Critical Reading of Empirical Studies Tool' to 'The effect of predict-observe-explain strategy on learners' misconceptions about dissolved salts'

I still think P-O-E is a very valuable part of the science teacher's repertoire – but this paper cannot contribute anything to support that view.

Work cited:

Note

1 A lot of these invalid experiments get submitted to research journals, scrutinised by editors and journal referees, and then published without any acknowledgement of how they fall short of meeting the conditions for a valid experiment. (See, for example, the studies discussed in Taber, 2019a.) It is as if the mystique of the experiment is so great that even studies with invalid conclusions are considered worth publishing, as long as the authors did an experiment.

Swipe left, swipe right, publish

A dating service for academics?


Keith S. Taber


A new service offers to match authors and journals without all that messy business of scholars having to spend time identifying and evaluating the journals in their field (Image by Kevin Phillips from Pixabay )

I was today invited to join a new platform that would allow an author "the opportunity to get the best Publishing Offers from different Journals"; and would also allow journal editors to "learn about new scientific results and make Publishing Offers to Authors". Having been an author and an editor my immediate response was, "well how could that work?"



Publishing offers?

I was a little intrigued by the notion of publishing 'offers'. In my experience what matters are 'publication decisions'.

You see, in the world of academic journals I am familiar with,

  • authors choose a journal to submit their manuscript to (they have to choose as journals will only consider work not already published, under consideration or submitted, elsewhere)
  • the editor decides if the manuscript seems relevant to the journal and to be, prima facie, a serious piece of scholarship. If not, it is rejected. If so, it is sent to expert reviewers for careful scrutiny and recommendations.
  • then it is accepted as is (rare in my field); accepted subject to specified changes; returned for revisions that must then be further evaluated; rejected but with a suggestion that a revised manuscript addressing specified issues might be reconsidered; or rejected.1
  • if the editor is eventually satisfied with the manuscript (perhaps after a number of rounds of revision and peer review) it is accepted for publication – this might be considered a publishing offer, but usually by this point the author is not going to decline!
  • if the process does not lead to an accepted manuscript, the author can decide her work is not worth publishing; use the feedback to strengthen the manuscript before submitting elsewhere, or simply move on to another journal and start again with the same manuscript.

Read about the process of submitting work to a research journal

Read about selecting a journal to submit your work to

Read about the peer review process used by serious research journals

Similarly, in the world of academic journals I am familiar with,

  • an editor becomes aware of a paper available for publication because the author submits it for consideration;
  • editors may sometimes offer informal feedback to authors who are not sure if their work fits the scope of the journal – but the editor certainly does not actively seek to check out manuscripts that are not being considered for that journal.

Though editors may engage in general promotion of their journal, this does not usually amount to trawling the web looking for material to make offers on.

So how does the platform work?

So, I looked at the inexsy site to see how the service managed to help authors get published without having to submit their work to journals, and how journals could fill their pages (and, these days, attract those juicy publication fees) even if authors did not fancy submitting their work to their journal.

This is what I learned.


Step 1. Put yourself out there.

(Image by Dean Moriarty from Pixabay)


Make a show of your wares

The process starts with the author uploading their abstract as a kind of intellectual tease. They do not upload the whole paper – indeed at this stage they do not even have to have written it.

"Researchers submit Abstracts of their manuscripts to the INEXSY platform and set their Publishing Statuses:

#1 – "Manuscript in progress" or

#2 – "Manuscript ready, looking for publisher".

https://inexsy.com

(Indeed, it seems an author could think up a number of article ideas; write the abstracts; post them; and wait to see which one attracts the most interest. No more of all that laborious writing of papers that no one wants to publish!)


Step 2. Wait to be approached by a potential admirer.

(Image by iqbal nuril anwar from Pixabay)


Wait to be approached

Now the author just has to wait. Journal editors with nothing better to do (i.e., editors of journals that no one seems to be sending any work to) will be going through the abstracts posted to see if they are interested in any of the work.

"All journals from the corresponding science area view the Abstract of the manuscript and determine the relevance of the future article (quick editorial decision)."

https://inexsy.com

The term 'quick editorial decision' is intriguing. This term most commonly refers to a quick decision on whether or not to publish a manuscript, but presumably all it means here is a quick editorial decision on "the relevance of the future article" to the journal.

Editors of traditional journals are used to making quick decisions on whether a manuscript falls within the scope of the journal. I have less confidence in the editors of many of the glut of open-access pay-to-publish journals that have sprung up in recent years. Many of these are predatory journals, mainly concerned with generating income and having little regard for academic standards.

In some cases supposed editors leave the editorial work to administrators who do not have a strong background in the field. Sometimes journals are happy to publish material which clearly has no relevance to the supposed topic of their journal. 2

Read about predatory journals


Step 3. Start dating

(Image by Sasin Tipchai from Pixabay) 


Enter into a dialogue with the editor

inexsy acknowledges that even the journals they attract to their platform might not immediately offer to publish an article on the basis of an author's abstract for a paper that may not have been written yet.

So, the platform allows the two potential suitors to enter into a dialogue about developing a possible relationship.

 "If the text of the Abstract and supplementary materials (video, figures) are not enough for journals to make Publishing Offers to authors, then the INEXSY platform provides the [sic] Private Chat to discuss the full text of a future article."

https://inexsy.com

Step 4. Get propositioned by the suitor

(Image by bronzedigitals from Pixabay)


4. Consider moving the relationship to the next level

If after some back and forth in the virtual world, the editor likes the author's images and videos they may want to take the relationship to a new level,

 "If the potential article is interesting to journals, these journals make Publishing Offers to authors in 1 click."

https://inexsy.com

Step 5. Choose a keeper

(Image by StockSnap from Pixabay) 


5. Decide between suitors

Now the idea of a 'publishing offer' is clarified. Having had an idea for a paper, written an abstract, perhaps posted some pics and a video talking about what you want to write, and having been approached by a range of editors not too busy to engage in some social intercourse, the author now finds herself subject to a range of propositions.

  • But which suitor does she really have a connection with?
  • Which one is the best prospect for a happy future?

But this is not about good looks, tinderness, pension prospects, or reliably remembering birthdays, but which journal is more prestigious (good luck with expecting prestigious journals to register on such sites), and how quickly the competing journals promise to publish the paper, and, of course, how much will they charge you for this publication escort service.

"Authors choose the optimal offer (best publication time, IF [impact factor], OA [open access] price) and submit their manuscripts to the website of the selected journal."

https://inexsy.com

Do dating services check the details provided by members? Impact factors are useful (if not perfect) indicators of a journal's prestige. But some predatory journals shamelessly advertise inaccurate impact factors. (See, for example, 'The best way to generate an impressive impact factor is – to invent it'.) Does inexsy do due diligence on behalf of authors here, or is it a matter of caveat emptor?


Step 6. And ride off into the sunset together

(Image by mohamed Hassan from Pixabay)


Live happily ever after with a well-matched journal

So, there it is, the journal dating nightmare solved. Do not worry about reading and evaluating a range of journals to decide where to submit, just put up your work's profile and wait for those journal editors who like what they see to court you.

You do not have to be exclusive. Put the goods on public show. Play the field. See which suitors you like, and what they will offer you for exclusive rights to what you want to put out there. Only when you feel you are ready to settle down do you need to make a choice.

Publish your work where you know it will really be appreciated, based on having entered into a meaningful relationship with the editor and found your article and the journal have much in common. Demonstrate your mutual commitment by publicly exchanging vows (i.e., signing a publishing agreement or license) that means your article will find an exclusive home in that place for ever after.

(Well, actually, if you publish open access, it might seem more like an open marriage, as legally you are free to republish as often as you like. However, you will likely find other potential partners will consider an already published work 'damaged goods' and shun any approaches.)

So, now it is just the little matter of getting down to grindring out the paper.


Back to earth

(Image by Pexels from Pixabay )


Meanwhile, back in the real world

This seems too good to be true. It surely is.

No editor of a responsible journal is going to offer publication until the full manuscript has been (written! and) submitted, and has been positively evaluated by peer review. Even dodgy predatory journals usually claim to do rigorous peer review (so authors can in turn claim {and perhaps sometimes believe} that their publications are in peer reviewed journals).

This leads me to moot a typology of three types of journal editor in relation to a platform such as inexsy:

1. Absent partners
Editors of well-established and well-regarded journals. These are busy with the surfeit of submissions they already receive, and are not interested in these kinds of platforms.

2. Desperate romantics
Principled editors of journals struggling to attract sufficient decent papers to publish, but who are committed to maintaining academic standards. They may well be interested in using this platform in order to attract submissions – but the offers they will make will be limited to 'yes, this topic interests us, and, if you submit this manuscript, we will send the submission to peer review'. They will happily wait till after a proper legal ceremony before consummating the relationship.

3. Promiscuous predators
Editors of predatory journals that are only interested in maximising the number of published papers and so the income generated. They will make offers to publish before seeing the paper because, to be honest, there is not much (if anything) they would reject anyway, as long as the author could pay the publication fees. Once they have your money they are off on the prowl again.

So, this may well bring some authors together with some editors who can offer advice on whether a proposed paper would be seriously considered by their journals (category 2) – but this achieves little more than would emailing the editor and asking if the proposed paper is within the scope of that journal.

If any authors find they are inundated by genuine offers to publish in any journals that are worth publishing in, I will be amazed.

Watch this space (well, the space below)

Still, as a scientist, I have to be open to changing my mind. So,

  • if you are a representative of inexsy
  • if you are an author or editor who has had positive experiences using the service

please feel free to share your experiences (and perhaps tell me I am wrong) in the comments below.

I wait with interest for the flood of responses putting me right.


Notes

1 The precise number of categories of decision, and how they are worded, vary a little between journals.


2 Consider some examples of what gets published where in the world of the dubious research journal:

"the editors of 'Journal of Gastrointestinal Disorders and Liver function' had no reservations about publishing a paper supposedly about 'over sexuality' which was actually an extended argument about the terrible threat to our freedoms of…IQ scores, and which seems to have been plagiarised from a source already in the public domain…. That this make no sense at all, is just as obvious as that it has absolutely nothing to do with gastrointestinal disorders and liver function!"

Can academic misconduct be justified for the greater good?

Sadly, some journal editors do not seem to care whether what they publish has any relevance to the supposed field of their journal: 'Writing for the Journal of Petroleum, Chemical Industry, Chemistry Education, Medicine, Drug Abuse, and Archaeology'

A cure for this cancer of stupidity

The scholarly community needs to shame academics who knowingly offer respectability to obviously dishonest practices and the dissemination of fabricated research reports

Keith S. Taber

The article seems to report some kind of experimental study, but I do not know what hypothesis was being tested, and I do not understand the description of the conditions being tested. …as far as I can tell, the study (if it really was carried out) is more about workload management than medicinal chemistry… I do not know what the findings were because the results quoted are (deliberately?) inconsistent. I do know that Hajare makes claims about cancer which are totally inappropriate in a scientific context and have no place in the medical literature

According to the title of an article in Organic and Medicinal Chemistry International Journal, "There is no Cure for the Cancer of Stupidity".

Article published in Organic and Medicinal Chemistry International Journal in 2018

The journal claims to be "an open access journal that is committed to publish the papers on various topics of chemistry, especially synthetic organic chemistry, and pharmacology and various other biological specialties, where they are involved with drug design, chemical synthesis and development for market of pharmaceutical agents, or bio-active molecules (drugs)". You may wish to make up your own mind about the extent to which the article I discuss below fits this scope.

The journal is presented as peer reviewed, and offers guidelines for reviewers, suggesting

"Juniper Publishers strives hard towards the spread of scientific knowledge, and the credibility of the published article completely depends upon effective peer reviewing process. Reviewing of manuscript is an important part in the process of publication. Reviewers are asked to make an evaluation and provide recommendations to ensure the scientific quality of the manuscript is on par with our standards."

https://juniperpublishers.com/reviewer-guidelines.php

That is as one would expect from a research journal.

What is the cancer of stupidity?

The author of this article presumably has a particular notion of the 'cancer of stupidity'. This particular article is written by Dr Rahul Hajare who gives his affiliation here as Department of Health Research, Ministry of Health and Family Welfare, India. (Perhaps he is the same Rahul Hajare who is listed as an honorary editor of Organic and Medicinal Chemistry International Journal affiliated to Vinayaka Mission University, India? 1)

However, having read the paper, I am not sure what readers are meant to understand the 'cancer of stupidity' actually is. One might well guess that the loaded term 'cancer' is intended metaphorically here, but perhaps not, as Hajare talks about both liver 'disorder' and lung cancer in the article.

The article seems to report some kind of experimental study, but I do not know what hypothesis was being tested, and I do not understand the description of the conditions being tested. Of course, unlike someone qualified to referee articles for a journal of organic and medicinal chemistry, I am not an expert in the field. But then, as far as I can tell, the study (if it really was carried out) is more about workload management than medicinal chemistry – but I am not sure of that. I do not know what the findings were because the results quoted are (deliberately?) inconsistent. I do know that Hajare makes claims about cancer which are totally inappropriate in a scientific context and have no place in the medical literature.

A copyright article

'There is no Cure for the Cancer of Stupidity' is copyright of its author, Rahul Hajare, and the article is marked "All rights are reserved". However, it is published open access under a Creative Commons 4.0 licence which allows any re-use of the article subject to attribution. So, I am free to reproduce as much of the text as I wish.

I wish to reproduce enough to persuade readers that no intelligent person who reads the article could mistake it for a serious contribution to the scientific literature. If you are convinced I have made my case, then this raises the issue of whether it was published without any editorial scrutiny, or published despite editor(s) and peer reviewers seeing it was worthless as an academic article. This might seem a harsh judgement on Dr Hajare, but actually I suspect he would agree with me. I may be wrong, but I strongly suspect he submitted the article in full knowledge that it was not worth publishing.

The abstract

The abstract of an article should offer a succinct summary of its contents: in the case of an empirical study (which this article seems to report), it should outline the key features of the sample, research design and findings. So what does Dr Hajare write in his abstract?

"The best definition for cancer is the statistic one in six – a reminder that beyond a point, one cannot control or ever completely prepare for the future. Believes cancer afflicts those who have a sinful past, people cannot compensate for the sin against the unseen. But when you see the background, it will be found it was divine justice, nothing else. Lung cancer means no accreditation. Unscientific opinion that illness is only too human to fall back on fantasy, or religion, when there are no rational explanation for random misfortunes."

p.001

So, we have an abstract which is incoherent, and does not seem to be previewing an account of a research study.

Dodgy definition

It starts with a definition of cancer: "The best definition for cancer is the statistic one in six". I would imagine experts differ on the best definition of cancer in the context of medicinal chemistry, but I am pretty sure that 'the statistic one in six' would not be a good contender.

Provocative claims

Next, there is some syntactically challenged material seemingly suggesting that cancer is the outcome of sinning and is divine retribution. An individual is entitled to hold such an opinion – and indeed this view is probably widely shared in some communities – but it has no place in science. Even if a medical scientist believed that at one level this was true, it should have no bearing on their scientific work, which should adopt 'methodological naturalism': the assumption that in scientific contexts we look for explanations in terms of natural mechanisms, not supernatural ones. 2

'Lung cancer means no accreditation'

Then we have a reference to lung cancer – so an actual medical condition. But it is linked to 'accreditation', without any indication what kind of accreditation is being referred to (accreditation of what, whom?). This does however turn out to be linked to a theme in the main paper (accreditation). Despite that, I doubt any reader coming to this paper fresh would have any idea what it was about from the abstract.

The main text is free of cancer

The main text of the article makes no further reference to cancer, either as a medical condition or as a metaphor for something else.

The main text is broken up into sections:

  • Short commentary
  • Results
  • Discussion
  • Recommendation
  • Limitation

The first of these section titles seems slightly odd, as this article type (in its entirety) is classed by the journal as a 'short communication' and one might rather expect 'Introduction' and 'Research Design' or 'Methodology' here.

The outcome?

The short commentary starts with what seems an overview of the outcomes of the study:

"On the basis of criteria of assessment allotted for NBA work, the total effect has been carried out, which has shown that 9% staffs were moderately improved (17.65%) and 40% staff (78.43%) were mildly improved, while none of the staffs were completely improved."

p.001

NBA has not been defined (no, it is nothing to do with basketball) and we might wonder "what staff?" as this has not been explained. Some web searching suggests that (this) NBA is the organisation that oversees the quality of academic awards in India – the National Board of Accreditation. It is not clear yet what this has to do with lung cancer as mentioned in the abstract.

The alert (or even half-alert?) reader may also spot discrepancies here, which I suspect have been deliberately included by the author.

The aim of the study

We are next told that

"The trial was conducted to evaluate the efficacy of work flow as compared to replacement therapy in the management, along with the assessment of different initiative".

p.001

So, there seems to have been a trial, but presumably not a cancer drugs trial as it has something to do with 'work flow' (published in a journal of organic and medicinal chemistry?)

After some brief comments about research design, the paragraph concludes with

"NBA work cannot be evaluated in terms of file and paper work because investments of biosafety make a profit of privately managed low level transportation facility pharmacy institution make them different."

p.001

Perhaps this makes sense to some readers, but not me. The next paragraph starts:

"Individuals have the power to prevent the occurrence of these diseases by managing their health care and developing healthier food and lifestyle behaviours. How can they be motivated to do so, without providing them with a basic understanding about the important role the liver, the organ under attack, plays in maintaining their health and life itself?"

p.001

Up to this point no diseases have been discussed apart from lung cancer in the abstract. If the focus is lung cancer – why is the liver 'the organ under attack'? And it is not clear what (if anything) this has to do with the NBA or work flow.

Soon we are told

"A positive result does not necessarily mean that the person has body support, as there are certain conditions that may lead to a false positive result for example lyme disease, bacterial leaching, the paternal negativity but who themselves are not infected with liver disorder."

p.001

So, someone struggling to make sense of this study might understand there is some test for liver dysfunction that can give false positives in some circumstances – so is the study about liver disease (rather than, or as well as, work flow)?

The science of the liver

Hajare refers to the functions of the liver,

"[the liver] is non-complaining complex organ and its miraculous hard working liver cells convert everything they eat breath and absorb through their skin into hundreds of life sustaining body functions 24/7"

p.001

The liver is a pretty remarkable multi-purpose chemical processing organ. But in the context of the scientific/medical literature, should its cells be described as 'miraculous' 2; and in terms of such everyday analogies as eating and breathing and having skin?

Linking liver disease to NBA accreditation

But then Hajare does suggest a link between liver disease and accreditation,

"Similarly staffs receiving liver therapy may have positive test. While showing a positive we general regarded as conclusive for a body life under attack, a negative test does not necessarily rule out. They need to understand how their food and life style choices can lead to reparable NBA accreditation privately managed in remote areas pharmaceutical Instituions [sic]."

p.001

Now, many researchers report their work in English when it is a second (or subsequent) language and this may explain some minor issues with English in any journals that do not have thorough production procedures. But here Hajare seems to be claiming that there is a causal link between the lifestyle choices of patients with liver disease and "reparable [sic] NBA accreditation".

In case the reader is struggling with this, perhaps wondering if they are misreading, Hajare suggests,

"During the early session, positive testing can be undertaken to exclude NBA. In staffs that are near to positive, the level of negative load is used as markers of the like senior staff and principal of progression to ignored."

p.001

Surely, this is just gibberish?

Hajare continues,

"The NBA accreditation is a 90 90 90 formula organization dedicated to promoting healthy food and lifestyle behaviours and prevention of liver related disease through multifaceted liver health education programs. The mission of NBA accreditation initiative is to make education a priority on national agenda. Promoting an education about the NBA to employer individuals to make informed can improve compliance and treatment outcomes for NBA and reduce the incidences of preventable NBA related thought including obesity, fatty liver, early onset diabetes, high cholesterol and cardiovascular disease. Primary prevention of NBA is the key to saving paper and application of green chemistry additional be benefited with zero Carbon Dioxide (CO2) emission in college area."

pp.001-002

As far as I can ascertain, the mission of the NBA is rigorous accreditation standards for technical education programmes in India to ensure teaching is of as high quality as expected in other major countries. It has no particular focus on liver disease! The reference to '90-90-90' seems to be borrowed from UNAIDS, the United Nations initiative to tackle AIDS worldwide.

The paragraph seems to start by suggesting NBA is a positive thing, supporting health educational programmes, but within a few lines there are references to "preventable NBA related thought" (very 1984) and "Primary prevention of NBA" as an ecological goal.

Population and sample

Hajare does not detail the population sampled. From the unspecified population "A total 18 staffs [staff members?] were selected for the study, out of which 13 staffs completed the study" (p.002). The sample is characterised,

"The staffs tended to be lady staffs in middle adulthood regular health. About 80% mentioned irregular habits, and about 60% were unidentified"

p.002

It is not clear what kind of habits are referred to (irregular bowel movements might be relevant to illness, but could it mean drug abuse, or frequently clicking the heels of shoes together three times and thinking of Kansas?), and it is not clear in what sense 60% were 'unidentified'. It is also not clear if these percentages refer to the 18 selected or the 13 completing, as the numbers do not make good sense in either case:

  • 80% would be 10.4 people of n=13 (10 would be 77%; 11 would be 85%), or 14.4 people of n=18 (14 would be 78%; 15 would be 83%).
  • 60% would be 7.8 people of n=13 (7 would be 54%; 8 would be 62%), or 10.8 people of n=18 (10 would be 56%; 11 would be 61%).
Unless citing to 1 s.f., Hajare's data refer to fractional study participants!
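The arithmetic above is easy to automate. Here is a minimal Python sketch (the function name `feasible_counts` is my own, and I assume standard rounding of percentages to whole numbers) that lists which whole-number counts of participants could produce a reported percentage:

```python
def feasible_counts(reported_pct, n):
    """Return the integer counts k (0..n) whose percentage of n
    rounds to the reported whole-number percentage."""
    return [k for k in range(n + 1) if round(100 * k / n) == round(reported_pct)]

# For both sample sizes mentioned in the paper (13 completers, 18 selected),
# neither 80% nor 60% corresponds to any whole number of people.
for n in (13, 18):
    for pct in (80, 60):
        print(f"{pct}% of n={n}: feasible counts = {feasible_counts(pct, n)}")
```

All four combinations come back empty, which is the point made in the table: unless the figures were cited to one significant figure, no whole count of either 13 or 18 staff members yields 80% or 60%.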

Hajare also tells readers

"A little over half of the staffs (54.17%) were of none of long relation of objective of NBA implementation and 22.92% were of fear with mind."

p.002

It is not clear to me if this nonsensical statement is supposed to be part of a characterisation of the sample, or meant to be a finding. The precision is inappropriate for such a small sample. But none of that matters unless one understands what (if anything) is meant by these statements. I guess that if editors or peer reviewers did read this paper before publication, they felt this made good sense.

The three experimental conditions

We are told that the sample was randomly assigned to three conditions. We are not told how many people completed the study in each condition (it could have been 6 in each of two conditions and only one person in the third condition). The treatments were (p.002):

  • a) Group A: was treated with conjugated staff seen work flow once daily for 45 days.
  • b) Group B: was treated with small conjugated staffs seen work flow but ignored once daily for duration of 45 days.
  • c) Group C: was treated with separately work staffs seen and engaged in their assigned work for 45 days (After 7 days of continuous behavioral objective, a gap of 3 days in between before the next 7 days sitting with 3-3 day's gap after every 7 days).

Surely, at this point, any reader has to suspect that Hajare is, as they say, 'having a laugh'. Although I have no real idea what is meant by any of this, I notice that the main difference between the first two conditions is 'being ignored once daily' – as opposed to what: being observed continuously for 24 hours a day?

The data collection instruments

There is very little detail of the data collection instruments. Of course, this is a 'short communication' which might be a provisional report to be followed up by a fully detailed research report. (I have been looking through a lot of the work Hajare has published in recent years, and typically his papers are no more than about two pages in length.)

Early in the paper we are told that

"Specialized biosafety rating scales like orientation as well as information technology rating scale, were adopted to assess the effect of therapy."

p.001

So that is pretty vague.

Findings

As quoted above, the main text of the paper begins with a preview of findings: "On the basis of criteria of assessment allotted for NBA work, the total effect has been carried out, which has shown that 9% staffs were moderately improved (17.65%) and 40% staff (78.43%) were mildly improved, while none of the staffs were completely improved" (p.001). Perhaps 'mildly' and 'moderately' are understood in specific ways in this study, but that is not explained, and to an uninformed reader it is not clear which, if either, of mild improvement or moderate improvement is a more positive result.

Again, giving results to 4 significant figures is inappropriate (when n<20). But the main issue here is how 9% = 17.65% and 40% = 78.43%.

Later in the article, the results are reported:

"Results of the study based off [not on!] the conjugated staffs rating scale showed that

Group C showed greater relief than the other two groups in flashes (66.66%), sleep problems (80.39%), in depressive mood (72.5%), in irritability (69.81%), and in anxiety (70.90%).

However, Group B showed significant improvement with flashes (62.22%), sleep problems (57.14%), depressive mood (66.66%), irritability (55.31%) and anxiety (50.94%).

Both groups B and C showed a lower benefit in symptoms compared with Group A, which was treated with conjugated staffs but quite unidentified crisis among them."

p.002 (extra line breaks added between sentences)

Again the precision is unjustified: the maximum number of participants in any condition is 6! It is noticeable that large proportions of these adults in "regular health" showed improvements in their (non) conditions. How the "biosafety rating scales like orientation" and "information technology rating scale" measured sleep problems, depressive mood, irritability, and anxiety is left to the imagination of the reader.

Just in case any reader is struggling to interpret all of this, thinking "it must be me, the editor and reviewers clearly understood this paper", Hajare drops in another hint that we should not take this article too seriously: "Group C showed greater relief than the other two groups…[but] Both groups B and C showed a lower benefit in symptoms compared with A Group"

That is: Group C did better than groups A and B, but not as well as group A

Limitations to the study

Hajare points out that 'self-reporting' is a limitation to the study, which is a fair point, but also suggests that "This study was a cross-sectional study; hence, it precludes inferences of causality among such variables." Of course, as it is described, this is not a cross-sectional study but an experimental intervention.

Recommendations

Hajare offers eight recommendations from this study, none of which seem to directly follow from the study (although some are sensible general well-being suggestions such as the value of yoga and education about healthy eating).

In his discussion section Hajare offers a kind of conclusion:

"Due to these limitations in research it is not clear to what degree biosafety treatment may benefit NBA accreditation in sub kind transportation facility remote pharmacy institution, although the smaller studies used in this literary analysis show a definite success rate that supersedes the benefits of biosafety treatment thereby delaying the aging process of staffs in private pharmacy Instituions [sic]."

p.002

What literary analysis? Which studies? Hajare only cites 5 other publications: all his own work. He seems to be saying here that

  • it is not clear to what degree biosafety treatment may benefit…
  • although the smaller studies show a definite success rate that supersedes the benefits of biosafety treatment

So, for any reader still trying to make some sense of the paper, perhaps this means there is inconclusive, but tentative, evidence that biosafety treatment may have sufficient benefits to suggest it should replace…biosafety treatment?

The cancer of the post-truth journals

If this commentary shows evidence of any metaphorical cancer it is the tumour eating away at the academic body. This consists of the explosion of predatory low quality so-called research journals that are prepared to publish any nonsense as long as the author pays a fee. These journals are nourished by submissions (many of which, I am sure, come from well-meaning researchers simply looking for somewhere to publish and who are misled by websites claiming peer review, impact factors, international editorial boards, and the like), and supported by those academics prepared to give such journals a veneer of respectability by agreeing to be named as editors and board members.

Of course, it is an honour to be asked to take up such positions (at least by a genuine research journal) but academics need to do due diligence and make sure they are not associating their name with a journal that will knowingly publish gibberish and misleading science.

Open access journals are open to the public as well as specialists, and therefore predatory journals are as likely to be a source of information for lay people as trustworthy ones. Someone looking for information on cancer and cancer treatment or liver disease might find this article in Organic and Medicinal Chemistry International Journal and see the host of editors from many different universities 1 (I have appended the current listing below) and assume such a journal must be checking what it is publishing carefully if it is overseen by such an international college of scholars.

Yet Hajare's paper is nonsense.

A very generous interpretation would be that he is meaning well, trying to communicate his work as best he can, but is confused, and needs help in structuring and writing up his work. If this were so, the journal should have told him to come back when he had accessed and benefited from the help he needed.

I would normally tend to a generous interpretation, but not here.

Hajare's hoax

Unlike a casual reader coming across this 'study' I was actually looking across a range of Hajare's work and have found that he has published many papers with similar features, such as

  • being much shorter than traditional research reports
  • provocative titles and statements – especially early in the paper (e.g., cancer is divine justice)
  • titles not reflecting the paper (there is no mention of cancer beyond the abstract)
  • abstracts that do not actually discuss the study
  • conflation of unrelated topics (here, liver disease and course accreditation)
  • irrelevancies (e.g., use of an information technology rating scale to assess liver-related health)
  • nonsensical 'sentences' that any editor or reviewer should ask to be revised/corrected
  • glaring inconsistencies (9%=17.65%; improvement under treatment in people who were in good health; group C did better than, but also not as well as, group A; biosafety treatment may be superior to biosafety treatment)
  • citing only his own publications

One could explain a few such issues as carelessness, but here there is a multitude of errors that an author should not miss when checking work before submission, and more to the point, that should be easily spotted during editorial and peer review. There are many poor studies in the literature with weaknesses that seem to have been missed – but no one reading "There is no Cure for the Cancer of Stupidity" should think it is ready for publication.

Where is the stupidity? In the people who associate themselves with 'research' of this standard. They seek short term gain by adding a superficially useful affiliation to their curriculum vitae/résumé – but in the longer term these journals and their editorial boards are parasitic on the academic community, and spread low quality, fraudulent and (here) deliberately nonsensical misinformation on scientific and medical matters.

I am pretty convinced that Hajare is a serial hoaxer, who has found it so easy to get below-par material published that he seems to be deliberately testing out just how provocative, incoherent, inconsistent, vague, confusing and apparently pointless an account of a study has to be before a predatory journal will reject it. Clearly, in the case of Organic and Medicinal Chemistry International Journal these particular characteristics are no barrier to publication of a submission.

Hajare throws multiple clues and hints into his work so that a careful reader should not be misled into treating his work as trustworthy. Anybody who reads it should surely see the joke. Does anybody at Organic and Medicinal Chemistry International Journal bother to read material before they publish it? Did anyone read "There is no Cure for the Cancer of Stupidity" before recommending publication?

After all, if it is so easy to get published when an author makes it so obvious the work is a hoax, how much easier must it be for authors to publish flawed and fabricated work when they put in a little effort to make it seem coherent and credible?

Organic and Medicinal Chemistry International Journal, at least, seems to have no problem with publishing the incoherent and incredible.

Notes

1 At the time of writing this posting (27th November, 2021) the website of Organic and Medicinal Chemistry International Journal lists a large number of 'honarable editors' [sic] from many parts of the world as part of the journal's editorial board. These are academics who have given their names to the journal to give it credence in terms of their reputations as scholars. I have appended the list of honorary editors below.

2 Scientists may be atheists, agnostics or hold any form of religion. A person who holds a view (perhaps based on religious beliefs) that disease is the outcome of personal sin (or indeed the result of human sin more generally or the outcome of Adam and Eve's disobedience, or whatever) can take one of two views about this:

a) sinning is the cause of illness, and no further explanation is necessary

b) sinning is a cause of disease at one (theological) level but divine will works through natural causes (viruses, toxins, etc.)

It would be pointless and inappropriate for someone who took stance (a) to work in a scientific field concerning etiology (causes of diseases).

Someone who took stance (b) could work in such a field as long as they were able to bracket off their personal beliefs and focus on natural causes and scientific explanations in their work (i.e., methodological naturalism).

(Metaphysical naturalism rejects the existence of any supernatural entities, powers or influences and so would not accept sin or divine justice as causes of disease at any level.)

Read about science and religion

Appendix: Dishonarable editors?

Perhaps the colleagues below joined the editorial team of Organic and Medicinal Chemistry International Journal in good faith – but are they doing due diligence in checking the standards of the journal they (nominally) help edit? Are they happy to remain associated with this journal given its publishing (non)standards?

Honorary Editor – Editor affiliation
Fernando Albericio – University of Barcelona, Spain
Diego A Alonso – University of Alicante, Spain
Carl E. Heltzel – Virginia Polytechnic Institute and State University, USA
Daniel D Holsworth – Stemnext LLC, USA
Kent Acheson – Kaplan University Online, USA
Rama Suresh Ravi – National Institutes of Health, USA
Syed A A Rizvi – Nova Southeastern University, USA
Alireza Heidari – California South University, USA
Khue Nguyen – University of California, USA
Sonali Kurup – Roosevelt University, USA
Vivek Kumar – Johns Hopkins University, USA
Subrata Deb – Roosevelt Universit, USA
Sridhar Prasad – CalAsia Pharmaceuticals Inc, USA
Loutfy H Madkour – Al Baha University, Saudi Arabia
Gianfranco Balboni – University of Cagliari, Italy
Raja Rizwan Hussain – King Saud University, Saudi Arabia
Ibrahim Abdel-Karim Ahmed Abdel-Rahman – University of Sharjah, UAE
Khalid Hussain Thebo – Institute of Metal research, China
Wenjun Tang – Shanghai Institute of Organic Chemistry, China
Ao Zhang – Shanghai Institute of Materia Medica, China
Hengguang Li – Sichuan University, China
Pavel Kocovsky – Charles University, Europe
Hai Feng Ji – Drexel University, Pennsylvania
Wojciech J Kinart – University of Lodz, Poland
David Morales Morales – Instituto de Químic, Mexico
Walter Filgueira de Azevedo Jr – Pontifical Catholic University of Rio Grande do Sul, Brazil
Chung Yi Chen – Kaohsiung Medical Universit, Taiwan
Ilkay Yildiz – Ankara University, Turkey
Mohamed El Sayed El Khouly – Kafrelsheikh University, Egypt
Mohamed Nageeb Rashed – Aswan University, Egypt
Hanaa Mahrousabd El Ghany Mohamed Rady – Cairo University, Egypt
Kamal Mohamed Dawood – Cairo University, Egypt
Waleed Adbelhakeem Bayoumi – Mansoura University, Egypt
Mohammad Emad Azab Ali El-Fakharany – Ain Shams University, Egypt
Khaled Rashad Ahmed Abdellatif – Beni-Suef University, Egypt
Winston F. Tinto – University of the West Indies, Caribbean
Adnan S Abu-Surrah – Qatar University, Qatar
Djamila Halliche – University of Science and Technology Houari Boumedien, Africa
Maher Aljamal – Al Quds University / Beit Jala Pharmaceutical Company, Palestine
Anna Pratima Nikalje – Y. B. Chavan College of Pharmacy, India
Prabhuodeyara M Gurubasavaraj – Rani Channamma University, India
A Jaya Shree – Osmania University, India
Hari N Pati – Advinus Therapeutics Ltd. (A TATA Enterprise), India
P Mosae Selvakumar – Karunya University, India
Madhuresh Kumar Sethi – Panjab University Chandigarh, India
Sunil Kumar – Pujab Technical University, India
Lallan Mishra – BHU, India
Pinkibala Punjabi – Mohanlal Sukhadia University, India
Maya Shankar Singh – Banaras Hindu University, India
Ajmal Bhat – Sant Baba Bhag Singh University, India
A Venkat Narsaiah – Indian Institute of Chemical Technology, India
Rahul Hajare – Vinayaka Mission University, India
Anshuman Srivastava – Banaras Hindu University, India
Sadaf Jamal Gilani – The Glocal University, India
Ramakrishna Vellalacheruvu – Sri Krishna Devaraya University, India
Ali Gharib – Islamic Azad University, Iran
Mohammad S Mubarak – University of Jordan, Jordan
Vladimir V Kouznetsov – Universidad Industrial de Santander, Colombia
Loai Aljerf – University of Damascus, Syria
Davidson Egirani – Niger Delta University, Nigeria
Branislav Rankovic – University of Kragujevac, Serbia
Fawzi Habeeb Jabrail – University of Mosul, Iraq
Ali A Ensafi – Isfahan University of Technology, Iran
Kian Navaee – American Chemical Society, Iran
Rachid Touzani – Université Mohammed Premier, Morocco
(Dis?)Honarary Editors of Organic and Medicinal Chemistry International Journal





Spectroscopy for primary school teachers?

Image by Schäferle from Pixabay 

Will Raman spectroscopy provide future primary teachers with "a dynamic and attractive vision of science, technology and innovation"?

Keith S. Taber

a proposal of methodology for the subject of experimental sciences for teachers in training, which will introduce real scientific instrumentation such as Raman spectroscopy, which can be of great interest to perform significant learning and to design teaching-learning activities

Morillas & Etxabe-Urbieta, 2020, p.17

I am going to offer a critical take on a proposal to teach future primary teachers to use Raman spectroscopy. That is, a proposal published in a leading international research journal (well, that is how the journal describes itself).

I do have some reservations about doing this: it is very easy to find fault in others' work (and a cynic might suggest that being an academic is basically perpetual training in that skill). And there are features of the proposal that are appealing.

For a start, I like spectroscopy. I sometimes joke that my first degree was in spectroscopy and some of its applications (although the degree certificate refers to this as chemistry). I also like the way the paper refers to principles of models of learning, and refers to "combining concepts of chemistry and physics" (Morillas & Etxabe-Urbieta, 2020: 17).

However, I do wonder how closely (and critically) the editor and peer reviewers (assuming there really was peer review) actually read the submitted manuscript – given the range of questions one would expect to have arisen in review.

I will, below, question whether this contribution, a proposed teaching scheme, should really be considered a 'research' article. Even if one thinks it should be, I suggest the authors could have been better supported by the journal in getting their work ready for publication.

A predatory journal

I have been reading some papers in a journal that I believed, on the basis of its misleading title and website details, was an example of a poor-quality predatory journal. That is, a journal which encourages submissions simply to be able to charge a publication fee (currently $1519, according to the website), without doing the proper job of editorial scrutiny. I wanted to test this initial evaluation by looking at the quality of some of the work published.

Although the journal is called the Journal of Chemistry: Education Research and Practice (not to be confused, even if the publishers would like it to be, with the well-established journal Chemistry Education Research and Practice) only a few of the papers published are actually education studies. One of the articles that IS on an educational topic is called 'Raman Spectroscopy: A Proposal for Didactic Innovation (IKD Model) In the Experimental Science Subject of the 3rd Year of the Primary Education Degree' (Morillas & Etxabe-Urbieta, 2020).

A 'research article' in "a leading International Journal for the publication of high quality articles"

Like other work I have examined in this journal, the published article raises issues and questions which one would imagine should have arisen during peer review – that is when expert evaluators look to see if a manuscript has the importance and quality to be worthy of journal publication.

Below I very briefly outline the nature of the proposed innovation, and then offer some critique.

A proposal for didactic innovation in the primary education degree

Morillas and Etxabe-Urbieta (i) propose a sequence of practical science work for inclusion in the curriculum of undergraduate students who are preparing for primary school teaching, (ii) link this, in broad terms at least, to pedagogic principles, and (iii) make claims about the benefits of the mooted proposal.

The authors consider their proposal has originality, as they could not find other literature recommending the use of Raman spectroscopy in the preparation of primary school teachers,

"…the fact that there are no works related to Raman spectroscopy to work on concepts developed in experimental science class for Teacher training in Primary Education in formation, makes the proposal that is presented more important."

Morillas & Etxabe-Urbieta, 2020: 17

What exactly is proposed?

Morillas and Etxabe-Urbieta suggest an extended sequence of laboratory work with three main stages:

  • students are provided with three compounds (sodium nitrate; potassium nitrate; ammonium dihydrogen phosphate) from which they will prepare saturated solutions, from which crystals will grow;
  • the resulting crystals will be inspected, and examples of crystals with clear shapes will be selected and analysed in terms of their geometries – showing how different compounds lead to different crystal structures;
  • examples of ill-formed crystals will be subjected to Raman spectroscopy, where the three different compounds will give rise to different 'fingerprints'.

Pedagogic theory

Morillas and Etxabe-Urbieta report that their work is based on the 'IKD model' which equates to "new innovative teaching methodologies":

"In recent years, new innovative teaching methodologies have been used in the Basque Country University (IKD model) for experimental science classes for teachers of Primary Education in formation. This IKD model is based on a cooperative and dynamic learning. It is an own [?], cooperative, multilingual and inclusive model that emphasizes that students are the owners of their learning and are formed in a comprehensive, flexible and adapted to the needs of society. Training students according to IKD model requires creating new ways of teaching and learning more active and cooperative (curriculum development). Therefore, the fact of combining more theoretical master classes with more practical classes is a trend that is increasingly used."

Morillas & Etxabe-Urbieta, 2020: abstract

The authors name check constructivism, meaningful learning, and the notion of learning cycles, without offering much detail of what they mean by these terms.

"The students can put into practice the solubility concepts in master classes, through activities based on the IKD didactic model of the University of the Basque Country and in constructivist models of teaching and learning. Learning cycles have been developed and in a group, dynamic and cooperative way, the students explore their previous knowledge about solubility and crystallization, reflect on these ideas, make meaningful learning and apply these new learning in research contexts in the laboratory. In particular it has been discussed in the classroom about the amount of salt (compound) that can be dissolved in water and has been investigated on the factors that influence the solubility and on the crystallization process."

Morillas & Etxabe-Urbieta, 2020: 18

There is very little detail of how these pedagogic principles are built upon in the proposed teaching scheme, and the 'IKD model' is not explained in any more detail (e.g., how does 'multilingual' learning fit with this proposal?). After all, school children have been making saturated solutions and growing crystals for generations without this being understood as part of some innovative educational practice.

What is claimed?

Overall, the sequence is said to help link scientific theory to practice, teach geological concepts and provide hands-on experience of using modern scientific instruments,

"the first part, where the crystallization of various chemical compounds is carried out, will help to pinpoint possible doubts arising in the master classes of the chemistry part. Next, it is analyzed how to differentiate the crystals by means of their type of geometry in its crystallization based on geological concepts. Finally, the crystals are differentiated by another method based on the Raman spectroscopy…where students can observe concepts of light treated in physics class such as lasers, and electromagnetic lengths [sic?], where for the case in which some crystals that are not perfectly crystallized, this portable equipment will be used. In this way, the students have their first experience of this type, and use real scientific instrumentation."

Morillas & Etxabe-Urbieta, 2020: 18
Stage one: preparing crystals

But the authors suggest the approach has further impacts. Dissolving the salts, and then observing the crystals grow, "can help the student

  • to encourage possible scientific vocations,
  • better understanding of theoretical master classes and
  • letting them know how is [what it is like?] working in the scientific field and
  • spreading the importance of crystallography in our society" (p.18)

So, in addition to linking to theory classes ("students will begin to use the laboratory material studied in the theoretical classes, using and observing its characteristics, and in the same way trying to correlate the concepts of chemical saturation previously learnt in master classes", p.18), this simple practical work is expected to change student views about science careers, give authentic experience of doing science, and increase social awareness of crystallography as a scientific field. Perhaps, but this seems a lot to expect from what is a pretty standard school practical activity.

However, in case we are not convinced, the authors reinforce their claims: students will experience principles they have been taught about saturated solutions, and how solubility [often] changes with temperature, and

"the students begin to experience the first fundamental concepts of crystallography and subsequently the fact of observing week after week the growth of the crystals themselves, can help the student to encourage possible scientific vocations, better understanding of theoretical master classes and letting them know how is working in the scientific field and spreading the importance of crystallography in our society."

Morillas & Etxabe-Urbieta, 2020: 19

Some of this is perfectly reasonable, but some of these claims seem speculative. (Simply repeating an unsupported claim again on the following page does not make it more convincing.) Authentic scientific activity would surely involve extended engagement, developing and testing a procedure over time to hone a process – crystallising solutions does not become an authentic science activity simply because the evaporation takes place over several weeks.

An editor or peer reviewer might reasonably ask "how do you know this activity will have these effects?"

Stage 2: Characterising crystals
Image by Lisa Redfern from Pixabay 

In the second stage, students examine the three types of crystals formed and notice and document that they have different shapes/geometries. This requires careful observation, and measurement (of angles),

In a second phase, once that month passed, the students will observe the crystals that have grown inside their containers. Firstly, one of the objectives will be to observe what kind of crystals have formed. For the observation methodology and subsequent for the description of them, teacher will give some guidelines to distinguish the formed crystals according to their geometry based on the geological morphology.

Morillas & Etxabe-Urbieta, 2020: 19

Growing and examining crystals seems a worthwhile topic in primary school as it can encourage awe and wonder in nature, and close observation of natural phenomena: the kinds of activities one might employ to engage young minds with the world of science (Taber, 2019). The authors expect (undergraduate) students to recognise the different crystal systems ("trigonal … orthorombic … tetragonal") and associated angles between faces. 1 This phase of the work is reasonably said to be able to

  • "promote skills such as visual and spatial perception"

It is the third stage of the work which seems to go beyond the scope of traditional work in preparing primary school teachers.

Stage 3: Using Raman spectroscopy to (i) identify compounds / (ii) appreciate particle movements

In this stage, groups of students are given samples of each of the compounds (from any of the students' specimens that did not crystallise well enough to be identified from the crystal shape), and they obtain Raman spectra from the samples, and so identify them based on being informed that the main spectral peak falls at a different wavenumber for each salt.

An inauthentic activity?

There is a sense that this rather falls down as an inquiry activity, as the students know what the samples were, because they made the solutions and set up the crystallisations – and so presumably labelled their specimens, as would be usual good scientific practice. The only reason they may now need to identify samples is because the teaching staff have deliberately mixed them up. Most school practical work is artificial in that sense, but it seems a little forced as an excuse to use spectroscopy. A flame test would surely have done the job more readily?

From 'Electrons and Wave-Particle Duality'

at http://www.sliderbase.com

A black box

Now the way the procedure is explained in the article, the spectrometer works as a black box that leads to spectra that (if all has gone well) have characteristic peaks at 1067 cm-1, 1048 cm-1 or 921 cm-1 allowing the samples to be distinguished. After all, a forensics expert does not have to understand how and why we all form unique fingerprints to be able to compare fingerprints found at a crime scene with those taken from suspects. (They simply need to know that fingerprints vary between people, and have skills in making valid matches.)
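Used as a black box in this way, the 'fingerprint' identification amounts to a nearest-peak lookup. A minimal Python sketch of the idea follows; note that the pairing of each wavenumber with a particular salt, and the tolerance value, are my assumptions for illustration (they follow typical literature values – the article lists the three wavenumbers but the pairing is not spelt out there):

```python
# Hypothetical sketch: identifying a salt from its strongest Raman peak.
# The three reference wavenumbers (in cm^-1) are those quoted in the article;
# the salt assignments and the 10 cm^-1 tolerance are illustrative assumptions.

REFERENCE_PEAKS = {
    1067: "sodium nitrate",
    1048: "potassium nitrate",
    921: "ammonium dihydrogen phosphate",
}

def identify_salt(measured_peak_cm1, tolerance=10):
    """Return the salt whose main reference peak lies closest to the measured peak,
    or None if no reference peak is within the tolerance."""
    nearest = min(REFERENCE_PEAKS, key=lambda ref: abs(ref - measured_peak_cm1))
    if abs(nearest - measured_peak_cm1) > tolerance:
        return None  # no confident match
    return REFERENCE_PEAKS[nearest]

print(identify_salt(1050))  # -> potassium nitrate (2 cm^-1 from the 1048 reference)
```

Which, of course, rather makes the point: the matching step itself involves no chemistry or physics at all.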

Yet Morillas and Etxabe-Urbieta (p.21) claim more: that undertaking this third part of the sequence will enable students to

  • "relate the type of movements that occur in the materials particles, in this case crystals, where the concept of particles movement…
  • the fact of lasers use in a realistic way helps also students to understand how these kinds of concepts exist in the reality and are not science fiction
  • …the use of this type of instrumentation in television series such as CSI, for example, means that students pay more attention in classrooms
  • and help them to grow a basic scientific curiosity in their professional work, that is, in the Primary Education classrooms"

Again, perhaps, but where is the evidence for this? If one wanted to persuade future teachers that lasers are not just science fiction, one could refer to a laser pointer, or a CD, DVD or Blu-ray player.

Final claims

The authors end their proposal with some final claims

"The methodology proposal presented in this work, based on IKD model explained [sic, I do not think it was – at least not in any detailed way] above, will offer to Primary Education degree students a great possibility of applicability as a teaching resource, in which the fact of using Raman spectroscopy as a real scientific instrumentation can fill them with curiosity, amazement and interest. Moreover, this technique cannot only be used as a complement to this type of work [?], but also for didactic innovation projects and research projects. Thus, the fact of being able to use this type of tools means that the students are stimulated by their curiosity and desire to advance and learn, progressing in their scientific concern and therefore, improving the delivery of their future classes in a more motivated, didactic and rigorous way."

Morillas & Etxabe-Urbieta, 2020: 21

A devil's advocate might counter that an activity to identify poorly crystallised salts by subjecting them to a black box apparatus that produces messy graphs which are interrogated in terms of some mysterious catalogue of spectral lines will do very little to encourage "curiosity, amazement and interest" among any future primary school teachers who already lack confidence and enthusiasm for science. Indeed, without a good understanding of a range of underlying physical principles, the activity can offer about as much insight into science as predicting the outcome of a football match from a guide to interpreting tea leaves.

So, perhaps less like identifying fingerprints, and more like reading palms.

The references to "offer to Primary Education degree students a great possibility of applicability as a teaching resource" and "improving the delivery of their future classes in a more motivated, didactic and rigorous way" seem to mean 1 that the authors are not just suggesting that the undergraduates might benefit from this as learners, but also that they may want to introduce Raman spectroscopy into their own future teaching in primary schools.

That seems ambitious.

Spectroscopy in the school curriculum

Spectroscopy does appear in the upper levels of the secondary school curriculum, but not usually Raman spectroscopy.

Arguably, mass spectrometry 2 is most accessible as a general idea as it can be explained in terms of basic physical principles that are emphasised in school physics – mass, charge, force, acceleration… 'Mass spec.' – the chemist's elemental analyser – also offers a nice context for talking about the evidence for the existence of elements with distinct atomic numbers, and for looking at isotopic composition, as well as distinguishing elements and compounds, and testing for chemical changes (Taber, 2012).

'Mass spec.' is, however, rather different to the other main types of spectroscopy in which samples are subjected to electromagnetic radiation and the outcome of any interaction detected. 2

Image by Daniel Roberts from Pixabay 

Most spectroscopy involves firing a beam of radiation at a sample, shifting gradually through a frequency range, to see which frequencies are absorbed or re-emitted. Visible spectroscopy is perhaps the most accessible form as the principle can initially be introduced with simple hand-held spectroscopes that can be used to examine different visible light sources – rather than having to interpret chart recorder or computer screen graphics. Once students are familiar with these spectroscopes, more sophisticated spectrometers can be introduced. UV-Visible (UV-Vis) spectroscopy can be related to teaching about electronic energy levels, for example in simple models of atomic structure.

Infrared (IR) spectroscopy has similar principles, and can be related to the vibrations in molecules due to the presence of different bonds. Vibrational energy levels tend to be much closer together than discrete 3 electronic levels.

In these types of spectroscopy, some broad ranges of frequencies of radiation are largely unaffected by the test sample but within these bands are narrow ranges of radiation that are being absorbed to a considerable extent. These 'spectral peaks' of frequencies* of the radiation being removed (or heavily attenuated) from the spectrum reflect energy transitions due to electrons or bonds being excited to higher energy levels. (Although energy absorbed will often then be re-emitted, it will be emitted in arbitrary directions so very little will end up aligned with the detector.)

[* Traditionally in spectroscopy the peaks are labelled not with radiation frequency but with wavenumber in cm-1 (waves per cm). This is the reciprocal of wavelength, λ (in cm), and so directly proportional to frequency, as the speed of the radiation c = fλ.]
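The relationship in the footnote is easy to check numerically. A minimal Python sketch, using the 1067 cm-1 peak quoted earlier as the worked example:

```python
# Converting a spectroscopic wavenumber (in cm^-1) to wavelength and frequency.
# Since wavenumber = 1/lambda, and c = f * lambda, it follows that f = c * wavenumber.

C = 2.998e10  # speed of light in cm/s, so the units match wavenumbers in cm^-1

def wavelength_cm(wavenumber_cm1):
    """Wavelength in cm corresponding to a wavenumber in cm^-1."""
    return 1.0 / wavenumber_cm1

def frequency_hz(wavenumber_cm1):
    """Frequency in Hz corresponding to a wavenumber in cm^-1."""
    return C * wavenumber_cm1

# The 1067 cm^-1 peak corresponds to roughly 9.4 micrometres (infrared)
# and about 3.2e13 Hz.
print(wavelength_cm(1067) * 1e4)  # wavelength in micrometres
print(frequency_hz(1067))         # frequency in Hz
```

This also shows why wavenumbers are convenient labels: they scale linearly with the energy of the transition, unlike wavelength.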

A more subtle kind of spectroscopy

Raman spectroscopy is inherently more complex, and relies on interactions between the material under test and a very small proportion of the incident radiation. Raman spectroscopy relies on a scattering effect, so as a simple analogy it is like UV/Visible or IR spectroscopy but involving something more like a Doppler shift than simple absorption. Thus the need for a monochromatic light source (the laser) as the detector is seeking shifts from the original frequency.
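To illustrate just how small these shifts are relative to the excitation frequency, here is a minimal Python sketch. The 785 nm laser wavelength is my assumption (a common choice in portable Raman instruments – the article only specifies a 'red laser'), and the 1067 cm-1 Stokes shift is one of the wavenumbers quoted earlier:

```python
# The Raman shift is the difference in wavenumber between the incident laser
# light and the scattered light. For an assumed 785 nm red laser and a
# Stokes shift of 1067 cm^-1:

def stokes_wavelength_nm(laser_nm, shift_cm1):
    """Wavelength (nm) of Stokes-scattered light, given the laser wavelength (nm)
    and the Raman shift (cm^-1)."""
    laser_cm1 = 1e7 / laser_nm           # convert nm to wavenumber in cm^-1
    scattered_cm1 = laser_cm1 - shift_cm1  # Stokes scattering lowers the wavenumber
    return 1e7 / scattered_cm1

# About 857 nm - shifted only ~1% in wavenumber from the 785 nm laser line,
# which is why a monochromatic source and a sensitive detector are needed.
print(round(stokes_wavelength_nm(785, 1067), 1))
```

The tiny fractional shift is exactly the "marginal difference between affected and unaffected frequencies" discussed below.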

Figure taken from the open-access article: Xu, Yu, Zois, Cheng, Tang, Harris & Huang, 2021

So, if introducing spectroscopy one would be better advised to start with UV-Vis (or IR) where there is a strong contrast in absorption between unaffected and affected frequencies, and where there is a direct relationship between the energy of the affected radiation and the energy transitions being indirectly detected (rather than Raman spectroscopy where there is only a marginal difference between affected and unaffected frequencies, and the scattered radiation does not directly give the frequencies of the energy shifts being indirectly detected).

Learning quanta – teaching through an Aufbau principle

As learning tends to be an incremental process, building on existing knowledge, it would probably make sense to

  • introduce spectroscopy in terms of UV-Vis, first with hand held spectroscopes, then spectrometers
  • then extend this to IR which is similar in terms of basic principles and so would reinforce that learning.

Only later, once this basic understanding had been sufficiently revisited and consolidated, would it seem to make sense to

  • move onto the more complex nature of Raman spectroscopy (or nuclear magnetic resonance spectroscopy which involves similar complications).

This, at least, would seem to be a constructivist approach – which would align with Morillas and Etxabe-Urbieta's claim of employing "Teaching and Learning processes based on Constructivism theories and IKD model of the Basque Country University" (p.18).

That is, of course, if it is felt important enough to teach primary school teachers about spectroscopy.

…and as if by magic…

Actually, I am not at all convinced that

"thanks to the visualization of these spectra, students can relate the type of movements that occur in the materials particles, in this case crystals, where the concept of particles movement, which is quite abstract, can be understood"

The future teachers could certainly be taught that

"this type of technique consists in that the laser of the equipment (in our case red laser) when striking on the crystals promotes an excitation of the molecules [sic, ions?] of the own crystal, that can vibrate, rotate [sic 4] etc. This type of excitation is translated into a spectrum (different peaks) that is displayed on the screen of a computer connected to the Raman spectrometer. These peaks refer to different vibrational modes of the molecules [sic], so that each of the bands of each spectrum, corresponds to different parts of the molecule [sic], so as it has been mentioned above, each of the crystals has its own fingerprint"

Morillas & Etxabe-Urbieta, 2020: 20

Yet that seems some way short of actually relating the spectra to the "type of movements that occur in the materials particles". (In terms of my fingerprint analogy, this is like being taught that the unique fingerprint reflects the epigenetic development of the individual, and so appreciating why different people have different fingerprints, but still not being able to relate the specific fingerprints of individuals to any specific events in their development.)

Not a research paper – or even a practice paper?

I do not think this article would have been publishable in a serious research journal, as it does not seem to report any actual research. It discusses educational practice, but it is not clear if this is practice that currently takes place or is simply being proposed. Even if this is reporting actual teaching practice, there is no evaluation of that practice.

The idea that Raman spectroscopy might be beneficial to future primary school teachers seems somewhat speculative. I have no doubt it could potentially be of some value. All other things being equal, the more science that primary school teachers know, understand, and are confident about, the better for them and their future pupils.

But of course, all other things are seldom equal. In general, teaching something new means less time for something else. Either Raman spectroscopy replaces something, or it squeezes the time available, and therefore the engagement and depth of treatment possible, in some other curriculum content.

So, rather than making great claims about how including Raman spectroscopy in the curriculum will help learn theory (will they really understand how a laser produces coherent monochromatic light, and how and why scattering takes place?), provide experience of scientific work (with an artificial exercise?), lead to scientific vocations (instead of becoming primary teachers?), and raise social awareness of crystallography, etc., what is needed is evidence that some of these educational aims and objectives are being met. And, ideally, that there is more educational gain with this activity than whatever it replaced.

I am certainly not looking to reject this proposal out of hand. I can see the sequence could engage students and be enjoyable, and may well have positive outcomes. But simply making a range of unsubstantiated claims is not research. A speculative proposal offering tenuous arguments for knowledge claims is not sufficient for a research paper.

Evaluating these claims would not be that easy (some of the effects claimed are pretty long term and indirect) but it is only when a claim is closely argued, and preferably based on empirical evidence, that it becomes science, ready for publication in a research journal.

Peer review

Now the editor of Journal of Chemistry: Education Research and Practice may disagree with me (at least, assuming she scrutinised the article before it was published). 5 But supposedly this journal undertakes formal peer review – that is, experts in a topic are asked to evaluate submissions for suitability for publication – not only to make a recommendation on whether something should be published, but to raise any issues that need addressing before such publication.

I wonder who reviewed this submission (were they experts in primary teacher education?) and what, if any, suggestions for revisions these referees may have made. There are a good many points where one would expect a referee to ask for something to be explained or justified or corrected (e.g., molecules and rotations in salt crystals). Some of these points should be obvious to any careful reader (like asking what exactly is the IKD model that informs this proposal, and where are different features of the model enacted in the teaching sequence?) There are also places where the authors could have been supported to hone their text to make their intended meanings clearer. (I have considerable respect for authors writing in a second language, but that is not an excuse for journal editors and production staff to ignore incorrect or confusing expressions.)


Yet, based on any peer review reports, and the authors' responses to them, the editor was able to decide the manuscript was ready for publication about 10 days after initial submission.

A brave conjecture?

Given that the proposal here is likely to seem, on the face of it, quite bizarre to many of those working in primary teacher education – who are charged with ensuring that future primary teachers have a good grounding in the most basic scientific concepts, values and practices, and feel confident about teaching science to children – it risks being dismissed out of hand unless very closely and carefully argued.

"…the fact that there are no works related to Raman spectroscopy to work on concepts developed in experimental science class for Teacher training in Primary Education in formation, makes the proposal that is presented more important [but also puts a high burden on the proposer to make a convincing argument for the proposal]"

Morillas & Etxabe-Urbieta, 2020: 17

So, even if the editor felt that an unproved pedagogic proposal was of itself suitable to be the basis of a research article, there is much that could have been done in editorial and peer review to support the authors in improving their manuscript to give a stronger article. After all, I suspect very few academics working in initial teacher education with future primary teachers would inherently think that Raman spectroscopy is a strong candidate for adding to the curriculum, so the case needs all the argumentation, logic and evidential support it can muster if it is to be taken seriously.

Work cited:
  • IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). Online version (2019-) created by S. J. Chalk. ISBN 0-9678550-9-8. https://doi.org/10.1351/goldbook.
  • Morillas, H., & Etxabe-Urbieta, J. M. (2020). Raman Spectroscopy: A Proposal for Didactic Innovation (IKD Model) In the Experimental Science Subject of the 3rd Year of the Primary Education Degree. Journal of Chemistry: Education Research and Practice, 4(1), 17-21.
  • Rajawat, J., & Jhingan, G. (2019). Chapter 1 – Mass spectroscopy. In G. Misra (Ed.), Data Processing Handbook for Complex Biological Data Sources (pp. 1-20): Academic Press.
  • Schmälzlin, E., Moralejo, B., Rutowska, M., Monreal-Ibero, A., Sandin, C., Tarcea, N., Popp, L. and Roth, M.M. (2014). Raman Imaging with a Fiber-Coupled Multichannel Spectrograph. Sensors 14, no. 11: 21968-21980. https://doi.org/10.3390/s141121968
  • Taber, K. S. (2012). Key concepts in chemistry. In K. S. Taber (Ed.), Teaching Secondary Chemistry (2nd ed., pp. 1-47). London: Hodder Education.
  • Taber, K. S. (2019). Exploring, imagining, sharing: Early development and education in science. In D. Whitebread, V. Grau, K. Kumpulainen, M. M. McClelland, N. E. Perry, & D. Pino-Pasternak (Eds.), The SAGE Handbook of Developmental Psychology and Early Childhood Education (pp. 348-364). London: Sage.
  • Xu, J., Yu, T., Zois, C. E., Cheng, J.-X., Tang, Y., Harris, A. L., & Huang, W. E. (2021). Unveiling Cancer Metabolism through Spontaneous and Coherent Raman Spectroscopy and Stable Isotope Probing. Cancers, 13(7), 1718.

Notes

1 Throughout the paper I would have appreciated an indication of which aspects of the activity were intended purely for the education of the future teachers themselves and which aspects were meant to be modelled for future use in primary classrooms.


2 Is spectroscopy the same as spectrometry? Strictly these terms have different meanings. According to the International Union of Pure and Applied Chemistry (IUPAC, 2019-):

  • spectroscopy is "the study of physical systems by the electromagnetic radiation with which they interact or that they produce"

whereas

  • "spectrometry is the measurement of such radiations as a means of obtaining information about the systems and their components."

And

  • mass spectroscopy is "the study of systems by causing the formation of gaseous ions, with or without fragmentation, which are then characterized by their mass-to-charge ratios and relative abundances."
  • mass spectrometry is "the branch of science dealing with all aspects of mass spectroscopes and the results obtained with these instruments"
  • a mass spectrograph is "an instrument in which beams of ions are separated (analysed) according to the quotient mass/charge, and in which the deflection and intensity of the beams are recorded directly on photographic plate or film"

So that has cleared that up!

In practice the terms spectroscopy and spectrometry are often used synonymously, even in relation to mass spectrometry (e.g., Rajawat & Jhingan, 2019) which strictly does not involve the interaction of matter with radiation.


3 Discrete, as this would not apply to the near-continuum bands of energy levels found in metals, for example.


4 Although I am not convinced that rotational modes of excitation can be detected in a solid crystal.


5 The editor of a research journal is the person who makes publication decisions. However, predatory journals do not always operate like serious research journals – and it may be that sometimes these decisions are made by admin. staff and the editor's name is just used as a sop to respectability. I do not know if that is the case with this journal, but I think by any normal academic standards some very dubious editorial decisions are being made by someone!


Can deforestation stop indigenous groups starving?

One should be careful with translation when plagiarising published texts

Keith S. Taber


The mastering of the art of deforestation is what enables the inhabitants of the Amazon not to die of hunger.


Marcos Aurélio Gomes da Silva, Federal University of Juiz de Fora, Brazil

I have been reading some papers in a journal that I believed, on the basis of its misleading title and website details, was an example of a poor-quality 'predatory' journal. That is, a journal which encourages submissions simply to be able to charge a publication fee, without doing the proper job of editorial scrutiny. I wanted to test this initial evaluation by looking at the quality of some of the work published.

One of the papers I decided to read, partly because the topic looked of particular interest, was 'The Chemistry of Indigenous Peoples'.


It is important to learn and teach about the science of indigenous populations

Indigenous science is a very important topic for science education. In part this is because of the bias in many textbook accounts of science. There are examples of European scientists being seen as discoverers of organisms, processes and so on, that had long been known by indigenous peoples. It is not even that the Europeans re-discovered them so much as that they were informed by people who were not seen to count as serious epistemic agents. Species were often named after the person who could afford to employ collectors (often paid a pittance) to go and find specimens. This is like a more serious version of a PhD supervisor claiming the student's work as their own because the student worked for them!

Indigenous cultures often encompass knowledge and technologies that have worked effectively, and sustainably, for millennia, but which do not count as proper science because they are not framed in terms of the accepted processes of science (being passed on orally and by example, rather than being reported in Nature or Science). Of course the situation is more nuanced than that – often indigenous cultures do not (need to) make the discriminations between science, technology, myth, ritual, art, and so forth that have allowed 'modern' science to be established as a distinct tradition and set of practices.

But science education that ignores indigenous contributions to formal science and seems to dismiss cultural traditions and ecological knowledge offers both a distorted account of science's history, and an inherent message about differential cultural worth to children.

That is a rather brief introduction to a massive topic, but perhaps indicates why I was keen to look at the paper in the so-called 'Journal of Chemistry: Education Research and Practice' on 'The Chemistry of Indigenous Peoples' (da Silva, 2019).

Sloppy production values

"The Chemistry of Indigenous Peoples" had moved from submission to acceptance in 4 days, and had been published just over a week later.

Not a lot of time for a careful peer review process

This 'opinion article' was barely more than one page (I wondered if perhaps the journal charges authors by the word – but it seems to charge authors $999 per article), and was a mess. For example, consider the two paragraphs reproduced below: the first starts in lower case, and ends with the unexplained 'sentence', "art of dewatering: cassava"; and the second is announced as being about development (well, 'devel- opment' actually) which seems to be considered the opposite of fermentation, but then moves straight to 'deworming' which is said to be needed due to the toxic nature of some plants, and ends up explaining that deforestation is essential for the survival of indigenous people (rather contrary to the widespread view that deforestation is destroying their traditional home and culture).

The closing three paragraphs of the article left me very confused:

"In this sense, we  [sic – this is a single authored paper] will examine the example of the cassava root in more detail so that we can then briefly refer to other products and processes. The last section will address some of the political implications of our perspective.

In Brazil, manioc (Manihot esculenta) is known under different names in several regions. In the south of the country, it is also called "aipim", in central Brazil, "maniva", "manaíba", "uaipi", and in the north, "macaxeira" or "carim".

In this essay, we intend to show that, to a certain extent, companies,
a process of invention of the Indian Indians of South America, and
still are considerable, as businesses, until today, millions of people and institutions benefit in the Western world. We seek to provide information from a few examples regarding chemical practices and biochemical procedures for the transformation of substances that
are unknown in Europe."

da Silva, 2019, p.2

My first reading of that last paragraph made me wonder if this was just the introduction to a much longer essay that had been truncated. But then I suspected it was meant as a kind of conclusion. If so, the promised brief references to 'other products and processes' seem to have been omitted after the listing of alternative names in the paragraph about manioc (cassava), whilst the 'political implications' seemed to refer to the garbled final paragraph ("…to a certain extent, companies, a process of invention of the Indian Indians of South America, and still are considerable, as businesses…").

I suspected that the author, based in Brazil, probably did not have English as a first language, perhaps explaining the odd phrasing and incoherent prose. But this paper is published in a (supposed) research journal which should mean that the submission was read by an editor, and then evaluated by peer reviewers, and only published once the editor was convinced it met quality standards. Instead it is a short, garbled, and in places incoherent, essay.

Plagiarism?

But there is worse.

da Silva's article, with the identified sources (none of which are acknowledged) highlighted. (The paper is published with a licence that allows reproduction.)

I found a paper in the Portuguese language journal Química Nova called 'A química dos povos indígenas da América do Sul (The chemistry of indigenous people of Southamerica)' (Soentgen & Hilbert, 2016). This seems to be on a very similar topic to the short article I had been trying to make sense of – but it is a much more extensive paper. The abstract is in English, and seems to be the same as the opening of da Silva's 2019 paper (see the table below).

That is plagiarism – intellectual theft. Da Silva does not even cite the 2016 paper as a source.

I do not read Portuguese, and I know that Google Translate is unlikely to capture the nuances of a scholarly paper. But it is a pretty good tool for getting a basic idea of what a text is about. The start of the 2016 paper seemed quite similar to the close of da Silva's 2019 article, except for the final sentence – which seems very similar to a sentence found elsewhere in the Química Nova article.

This same paper seemed to be the source of the odd claims about "deworming" and the desirability of deforestation in da Silva's 2019 piece. The reference to the "opposite process" (there, detoxification – the removal of poison) makes sense in the context of the 2016 paper, as there it follows from a discussion of the use of curare in modern medicine – something borrowed from the indigenous peoples of the Amazon.

In da Silva's article the 'opposite process' becomes 'development', and this now follows a discussion of fermentation – which makes little sense. The substitution of 'deworming' and 'deforestation' as alternatives for 'detoxification' ('desenvenenamento') converts the original text into something quite surreal.

So, in the same short passage:

  • desenvenenamento (detoxification) becomes development (desenvolvimento)
  • desenvenenamento (detoxification) becomes deworming (vermifugação – or deparasitamento)
  • desenvenenamento (detoxification) becomes deforestation (desmatamento)

I also spotted other 'similarities' between passages in da Silva's 2019 article and the earlier publication (see the figure above and table below). However, it did not seem that da Silva had copied all of his article from Soentgen and Hilbert.

Rather, I found another publication by Pinto (possibly from 2008) which seemed to be the source of other parts of da Silva's 2019 paper. This article is published on the web, but does not seem to be a formal publication (in an academic journal or similar outlet), but rather material prepared to support a taught course. However, I found the same text incorporated in a later extensive journal review article co-written by Pinto (Almeida, Martinez & Pinto, 2017).

This still left a section of da Silva's 2019 paper which did not seem to originate in these two sources. I found a third Portuguese language source (Cardoso, Lobo-santos, Coelho, Ayres & Martins, 2017) which seemed to have been plagiarised as the basis of this remaining section of the article.

At this point I had found three published sources, predating da Silva's 2019 work, which – when allowing for some variation in translation into English – seemed to be the basis of effectively the whole of da Silva's article (see the table and figure).

Actually, I also found another publication which was even closer to, indeed virtually identical to, da Silva's article in the Journal of Chemistry: Education Research and Practice. It seems that, not content with submitting the plagiarised material as an 'opinion article' there, da Silva had also sent the same text as a 'short communication' to a completely different journal.

(Read 'A failure of peer review: A copy of a copy – or plagiarism taken to the extreme')

Incredible coincidence? Sloppy cheating? Or a failed attempt to scam the scammers?

Although da Silva cited six references in his paper, these did not include Cardoso et al. (2017), Pinto (2008)/Almeida et al. (2017) or Soentgen & Hilbert (2016). Of course there is a theoretical possibility that the similarities I found were coincidences, and the odd errors were not translation issues but just mistakes by da Silva. (Mistakes that no one at the journal seems to have spotted.) But it would be a very unlikely possibility – so unlikely that such an explanation seems 'beyond belief'.

It seems that little, if anything, of da Silva's text was his own, and that his attempt to publish an article based on cutting sections from other people's work and compiling them (without any apparent logical ordering) into a new piece might have fared better if he too had taken advantage of Google Translate (which had done a pretty good job of helping me identify the Portuguese sources from which da Silva seemed to have 'borrowed' for his English language article). In cutting and pasting odd paragraphs from different sources da Silva had lost the coherence of the original works, leading to odd juxtapositions and strangely incomplete sections of text. None of this seems to have been noticed by the journal editor or peer reviewers.

Or, perhaps, I am doing da Silva an injustice.

Perhaps he too was suspicious of the quality standards at this journal, and did a quick 'cut and paste' article, introducing some obvious sloppy errors (surely translating the same word, 'desenvenenamento', incorrectly in three different ways in the same paragraph was meant as some kind of clue), just to see how rigorous the editing, peer review and production standards are?

Given that the article was accepted and published in less than a fortnight, perhaps the plan backfired and poor da Silva found he had a rather unfortunate publication to his name before he had a chance to withdraw the paper. Unfortunate? If only because this level of plagiarism would surely be a sacking offence in most academic institutions.

Each comparison below gives the previously published material, an English translation (Google Translate), and the corresponding text from 'The Chemistry of Indigenous Peoples' (2019) by Marcos Aurélio Gomes da Silva.

Source (Soentgen & Hilbert, 2016: 1141; original in English): "The contribution of non-European cultures to science and technology, primarily to chemistry, has gained very little attentions until now. Especially the high technological intelligence and inventiveness of South American native populations shall be put into a different light by our contribution. The purpose of this essay is to show that mainly in the area of chemical practices the indigenous competence was considerable and has led to inventions profitable nowadays to millions of people in the western world and especially to the pharmacy corporations. We would like to illustrate this assumption by giving some examples of chemical practices of transformation of substances, mainly those unknown in the Old World. The indigenous capacity to gain and to transform substances shall be shown here by the manufacture of poisons, such as curare or the extraction of toxic substances of plants, like during the fabrication of manioc flower. We shall mention as well other processes of multi-stage transformations and the discovery and the use of highly effective natural substances by Amazonian native populations, such as, for example, rubber, ichthyotoxic substances or psychoactive drugs."

da Silva (2019): "The contribution of non-European cultures to science and technology, primarily to chemistry, has gained very little attentions until now. Especially, the high technological intelligence and inventiveness of South American native populations shall be put into a different light by our contribution. The purpose of this study was to show that mainly in the area of chemical practices; the indigenous competence was considerable and has led to inventions profitable nowadays to millions of people in the western world and especially to the pharmacy corporations. We would like to illustrate this assumption by giving some examples of chemical practices of transformation of substances, mainly those unknown in the old world. The indigenous capacity to gain and to transform substances shall be shown here by the manufacture of poisons, such as curare or the extraction of toxic substances of plants, like during the fabrication of manioc flower. We shall mention as well other processes of multi-stage transformations and the discovery and the use of highly effective natural substances by Amazonian native populations, such as, for example, rubber, ichthyotoxic substances or psychoactive drugs."

Source (Soentgen & Hilbert, 2016: 1141): "A partir disso, os povos indígenas da América do Sul não parecem ter contribuído para a química e a tecnologia moderna. Em contraponto, existem algumas referências e observações feitas por cronistas e viajantes do período colonial a respeito da transformação, manipulação e uso de substâncias que exigem certo conhecimento químico como, por exemplo: as bebidas fermentadas, os corantes (pau-brasil, urucum), e os venenos (curare e timbó). Mesmo assim, estas populações acabam sendo identificadas como "selvagens primitivos" que ainda necessitam de amparo da civilização moderna para que possam desenvolver-se."

Google Translate: "From this, the indigenous peoples of South America do not seem to have contributed to modern chemistry and technology. In contrast, there are some references and observations made by chroniclers and travelers from the colonial period about the transformation, manipulation and use of substances that require certain chemical knowledge, for example: fermented beverages, dyes (pau-brasil, annatto), and poisons (curare and timbó). Even so, these populations end up being identified as "primitive savages" who still need the support of modern civilization so that they can develop."

da Silva (2019): "The indigenous peoples of South America do not seem to have contributed to modern chemistry and technology. In contrast, there are some references and observations made by chroniclers and travelers from the colonial period regarding the transformation, manipulation and use of substances that require certain chemical knowledge, such as fermented beverages, dyes (pigeon peas, Urucum), and the poisons (Curare and Timbó). Even so, these populations end up being identified as "primitive savages" who still need the support of modern civilization in order for them to develop."

Source (Pinto, 2008: pp. 1-2; also Almeida, Martinez & Pinto, 2017): "A pintura corporal dos índios brasileiros foi uma das primeiras coisas que chamou a atenção do colonizador português. Pero Vaz de Caminha, em sua famosa carta ao rei D. Manoel I, já falava de uns "pequenos ouriços que os índios traziam nas mãos e da nudeza colorida das índias. Traziam alguns deles ouriços verdes, de árvores, que na cor, quase queriam parecer de castanheiros; apenas que eram mais e mais pequenos. E os mesmos eram cheios de grãos vermelhos, pequenos, que, esmagados entre os dedos, faziam tintura muito vermelha, da que eles andavam tintos; e quando se mais molhavam mais vermelhos ficavam""

Google Translate: "The body painting of Brazilian Indians was one of the first things that caught the attention of the Portuguese colonizer. Pero Vaz de Caminha, in his famous letter to King D. Manoel I, already spoke of "small hedgehogs that the Indians carried in their hands and the colorful nudity of the Indians. They brought some of them green hedgehogs, from trees, which in color, almost they wanted to look like chestnut trees; only that they were smaller and smaller. And they were full of small, red grains, which, crushed between the fingers, made a very red tincture, from which they were red; and when they got more wet the redder they turned""

da Silva (2019): "Body painting of the Brazilian Indians was one of the first things that caught the attention of the Portuguese colonizer. Pero Vaz de Caminha, in his famous letter to King D. Manoel I, already talked about little hedgehogs that the Indians carried in their hands. They brought some of them green hedgehogs, trees, who in color almost wanted to appear of chestnut trees; just that they were more and more small. And the same were filled with red, small [sic], which, crushed between the fingers, made very red dye from the [sic] that they walked red [sic]; and when the more they wet the more red they stayed."

Source (Pinto, 2008: p. 4; also Almeida, Martinez & Pinto, 2017): "Os índios do Alto Xingú pintam a pele do corpo com desenhos de animais, pássaros e peixes. Estes desenhos, além de servirem para identificar o grupo social ao qual pertencem, são uma maneira de uní-los aos espíritos, aos quais creditam sua felicidade. A tinta usada por esses índios é preparada com sementes de urucu, que se colhe nos meses de maio e junho. As sementes são raladas em peneiras finas e fervidas em água para formar uma pasta. Com esta pasta são feitas bolas que são envolvidas em folhas, e guardadas durante todo o ano para as cerimônias de tatuagem. A tinta extraída do urucu também é usada para tingir os cabelos e na confecção de máscaras faciais."

Google Translate: "The Indians of Alto Xingu paint the skin of their bodies with drawings of animals, birds and fish. These drawings, in addition to serving to identify the social group to which they belong, are a way to unite them with the spirits, to whom they credit their happiness. The ink used by these Indians is prepared with annatto seeds, which are harvested in May and June. The seeds are grated into fine sieves and boiled in water to form a paste. This paste is used to make balls that are wrapped in sheets, and kept throughout the year for the tattoo ceremonies. The dye extracted from the annatto is also used to dye hair and make facial masks."

da Silva (2019): "The Indians of Alto Xingú paint thebody [sic] skin with animal drawings, birds and fish. These drawings besides serving to identify the social group at thewhich [sic] they belong, are a way of unite them with the spirits, to whom they credit their happiness. The ink used by these Indians is prepared with urucu seeds, which is collected in the monthsof [sic] May and June. The seeds are grated in fine [sic] and boiledwater [sic] to form a paste. With this paste balls are made which, involved in sheets, are stored throughout the year for the tattoo ceremonies. The ink extracted from Urucu is also used dyeing hair and making tion [sic] of facial masks."

Source (Pinto, 2008: p. 4; also Almeida, Martinez & Pinto, 2017): "O urucu é usado modernamente para colorir manteiga, margarina, queijos, doces e pescado defumado, e o seu corante principal – a bixina – em filtros solares."

Google Translate: "Annatto is used in modern times to color butter, margarine, cheeses, sweets and smoked fish, and its main coloring – bixin – in sunscreens."

da Silva (2019): "Urucu is used coloring page [sic] butter, margarine, cheeses, sweets andsmoked [sic] fish, and its colorant main – bixina – in solar filters."

Source (Cardoso, Lobo-santos, Coelho, Ayres & Martins, 2017): "Assim, foram identificados possíveis conteúdos de Química que poderiam estar relacionados com a preparação do Tarubá, como misturas, separação de misturas e processos de fermentação. O processo de preparação da bebida feita da mandioca ralada, envolve a separação da mistura entre o sólido da massa da mandioca e o líquido do tucupi, feito através do processo de filtração com o tipiti, instrumento tradicional indígena. A massa é peneirada, assada e colocada em repouso por três dias, quando ocorre o processo de fermentação, em que o açúcar, contido na mandioca, é processado pelos microrganismos e transformado em outras substâncias, como álcool e gases. Após esse período, se adicionam água e açúcar à massa coada, estando a bebida pronta para ser consumida."

Google Translate: "Thus, possible contents of Chemistry were identified that could be related to the preparation of Tarubá, such as mixtures, separation of mixtures and fermentation processes. The process of preparing a drink made from grated cassava involves separating the mixture between the solid of the cassava mass and the liquid from the tucupi, made through the filtration process with tipiti, a traditional indigenous instrument. The dough is sifted, baked and put to rest for three days, when the fermentation process takes place, in which the sugar, contained in the cassava, is processed by microorganisms and transformed into other substances, such as alcohol and gases. After this period, water and sugar are added to the strained mass, and the drink is ready to be consumed."

da Silva (2019): "it was possible to identify possible contents of Chemistry that could be related to the preparation of Tarubá, such as mixtures, separation of mixtures and fermentation processes. The process of preparation of the beverage made from grated cassava involves the separation of the mixture between the solid of the cassava mass and the liquid of the tucupi, made through the filtration process with the tipiti, a traditional Indian instrument. The dough is sieved, roasted and put to rest for three days, when the fermentation process occurs, in which the sugar contained in cassava is processed by microorganisms and transformed into other substances such as alcohol and gas. After this period, water and sugar are added to the batter, and the beverage is ready to be consumed."

da Silva (2019), with no corresponding source text identified: "art of dewatering: cassava"

Source (Soentgen & Hilbert, 2016: 1145): "Agora gostaríamos de voltar a atenção para o processo oposto, o desenvenenamento. Ainda que não exija técnicas tão sofisticadas quanto a produção de substâncias, o desenvenenamento é um procedimento fundamental para as pessoas que vivem e queiram sobreviver na floresta tropical amazônica, tendo em vista que muitas plantas de lá produzem veneno em virtude de seu metabolismo secundário. Afinal, a forma que muitas espécies de plantas possuem para evitar a mordida de insetos é a produção de recursos químicos defensivos. Quem quer sobreviver na floresta tropical precisa saber como neutralizar ou afastar essas substâncias tóxicas produzidas pelas próprias plantas. O domínio da arte do desenvenenamento é o que possibilita os habitantes da Amazônia a não morrerem de fome."

Google Translate: "Now we would like to turn our attention to the opposite process, the poisoning. Although it does not require such sophisticated techniques as the production of substances, poisoning is a fundamental procedure for people who live and want to survive in the Amazon rainforest, considering that many plants there produce poison due to their secondary metabolism. After all, the way that many plant species have to avoid insect bites is the production of defensive chemical resources. Anyone who wants to survive in the rainforest needs to know how to neutralize or remove these toxic substances produced by the plants themselves. Mastering the art of poisoning is what makes it possible for the inhabitants of the Amazon not to starve."

da Silva (2019): "Now we would like to turn our attention to the opposite process, the devel- opment [sic]. Although it does not require techniques as sophisticated as the production of substances, the deworming is a fundamental procedure for the people who live and want to survive in the rainforest Amazon, since many plants of there produce poison by virtue of its secondary metabolism. After all, the way that many plant species have to avoid insect bite is the production of defensive chemical resources. Whoever wants to survive in the rainforest needs to know how to neutralize or ward off these toxic substances produced by the plants themselves. The mastering of the art of deforestation is what enables the inhabitants of the Amazon not to die of hunger."

Source (Soentgen & Hilbert, 2016: 1145): "Nesse sentido, examinaremos o exemplo da raiz de mandioca de maneira mais detalhada para então, na sequência, fazermos referência sumária a outros produtos e processos. A última seção tratará de algumas implicações políticas de nossa perspectiva. No Brasil, a mandioca (Manihot esculenta) é conhecida sob diversos nomes em diversas regiões. No sul do país, ela também se chama "aipim", no Brasil central, "maniva", "manaíba", "uaipi", e no norte, "macaxeira" ou "carim"."

Google Translate: "In this sense, we will examine the cassava root example in more detail and then, in the sequence, make a brief reference to other products and processes. The last section will deal with some policy implications from our perspective. In Brazil, cassava (Manihot esculenta) is known under several names in different regions. In the south of the country, it is also called "casino" [sic], in central Brazil, "maniva", "manaíba", "uaipi", and in the north, "macaxeira" or "carim"."

da Silva (2019): "In this sense, we will examine the example of the cassava root in more detail so that we can then briefly refer to other products and processes. The last section will address some of the political implications of our perspective. In Brazil, manioc (Manihot esculenta) is known under different names in several regions. In the south of the country, it is also called "aipim", in central Brazil, "maniva", "manaíba", "uaipi", and in the north, "macaxeira" or "carim"."

Source (Soentgen & Hilbert, 2016: 1141): "Neste ensaio, pretendemos mostrar que, no que concerne ao conhecimento relativo às práticas químicas, a criatividade e a inteligência técnica dos povos indígenas da América do Sul, são competências consideráveis até os dias de hoje. Os povos ameríndios, em especial os da bacia amazônica, desenvolveram práticas que levaram a invenções das quais, até hoje, milhões de pessoas se beneficiam."

Google Translate: "In this essay, we intend to show that, with regard to knowledge related to chemical practices, creativity and technical intelligence of the indigenous peoples of South America are considerable competences to this day. The Amerindian peoples, especially those from the Amazon basin, developed practices that led to inventions from which, to this day, millions of people benefit."

da Silva (2019): "In this essay, we intend to show that, to a certain extent, companies, a process of invention of the Indian Indians of South America, and still are considerable, as businesses, until today, millions of people and institutions benefit in the Western world."

Source (Soentgen & Hilbert, 2016: 1142): "Gostaríamos de documentar essas afirmações com alguns exemplos, limitando-nos a apresentar apenas produtos feitos a partir de substâncias que eram inteiramente desconhecidos na Europa."

Google Translate: "We would like to document these claims with a few examples, limiting ourselves to presenting only products made from substances that were entirely unknown in Europe."

da Silva (2019): "We seek to provide information from a few examples regarding chemical practices and biochemical procedures for the transformation of substances that are [sic!] unknown in Europe."

Text of da Silva's 2019 article (in its published sequence) is juxtaposed against material that seems to have been used as unacknowledged sources (paragraphs have been broken up to aid comparisons).

Works cited:
  • Almeida, M. R., Martinez, S. T. & Pinto, A. C. (2017) Química de Produtos Naturais: Plantas que Testemunham Histórias. Revista Virtual de Química, 9 (3), 1117-1153.
  • Cardoso, A.M.C., Lobo-santos, V., Coelho, A.C.S., Ayres, J.L. & Martins, M.M.M. (2017) O Processo de preparação da bebida indígena tarubá como tema gerado para o ensino de química. 57th Congresso Brasileiro de Química. http://www.abq.org.br/cbq/2017/trabalhos/6/11577-25032.html
  • da Silva, M. A. G. (2019) The Chemistry of Indigenous Peoples. Journal of Chemistry: Education Research and Practice, 3 (1), pp.1-2
  • Pinto, A. C. (2008) Corantes naturais e culturas indígenas: http://www.luzimarteixeira.com.br/wp-content/uploads/2010/04/corantes-curiosidades.pdf
  • Soentgen, J. & Hilbert, K. (2016) A química dos povos indígenas da América do Sul. Química Nova, 39 (9), pp. 1141-1150.