The publisher who cried 'wolf!'

Can one blog post bring about "substantial financial detriment" to a global publishing corporation?


Keith S. Taber


"…our marketing strategies, particularly the use of alternate identities by our editors and reviewers to engage potential authors
our editors' and reviewers' marketing strategies …Thus, the use of virtual identities for initial outreach efforts"

email from legals@globaljournals.org


Immediate Cease and Desist Demand – Defamatory and Harmful Content

One regular theme of these posts is the questionable behaviour of some publishers of academic journals – especially when I consider they have been behaving dishonestly in order to mislead scholars.

Last month I received an email with the subject heading "Immediate Cease and Desist Demand – Defamatory and Harmful Content". The email came from the address <legals@globaljournals.org> and was signed by someone claiming to be the Chief Legal Officer of "Global Journals Incorporated, a conglomerate with operational bases in the United States, United Kingdom, and India".


A solicitor peruses a document

"Ah, a 'Cease and Desist' notice…do you want us to fight it?" (Actor John Stride as solicitor David Maine in Yorkshire Television's 'The Maine Chance')


The email complained about a post on this website, "specifically at the URL: https://science-education-research.com/earning-a-higher-doctorate-without-doing-any-research/". According to the email from <legals@globaljournals.org>, this page:

  • "contains unfounded, derogatory statements that malign our business and overall reputation"

The email explained

  • what they objected to in my post
  • why they considered it mattered to them
  • what they wanted me to do about it
  • and what the consequences would be for me if I did not do as they asked

The complaint

Global Journals complained that

"Your publication unjustly criticizes our marketing strategies, particularly the use of alternate identities by our editors and reviewers to engage potential authors…Specifically, your blog post criticizing our editors' and reviewers' marketing strategies casts malicious aspersions on their integrity and wrongly implies unethical conduct." 

email from legals@globaljournals.org, 13th February, 2024

My post "Earning a higher doctorate without doing any research?" asked the question: Is it possible that a publisher might be using fictitious academics to attract submissions to its journals?

It then discussed some emails I had received from the address <chiefauthor@socialscienceresearch.org> claiming to be written by a Dr Nicoleta Auffahrt, Managing Editor of the 'Department of Humanities and Social Science' 'at Global Journals'. 'Dr Auffahrt' wrote to me, so the email claimed, because she had been impressed by my work, and had discussed it with colleagues who were also impressed, and she wanted to network with me. The email claimed that 'Dr Auffahrt' had a D.Litt in Teaching Education (and the publisher's website suggested she also held a Ph.D. from University of Pennsylvania and a Master of Arts from Ottawa University, USA [sic]).

However, when I did some checking-up (details are given in the original post), as far as I could tell, there was no such person as Dr Nicoleta Auffahrt.

Now, in the email from <legals@globaljournals.org>, Global Journals Incorporated were not denying that they were sending out letters from non-existent academics – they readily acknowledged that – but they still seemed to think it was bad form of me to highlight this as if it were in some sense questionable. According to Global Journals,

"…the use of alternate identities by our editors and reviewers to engage potential authors. This practice is not only commonplace for privacy preservation on the internet but is also legally sanctioned in jurisdictions such as Delaware and [sic] the United States, where our corporation is duly registered. The allegations you posit, suggesting unethical conduct on the part of our representatives, are devoid of factual basis and amount to a direct assault on our distinguished reputation, painstakingly cultivated over two decades.

Specifically, your blog post criticizing our editors' and reviewers' marketing strategies casts malicious aspersions on their integrity and wrongly implies unethical conduct.  Thus, the use of virtual identities for initial outreach efforts is lawful in Delaware, United States, where our company is incorporated and is commonly employed worldwide for privacy and safety."

email from legals@globaljournals.org, 13th February, 2024

The 'dispute', then, was not over whether Global Journals sent emails signed by non-existent editors – they freely agreed they did so (and suggested they also sent marketing emails from non-existent reviewers!) – but whether, or not, it was unfair of me to suggest that such deception amounted to something dishonest, inappropriate or unethical.

The damage done

Now anyone who writes a blog (or anything else for public consumption) is likely to hope some people will read it and that it might in some small way influence them. I was aware of people commenting on my post to the extent that they had already been sceptical about approaches from 'Dr Auffahrt' and other imaginary Global Journals editors, and had found it useful that I had looked into the (non)existence of Auffahrt.

So, I can readily believe that perhaps Global Journals have lost a few 'customers' who might share my view that it is not desirable to do business with a publisher that seeks to deceive potential authors by pretending imaginary editors have a particular interest in their work. Even if Global Journals thinks that sending such invitation emails is "commonplace", "legally sanctioned" and "lawful" (and even if editors who work for Global Journals for some reason feel a need to hide their identities 'for privacy and safety' when academic editors of most academic journals do their best to advertise their appointments to such positions), I can well believe there are other scholars out there who might share my view that misrepresenting yourself to someone is not a promising way to initiate a meaningful, productive relationship.

However, according to Global Journals,

"Your allegations are baseless and directly harm our company's reputation, resulting in substantial financial losses.

The implications of your actions have been far-reaching, causing substantial financial detriment to our corporation, quantified in significant revenue losses."

email from legals@globaljournals.org, 13th February, 2024

So, supposedly, enough people

  • (i) seeking outlets for their manuscripts, and
  • (ii) receiving the emails from fictitious editors, had
  • (iii) read my blog, and
  • (iv) accordingly decided to give Global Journals' publications a miss,

for them to claim that I had caused:

  • substantial financial losses
  • substantial financial detriment
  • significant revenue losses

And this was supposedly due to one post on a retired teacher's personal blog?

I felt it was, let me suggest, unlikely that this claim was correct – and even more unlikely that Global Journals would be able to produce any convincing evidence to substantiate it (for example, in a Court of Law – see below).

'Demands'

The email claimed I had defamed Global Journals by calling out their (in my view, dubious) practices:

"Given your role as the editor of several journals that are in direct competition with our publications, your statements could be construed as defamatory, motivated by competitive bias, and, thus, carry severe legal consequences….

This behavior not only contravenes professional ethics but also breaches UK defamation law."

email from legals@globaljournals.org, 13th February, 2024

Their letter specified a number of journals they considered me to be an editor of. I have not been a journal editor for some years. It seems that, despite representing an international publisher of academic journals, the author of the email did not appreciate the difference between formally contracted editors (who could therefore be considered to have a financial interest in a journal they edit) and those who serve unpaid on journal boards in a purely advisory capacity.

A section of the email headed 'Demands' told me:

  • Cease & Desist: You must immediately stop publishing defamatory content about Global Journals, our editors, and our practices
  • Content Removal: The offending blog post must be entirely deleted from your website within 48 hours.
  • Formal Retraction: We strongly recommend issuing a retraction on your website to mitigate damages.

Of course, if Global Journals' email had persuaded me that my post had been unfair to them (and certainly if it had persuaded me that it was defamatory) I would have been very keen to quickly take action to put matters right.

But, to my mind, the most relevant part of their email was the confirmation that the reason that I had not been able to find any evidence of an academic record for 'Dr Nicoleta Auffahrt' was that she had never existed. She was a fiction, or as Global Journals prefer to phrase the matter, one of the 'alternate identities' they employ to disguise (in order to 'protect') the actual identities of their editors (and reviewers).

The threat

This would all have been mildly amusing, had it not been for the threat of legal action. The email from <legals@globaljournals.org> warned me that

"Failure to comply will immediately initiate legal action in the United Kingdom. We will seek substantial damages for losses incurred and decisively pursue all legal costs.

This letter constitutes a formal legal notice, and non-compliance will necessitate legal action in the UK, USA, and India, with all associated costs, including but not limited to legal fees, being recovered from you.

We strongly advise you to take this notice with the utmost seriousness and to seek legal counsel to fully comprehend the ramifications of your published content and the potential legal proceedings that may ensue.

We anticipate your prompt action to rectify this situation, and we expect your full compliance."

email from legals@globaljournals.org, 13th February, 2024

Now I will happily admit that was quite scary. I am lucky that, even though I had to retire early on health grounds, I had built up sufficient pension to be able to live comfortably enough. But here was a global corporation claiming that I had caused it significant and substantial financial losses which it intended to recover by suing me. I imagine that substantial financial losses of a global publisher are some orders of magnitude greater than any funds I may have left in savings for a rainy day.

The sensible, pragmatic part of me thought that it would be very easy to take down one web-page, apologise, and hopefully all would be forgotten. Surely that is the obvious thing to do, even if one thinks that any such legal action has a small chance of succeeding? What is, say, a 1% chance of being financially ruined against deleting one post from a blog?

Global Journals' email suggested that I take legal advice – which might imply that they were confident in having a case against me (why send me to a lawyer who would tell me otherwise?) but of course legal advice costs money, and 'unpublishing' a blog post does not. I suspected that was a bluff.

Moreover, there is another part of me which is the self-righteous, campaigning, principled me that really hates such ploys as lying and bullying, and is naive enough to believe the world would be a better place without 'those two impostors'. As 'alternate' identities are in play, if taken to court I might want the fictitious lawyer David Main from Castleton & Main handling my case: someone who could be just as arrogantly pompous and self-righteous as myself!

A defence?

I have already suggested that I did not think it at all likely that any damage I had caused to Global Journals could really be large enough to substantially damage their business (certainly, unless it was really very, very flaky to start with, such that a proverbial final straw might be enough to break the poor camel's back); and that it seemed incredible that they might be able to produce evidence to persuade a Court that enough people reading my blog had been sufficiently influenced to bring about any such significant losses.

However, the critical factor in my thinking was what is meant by defamation. Global Journals helpfully informed me that:

"Under the Defamation Act 2013, a statement is considered defamatory if it:

  • Causes or is likely to cause serious harm to an individual or company's reputation.
  • Refers to even an unidentifiable [*] person connecting with an entity.
  • Is published (communicated) to a third party."
email from legals@globaljournals.org, 13th February, 2024

(* And of course a person would be 'unidentifiable' if they disguised their identity behind a fake name and qualifications.)

My post certainly referred to a person pretending to be one highly qualified Dr Nicoleta Auffahrt who claimed an association with Global Journals, and it was a form of publication. So, would a court consider my post "causes or is likely to cause serious harm to an individual or company's reputation"?

It might be reasonable to suggest it led to some very small harm (loss of a few submissions, perhaps), certainly. But serious harm? To an established global corporation?

The best defence to a defamation claim

Of course, Global Journals failed to mention one key criterion for any published statement to be considered defamatory: it has to be untrue. No matter how bad the things you accuse someone of, that is not defamation unless you are wrong. You cannot defame Adolf Hitler by claiming he was the leader of an evil regime which carried out genocide, and arranged the murder of a great many men, women and children simply because of a hateful and unscientific belief in human 'races' and racial 'purity'.

Global Journals could only successfully sue me, and potentially ruin me, if they could show I had made claims about their corporation that were both damaging and untrue. Yet, Global Journals confirmed my exposé was correct: editors who sign (at least some of) their emails do not exist.


It is a defence to an action for defamation for the defendant to show that the imputation conveyed by the statement complained of is substantially true.

Defamation Act 2013

Any case Global Journals brought would therefore presumably rest, not on that agreed fact, but on what I suggested about this being unethical, improper, and misleading. These were my interpretations, and I think anyone reading the blog could either agree with them or not. The factual basis of the post was that Global Journals were sending out emails from a Dr Nicoleta Auffahrt who claimed to have a personal interest in my work, and to have discussed it with her colleagues, when such a person did not seem to exist; and Global Journals were not disputing that fact – rather, they were confirming in writing that this was indeed how they proceeded. This was part of Global Journals' "marketing strategies…our editors' and reviewers' [sic] marketing strategies".

Surely, anyone reading the blog who, like Global Journals Incorporated, thought it was fine to send out such fictional invitation emails would have no reason to change their attitude to Global Journals; and only those who agreed with me that this was inappropriate for an academic publisher would be likely to behave accordingly and avoid sending them submissions.

A revised approach

So, I decided not to take down my post (at least, not yet) but to spend time writing a robust response to the Global Journals' legal officer – that is, to 'call their bluff' as it were. (I've reproduced my message below, in 'Annexe 1'.) This took up time and energy, but if Global Journals' legal team thought an 'Immediate Cease and Desist Demand' was well-motivated, then it deserved a considered response.

My reply led to a response within hours, which had a rather different tone. So, the next morning I faced a new communication from <legals@globaljournals.org>, again signed by the Chief Legal Officer of Global Journals Incorporated. This reiterated a key point from the original 'cease and desist' notice,

"The practice of using alternate identities, as mentioned in our initial letter, is a measure taken strictly for privacy and security reasons on the internet. We ensure that all communications, including those from alternate identities, are truthful and transparent about the nature and purpose of the outreach. Contrary to the allegation, we do not endorse or engage in the dissemination of false or misleading information."

email from legals@globaljournals.org, 14th February, 2024

Now I could see myself getting into an involved argument here. The original approach, sent from <chiefauthor@socialscienceresearch.org> and supposedly from a Dr Nicoleta Auffahrt, did invite me to submit work to a journal, but this was presented almost as an "oh, and by the way…" clause:

"I am writing this email with regard to your research paper, 'Secondary Studentso [sic] Values and Perceptions of Science-Related Careers Responses to Vignette-Based Scenarios' I read it and felt that your work is worthy of admiration. I have shared the finding of the paper with my colleagues. Other scholars of our research community have also commended them. It shows your potential to influence and inspire fellow researchers and scholars.

Your quest to explore dimensions in your field that matches our journal's scope compels me to know more about your current research work. I can also connect you with our network of eminent researchers of your stream, along with recognizing your university.

Additionally, as I am also Managing Editor at Global Journals, I cordially invite you to send your future research articles/papers for publication in Global Journal of Human-Social Science, CrossRef DOI: 10.34257/GJHSS."

email from chiefauthor@socialscienceresearch.org, 20th January 2023

Now I accept that I was not fooled by this (which is why I investigated the supposed author), and in any case I suspect that my university (i.e., the University of Cambridge) probably does not need recognition from Global Journals.

Perhaps this was never intended to mislead the recipient. Perhaps, now that we all live in a post-truth world, any recipient should simply have smiled at the conceit, realising that even if Auffahrt existed, we were not meant to take the claims about her reading and admiring the recipient's work seriously.

How spamming works

But I suspect that the whole point of seemingly personalised approaches like this (apart from disguising an email mail shot which is in breach of UK regulations on mass marketing) is that if one in ten, or one in twenty, or even one in a hundred, of the recipients are fooled (in the sense of thinking someone really has read their work, and really does think it is of sufficient merit to seek out that scholar, and really wants to network with them), then this hooks enough potential customers to justify the effort commercially. The minimal cost of sending thousands of such invitations is easily justified if one recipient submits some work to the publisher and pays the cost of US$ 1126 * for publication. That is how spam emails work – most people know they are not to be taken at face value, but it only needs a few people to be taken in to generate profit.
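To make that arithmetic concrete, here is a minimal sketch (in Python, using illustrative numbers I have assumed – only the US$ 1126 fee comes from the post) of why mass mailing can pay even at vanishingly small response rates:

```python
# Spam-economics sketch: all figures except the publication fee are
# assumed for illustration, not actual Global Journals data.

emails_sent = 10_000        # size of a hypothetical mail shot
cost_per_email = 0.001      # assumed marginal cost of one email (USD)
fee_per_acceptance = 1126   # publication charge quoted in the post (USD)

campaign_cost = emails_sent * cost_per_email

# Break-even response rate: the fraction of recipients who must be
# taken in (and pay the fee) for the mail shot to cover its costs.
break_even_rate = campaign_cost / (emails_sent * fee_per_acceptance)

print(f"Campaign cost: ${campaign_cost:.2f}")
print(f"Break-even response rate: {break_even_rate:.6%}")
# With these assumed figures, even one paying author per 10,000
# recipients (0.01%) returns $1,126 against a cost of about $10.
```

On these (assumed) numbers, the campaign breaks even if roughly one recipient in a million pays the fee – which is why the senders can afford for almost everyone to see through the pretence.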

At least Global Journals were no longer explicitly threatening court action (perhaps, bluff called?), but,

"Our concern remains that the content published on your blog, which criticizes our marketing strategies and operational practices, could be interpreted as defamatory under this legal standard. While we acknowledge your right to express personal opinions and critiques, we must also protect our corporate reputation against statements that we believe to be unfounded and potentially damaging."

email from legals@globaljournals.org, 14th February, 2024

They were still looking towards a "resolution". But now they wanted to invite me to a meeting to discuss the 'issues' and referred to "Collaborative Efforts" whereby "we can work together to promote ethical practices in academic publishing and contribute positively to the scholarly community".

This was a clever strategy: I was relieved that immediate court proceedings were not being explicitly threatened now, and, as an academic who claims to value dialogue, I was being invited to talk – and it was even being hinted that perhaps the corporation could benefit from my advice on how to ensure their procedures were ethical.

Having replied immediately to the first email from <legals@globaljournals.org>, I decided that now I needed some 'time out' to think. I wrote back to acknowledge receipt of their message, and to tell them I would be replying, but not immediately.


If a publisher acknowledges that it sends out emails from fictitious editors, why accept the authenticity of an email claiming to be from its 'legal department'? (original images by Peggy_Marco and Gordon Johnson, from Pixabay)


The publisher who cried wolf

The story of the boy who cried wolf tells of a young shepherd, looking after the sheep, who called out the villagers to defend against the wolf without good cause. Eventually, when the wolf actually came along to feed on the sheep, the boy again cried "wolf!" – but no one came to help, because he had lied before and was no longer considered trustworthy. [https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf]

Over the next few days I was composing possible responses in my head. An initial feeling that "at least they are looking to be reasonable, I suppose I should give them the benefit of the doubt" soon hardened.

  • Would a global corporation "cultivated over two decades" really need, or want, to engage in authentic dialogue to learn from me why it might not be best to promote their journals with fictitious editors?
  • Were they looking for a genuine meeting of minds, or did they want to talk only to try to bamboozle me with lawyerese – looking to get concessions out of me by putting me under pressure?

This brought to mind a short period when I had acted as the National Union of Teachers representative at the Comprehensive School where I taught. On a Friday afternoon, I had a message from the head teacher that he would like to see me before I went home. I went to his office (pretty tired at the end of the working week) to find him there flanked by two deputies. He made some points about why the school management wanted such-and-such. I politely explained why the teaching staff had decided they did not agree to whatever-it-was. He then explained why we were wrong (from his perspective) and repeated his initial points. I politely pointed out that I understood his perspective, but that what he wanted did not look desirable from the teachers' perspective. He then told me, again, why we were wrong and, again, why his position was the one to be adopted.

We went through this cycle several times: my respectfully accepting that what he wanted made sense to management but not to the teaching staff; and his then explaining how I must be wrong because I did not accept his obviously correct opinion. It seemed clear to me that there was no intention to have a meaningful discussion, just an attempt to wear me down by outnumbering me at a point in the week when I was especially vulnerable. So, I made my excuses and went home. Since then I have been wary of mooted meetings with people who do not seem to have any flexibility in the outcome they seek.


Homer Simpson has a moment of insight (source: 20th Century Fox)


A 'd'oh!' moment

I decided to leave my reply to the weekend, although I still found myself mentally drafting possible points to include. But then I (rather belatedly!) had a moment of insight:

  • My first contact with Global Journals was a marketing email that came from the email address: <chiefauthor@socialscienceresearch.org>
  • That email was signed by a Dr Nicoleta Auffahrt who claimed to be a highly qualified managing editor – but who did not exist
  • I was now receiving emails from an address <legals@globaljournals.org>…
  • …signed by someone who claimed to be the corporation's Chief Legal Officer

But ('d'oh!') if Global Journals use misleading email addresses and fictional employee identities, then

  • how did I know the recent emails were really from a legal department and not just the marketing people again?
  • how did I know the email was written by someone who had the name and title used in the signature?

I thought it was worth doing a web search on the name given in the email, to see if I could confirm the identity of Global Journals' Chief Legal Officer – perhaps someone with such a senior position would be reported somewhere on the web? I was not over-hopeful, as India (where the Chief Legal Officer was supposed to be based) is the most populous country in the world, and I suspected I would find numerous lawyers there with that name.

What I actually found was no record at all of anyone with the name of the supposed Chief Legal Officer. That did not prove my supposed correspondent was not a real person, but it was highly suggestive. Lawyers may not tend to be as visible on the web as academics, but there cannot be many senior professionals (such as the chief legal officer of an international company) who leave no digital trace at all that a web search can find.



This hardened my resolve. Since I now suspected that the emails from <legals@globaljournals.org> were also not open about who I was really corresponding with, I would write back to close the correspondence pending any good evidence that I really was being contacted by the legal department (see 'Annexe 2'!)

I was rather disappointed with myself. I had been contacted by a corporation that was happy to use fictional identities, and even readily admitted it, but I then took communication at face value as being what it claimed.

I was brought up to be honest and truthful, and believe lying and deceit are only justifiable in extremis. Society can only work harmoniously – indeed, at all – if our default assumption is that the people we interact with are who they say they are, and that they at least believe the claims they make. If we strictly follow the advice of another fictional character, the investigative agent Fox Mulder, and 'trust no one', we soon come to a complete state of inaction and paranoia.

On the other hand, as the proverb suggests,

  • fool me once, shame on you
  • fool me twice, shame on me

* according to the web page https://globaljournals.org/journals/human-social-science/author-charges, accessed on 2nd March 2024


Annexe 1

Threat of legal action by Global Journals Incorporated

Dear Mr ********

Thank you for your email of 13th instant, entitled "Immediate Cease and Desist Demand – Defamatory and Harmful Content". I note the contents, and that you consider this a "formal legal notice".


Factual inaccuracies

Firstly, may I point out that, contrary to your letter, I am not currently an editor of any academic journals [and to avoid any possibility of misunderstanding, this includes International Journal of Science Education (Routledge/Taylor & Francis); Foundations of Chemistry (Springer); Teacher Development (Routledge/Taylor & Francis); Centre for Education Policy Studies Journal (Faculty of Education, University of Ljubljana); Disciplinary and Interdisciplinary Science Education Research (Springer)] and my only active editorial position is as a book series editor-in-chief. You seem to be confusing journal editors (who have a formal contractual role, for which they normally receive consideration) with other senior academics who, in roles such as editorial board members, offer free advice to journal editors and publishers. I currently have no financial arrangements with any of the journals that you list.

I do indeed have a bias, though I would not consider this a prejudice. My bias is towards those journals which follow honest and open processes led by named academics with well-established reputations; and against journals which use dubious practices, such as those that send untruthful approaches to potential contributors, and hide their actual editors behind imaginary personas with faked academic qualifications. I would hope most other serious scholars would share this bias.  


Breach of professional ethics

If you truly believe, as you suggest, that my behaviour "contravenes professional ethics" then, as I am a Fellow of two Learned Scientific Societies that also operate as Professional Bodies (namely, the Royal Society of Chemistry and the Institute of Physics), you should refer the matter to these two institutions so they can investigate whether I have indeed broken the professional codes of ethics that I am expected to uphold. I would expect to offer a strong defence of my actions. I actually believe that, as a senior academic who has worked as a journal editor and has taught novice researchers about publication ethics on post-graduate courses, there is an ethical imperative for me to call out examples of scholarly malpractice, such as those you acknowledge being part of Global Journals Incorporated's business practices, where I come across them.


Damage caused by my publicising Global Journals Incorporated's dishonest communications

I note you suggest I may be subject to court action under the Defamation Act 2013 because my blog posting has been "resulting in substantial financial losses…far-reaching, causing substantial financial detriment to our corporation, quantified in significant revenue losses". I find it very unlikely that a retired academic's personal blog can have caused such a substantive effect (and even more unlikely this could be demonstrated to the satisfaction of a court).

To the extent that my personal blog may have influenced some authors to avoid engaging with your company, this is not due to my claims about your company's dishonest practice in isolation, but is due to a combination of factors: that is,

a) your company choosing to use dishonest and untrue communications (as you acknowledge in your email); AND
b) my pointing this out in my blog; AND
c) readers agreeing that they consider such practices as unethical and inappropriate in scholarly publishing.

Anyone reading my blog who feels 'that's okay, companies are allowed to use fictitious editors with made-up qualifications who pretend to have liked and recommended my work' is not going to change their submission intentions. Global Journals Incorporated's use of marketing emails containing false claims may indeed, as you suggest, be strictly legal in some jurisdictions such as "Delaware and [sic] the United States", but it is dishonest, and to my mind (if not yours) that makes it unethical. The academic literature will be worth nothing if scholars do not adhere to principles of openness and honesty – as who can trust anything in journals that do not hold truthfulness to be an important value? In any case, I suspect your marketing emails are in breach of UK regulations governing electronic mail marketing which both (i) do not allow you to send such email to "prospective customers or new contacts" unless they have opted in, and (ii) specify that even then "you must not disguise or conceal your identity".


Defamation Act 2013

I am struggling to understand the basis of your complaint, as you seem to object to my criticising "the use of alternate [sic., i.e., fictitious] identities by our editors and reviewers to engage potential authors", yet you confirm that Global Journals Incorporated is doing just that. I am not a lawyer (but I would assume that you, as a company's Chief Legal Officer, are legally qualified), and I do not understand how you expect to make a claim under the law of defamation if you accept that Global Journals Incorporated is indeed sending out invitations to academics from fictitious editors with imaginary doctorates who supposedly have read and been impressed by the work of those they are contacting. These emails contain lies, as you seem to acknowledge. There would seem to be no basis there for an action of defamation!

Perhaps I have misunderstood, and you feel that there is/are one or more other statement(s) in my blog post which is/are both:

a) factually inaccurate; and
b) potentially damaging.

If this is so, I WOULD CERTAINLY BE PREPARED TO ADDRESS THIS, AND TO DO SO AS A PRIORITY. But you would need to specify where I have made a false claim, and convince me it is false. So far, you have only objected to (i) statements about Global Journals Incorporated's practices that you seem to acknowledge are true, and to (ii) my opinion (which I am fairly sure I am entitled to) on the ethical status of those practices.

Otherwise, I do not see any validity underpinning your 'Immediate Cease and Desist Demand' request. I would assume that you, as a Chief Legal Officer, would appreciate that true statements are not considered defamatory in English Law (as "It is a defence to an action for defamation for the defendant to show that the imputation conveyed by the statement complained of is substantially true"), in which case your communication (confirming, as I imagine any court would recognise, the truth of 'the statement complained of') would seem to be an attempt to tempt me into acting out of fear of malicious and spurious court action ("Failure to comply will immediately initiate legal action in the United Kingdom. We will seek substantial damages for losses incurred and decisively pursue all legal costs"), rather than in accord with the facts of the case at hand.

I strongly suspect that even if you could find an officer of the English courts who would take on a case on the basis you have outlined, the case would, on request to the court, be summarily struck out under rule 3.4 of the Civil Procedure Rules. I therefore think that a threat by a corporation to come after an individual retired teacher for "substantial damages" and "legal costs" for speaking truth about unethical practices paints Global Journals Incorporated in a very poor light, suggesting that its legal team either does not understand English law, or is prepared to make misleading representations of it to try to cover up the company's dubious publishing practices.

I look forward to your clarification of any public statements of mine regarding Global Journals Incorporated that you feel are factually incorrect, and that I should look to address. I can assure you that I will make all reasonable efforts to ensure apparent factual claims on my website are indeed in accord with the facts. Perhaps Global Journals Incorporated might consider adopting a similar policy in regard to its communications with potential authors?

Best wishes

Keith


Annexe 2

Dear Mr. ********

Thank you again for your follow-up email of 14th instant.

I have now taken time to carefully re-read the blog post that you refer to, i.e., at https://science-education-research.com/earning-a-higher-doctorate-without-doing-any-research/. I am confident that any factual statements made there are accurate to the best of my knowledge. I am not infallible of course, and repeat that I am open to considering evidence that might persuade me that a correction is needed due to an error of fact. I have also checked on the reasonableness of those statements that are clearly intended to be understood as my own opinions and interpretations rather than objective statements of fact.

However, the crux of my blog post is the suggestion that communications I received from Global Journals were signed by a fictitious academic. You have in your recent emails (13th, 14th inst.) confirmed that Global Journals Incorporated uses 'alternate identities' in its communications. You suggest that such a policy is both legal and justified ("a measure taken strictly for privacy and security reasons"). In reviewing my published post, I find no suggestion that I was claiming this practice was illegal or unlawful. (As I suggested in my previous reply, I do suspect that Global Journals Incorporated is breaking UK regulations in sending unsolicited email marketing, as I had not signed up for your marketing emails and had no previous business with your organisation. I suspect that is relevant, as this may be why Global Journals Incorporated chooses to seek to disguise these emails as not being the widely broadcast marketing they are {i.e., what most recipients might consider spam} but a personal contact by a fellow academic {from an organisation with the domain "socialscienceresearch.org"} who is impressed by a specific scholar's work and wishes to engage with that individual at a personal level {something I believe may also be a breach of UK regulations on email marketing which do not allow you to "disguise or conceal your identity"}.) However, to reiterate and avoid any possible doubt, I do not claim that falsifying the identity of an email author is, in itself, unlawful: I am not qualified to comment on that, and I offer no opinion on the legal status of this dubious practice.


Dialogue

I took some time to reflect on your offer to enter into further dialogue. As you will have suspected, this appeals in principle to a scholar. However, I considered (a) that I wrote twice in response to the initial email from chiefauthor@socialscienceresearch.org (on 20th January and 24th January 2023, that is, before composing the blog post) without receiving clarification from Global Journals that I was corresponding with an 'alternate identity'; and (b) that your own initial contact earlier this week was not focused on dialogue, but comprised a threat of legal action (albeit one that any lawyer would surely have realised was hollow). Entering into a meaningful dialogue would require trust on the part of both parties, and Global Journals Incorporated's behaviour to date – misrepresentation and threat – does not encourage me to assume Global Journals Incorporated's good faith.


Collaborative efforts

I retired from my teaching role in 2020 on health grounds and although I still do some pro bono work for journals that I hold in high regard and have an association with, I am not seeking further consultancy opportunities. However, I would recommend that if Global Journals Incorporated is serious about adopting ethical publishing practices, then it should consider the work of COPE, the Committee on Publication Ethics (https://publicationethics.org), which is an organisation of publishers and others involved in academic publishing. This organisation offers a forum for sharing practices and seeking guidance on best practice.


Defamation

In your follow-up letter (14th instant), you once again suggest that my blog post "could be interpreted as defamatory", again ignoring the fundamental legal principle that true statements cannot defame (which is why I am confident that it could NOT be REASONABLY interpreted as defamatory). To reiterate, I believe that the factual statements in my public post are accurate, and that opinions and interpretations are reasonable and are not presented as if facts.

Truthful communications

You claim that "all communications [from Global Journals], including those from alternate identities, are truthful and transparent about the nature and purpose of the outreach". For me to be persuaded of that, you would need to demonstrate that the email I was sent on 20th January 2023, although not written by the person who signed it:

a) was written by a person holding that status (managing editor) in the organisation;
b) was written by someone who held the same (or substantially the same) academic qualifications (M.A., Ph.D., D.Litt in Teaching Education) as claimed for the fictitious 'alternate identity';
c) was written by someone who had actually read the work cited;
d) was written by someone who had actually discussed the merits of the work with her(/his) colleagues, including others who had specifically commended the work;
e) was written by someone who genuinely held the work, and through the work its author, in the esteem suggested.

Unless that is so, then clearly these were misrepresentations that were not truthful and were designed to misdirect the recipient into believing they had been contacted by someone with a strong personal interest in their work, when any personal interest was actually phoney. (That is a very common practice which is indicative of predatory journals.) You will appreciate that the comments from colleagues around the world, who took time to respond to my post to the effect that they have received substantially identical emails, only reinforce my suspicions about the truthfulness of these statements. However, if I was provided with a suitable, properly notarised, affidavit from the true author, affirming all of these points (a-e), I would be prepared to add a statement to acknowledge this clarification at the end of the post.

Transparency

One of the key quality indicators of a good journal is that its editors are highly qualified and recognised as leading academics in a relevant field. I cannot consider any email from an academic publisher that claims editor qualifications which have never been awarded (clearly no university has awarded anyone called Nicoleta Auffahrt the degrees Global Journals claimed she had earned) to be 'transparent'.


Privacy

You justify the use of false identities for "editors and reviewers" as offering "privacy and protection" to your colleagues. Academic reputations and careers are based on certifiable scholarly contributions such as publications (sic, which by definition are in the public record and open to public interrogation) and publicly verifiable roles in academic journals. There is no logic to a genuine academic not wishing to be publicly associated with a journal where they have an editorial role (unless they believe association with that specific journal would actively harm their academic reputation because they know it is what is commonly called a predatory journal). Academic journal editorship is a PUBLIC role. Global Journals is a PUBLIsher PUBLIshing academic PUBLICations, where disguising the identity of editors should be considered anathema. Privacy, in that regard, would not be desirable for the scholar. Similarly, reviewers get kudos for their work for academic journals, and offer their specific referee reports through a procedure that can be (and usually is) anonymised to anyone outside the editorial office, so there would be no rationale for hiding the identity of those acting as referees unless they were ill-qualified for the role.


Moving forward

Finally, I am aware that when communicating with an organisation that believes it is acceptable to use 'alternate identities' in its communications, I have no assurance of the identity of my correspondent. I have no more reason to trust that your emails were written by a 'Mr ******* *******' than that Global Journals' earlier email was written by a 'Dr Nicoleta Auffahrt'; nor do I have any better reason to believe that your emails were written by someone who is actually the company's Chief Legal Officer, than I had to believe the earlier email was written by someone who really was Managing Editor. Evidence of a real lawyer called Mr ******* ******* seems to be as lacking on the internet as evidence of the fictitious scholar Dr Nicoleta Auffahrt.

Of course, the usual convention is to believe that people who contact us will be honest and open, until we have reason to doubt that. I have good reason to doubt whether communications from Global Journals can be trusted as honest in this sense. After all, if an email from the address "chiefauthor@socialscienceresearch.org" was actually written and sent by someone in Global Journals' marketing department, what reason do I have to trust that an email from "legals@globaljournals.org" does indeed originate from the legal department, rather than being another attempt at misdirection? If you really are Mr *******, and you really are Chief Legal Officer of Global Journals, then please excuse my suspicions – but I believe they must be judged reasonable in the circumstances.


Closing the correspondence

To reiterate, I have not claimed, and do not claim, that Global Journals' use of fictitious correspondents is in itself necessarily illegal (I offer no view on that), but I maintain that it is a dishonest practice; one that cannot be justified in academic publishing, where a concern for openness and the truth is paramount; and one which deliberately seeks to misrepresent a mass marketing email as a personal approach based on a close professional engagement with a scholar's work. In that, it reflects a very common practice of predatory journals. You have offered me no reasons to revise my opinions of this practice.

I am now considering this correspondence closed, at least unless and until I am offered compelling reasons to trust the identity and role of my correspondent, along with good grounds to consider any remedial action is needed on my part.

I have notified the Legal Services Division of my University (https://www.legal.admin.cam.ac.uk) of the 'Immediate Cease and Desist Demand' sent earlier this week, and will inform them of any further approaches supposedly from Global Journals' legal team.


Best wishes

Keith


The sugger strikes back!

An update on the 'first annual International Survey of Research Centres and Institutes'


Keith S. Taber (masquerading as a learned academic)


if he wanted me to admit I had been wrong, Hussain could direct me to the released survey results and assure me that the data collected for the survey was not being used for other purposes. That is, he should have given me grounds to think the survey was a genuine piece of research and not 'sugging'


Some months ago I published an article in this blog about a message I received from an organisation called Acaudio – which has a website where academics can post audio recordings promoting their research – inviting me to participate in "the first annual International Survey of Research Centres and Institutes". I was suspicious of this invitation for a number of reasons, as I discuss in 'The first annual International Survey of Gullible Research Centres and Institutes'.

Several things suggested to me that this was not a genuine piece of academic research, including the commitment that "We will release the results over the next month" which seemed so unrealistic as to have been written either by someone with no experience of collecting and analysing large scale survey data – or someone with no intention of actually following through on the claim.

Sugging?

Having taken a look at the survey questions, I felt pretty sure this was an example of what has been labelled 'sugging'. Sugging is a widely recognised, and indeed widely adopted, unethical practice of collecting marketing information by framing it as a survey. The Market Research Society explains that,

Sugging is a market research industry term, meaning 'selling under the guise of research'. Sugging occurs when individuals or companies pretend to be market researchers conducting a research, when in reality they are trying to build databases, generate sales leads or directly sell product or services….

The practices of sugging and frugging [fundraising under the guise of market research] bring discredit on the profession of research… and mislead members of the public when they are being asked for their co-operation…

Failing to clearly specify the purpose for which the data is being collected is also a breach of…the first principle of the Data Protection Act 1998.

https://www.mrs.org.uk/standards/suggingfaq

Although I thought the chances of the results of the first annual International Survey of Research Centres and Institutes actually being released within the month, or even within a few months to allow for a modest level of over-promising, were pretty minuscule, I did think I should wait a few months and then do a search to see if such a report had appeared. I did not think I was likely to find such a report released into the public domain, but any scientist has to be open-minded enough to consider they might be wrong – and certainly in my own case I've collected enough empirical evidence over the years to know I am not just, in principle, fallible.

Acaudio doth protest too much, methinks

But (being fallible) I'd rather forgotten about this and had not got round to doing a web search. Until, that is, I was prompted to do so by receiving an email from the company founder, Hussain Ayed, who had had his attention drawn to my blog, and was – understandably perhaps – not happy about my criticisms:



Hussain's letter did not address my specific points from the blog (as he did not want to "get into the nitty gritty of it all"), but assured me his company was genuinely trying to do useful work, and there was no scamming.

Of course, I had not suggested Acaudio, the organisation, was itself a 'scam': in my earlier article I had pointed out that Acaudio was offering a free, open-access service which was likely to be useful to academics – and even briefly pointed out some positive features of their website.

But Acaudio's 'survey' was a different matter. It did not meet the basic requirements for a serious academic study, and it asked questions that seemed clearly designed to link to potential selling points for a company offering services to increase research impact (such as, perhaps, Acaudio).



And it promised a fantastic time-scale. Perhaps a very large organisation, with staff fully dedicated to analysis and reporting, could have released international survey results within a month of collecting data – perhaps? But Acaudio was a company with a single registered officer, which reported employing one person.

Given the scale of the organisation, what Acaudio have achieved with their website in a relatively short time is highly impressive. But…

…where is that survey report?

I replied to Hussain, as below.

Dear Hussain Ayed

Thank you for your message.

I have not written "a comprehensive attack on [your] company" and do not have a sufficient knowledge-base to have done so. I have indeed, however, published a blog article criticising your marketing techniques based on the direct evidence in messages you have sent me. In particular, I claimed that,

(i) that (despite being registered as a UK-based company) you did not adhere to the UK regulations concerning direct marketing. (I assume you are not seeking to challenge this, given the evidence of your own emails)

(ii) that you were also 'sugging': undertaking marketing under the guise of carrying out a survey.

If I understand your complaint, you are suggesting, in regard to point (ii), that you really were carrying out a survey for the public good (rather than to collect information for your own commercial purposes) and that any apparent failure of rigour in this regard actually resulted from a lack of relevant expertise within the company. If so, perhaps you will send me, or tell me where I can access, the published outcome of the survey (due to be available by the middle of June 2023 according to your earlier message). I have looked online for this, but a Google search (using the term "International Survey of Research Centres and Institutes") failed to locate the report.

Can you offer me an assurance that information collected for the survey was ONLY used for the analysis that led to the published survey report (assuming there is one you can point me to), and that this information was not retained by your organisation as a basis for contacting individuals with regard to your company's services? If you can offer appropriate assurances then I will be happy to add an inserted edit into the blog to include a statement along the lines that the company assures me that all information collected was only used for the purposes of producing a survey report, and was not retained or used in any other way by the company.

So, to summarise regarding point (ii), if this survey was not a scam, please (a) point me to the outcomes, and (b) give me these assurances about not collecting information under false premises.

You also have the right to reply directly. If you really think anything in my article amounted to "misleading bits of 'evidence' " then please do correct this. You are free to submit a response in the comments section at the bottom of the page. If you wish to do that, I will be happy to publish your reply (subject to my usual restrictions which I am sure should not be any impediment to you – so, I will not publish anything I think might be libellous of a third party, nor anything with obscenity/profanity etc. Sadly, I do sometimes have to reject comments of these kinds.)

I recognise that comments have less prominence than the blog article they follow, and that indeed some readers may not get that far in their engagement with an article. Therefore, if you do submit a reply I am happy to also add a statement at the HEAD of my article to point out to readers that there is a reply on behalf of the company beneath the article, so my readers see that notice BEFORE proceeding to read my own account. I am not looking for people/organisations to criticise for the sake of it, but have become concerned about the extent of unethical practice in the name of academic work (such as the marketing of predatory journals and conferences) and do point out some of the examples that come my way. I believe such bad practice is very damaging, and especially so for students who are new to the academic world, and for those working in under-resourced contexts who may be under extreme pressure to achieve 'tenure'. People spend their limited funds on getting published in journals that have no serious peer review (and so are not taken seriously by most academics), or presenting at conferences which 'invite' contributions from anyone prepared to pay the fees. I do not spend time looking for such bad practice: it arrives in my inbox on a fairly frequent basis.

Perhaps your intentions are indeed honourable, and perhaps you are doing good work. Perhaps you are indeed "working to tackle inequality in higher education and academia", which obviously would be valuable, although I am not sure how this is achieved by working with groups at Cambridge such as the Bioelectronic Systems Tech Group – unless you perhaps charge fees to those in wealthy institutions to allow you to offer a free service for those elsewhere? If you do: good on you. Even so, I would strongly suggest you 'clean up your act' as far as your marketing is concerned, and make sure your email campaigns are within the law. By failing to follow the regulations you present your organisation as either being unprofessional (giving the impression no one knows what they are doing) or dodgy (if you* know the regulations, but are choosing not to follow them). *I assume you are responsible for the marketing strategy, but even if someone else is doing this for you, I suspect you (as the only registered company officer) would be considered ultimately responsible for not following the regulations.

If you are genuine about wishing to learn more about undertaking quality surveys, there are many sources of information. My pages on research methods might be a place to get some introductory background, but if this is to be a major part of your company's activity I would really suggest you should employ someone with expertise, or retain a consultant who works in that area.

Thank you for the offer to work with you, but I am retired and have too many existing projects to work on – and in any case you should work with someone you genuinely respect, not someone that you consider only to "masquerade as a learned academic" and who has "shaky morals".

Best wishes

Keith

My key point was that if he wanted me to admit I had been wrong, Hussain could direct me to the released survey results and assure me that the data collected for the survey was not being used for other purposes. That is, he should have given me grounds to think the survey was a genuine piece of research and not 'sugging'.

The findings of the survey are 'reserved'

Later that day, I got the following reply:



So, it seems the research report that was supposed to have been released ("over the next month" – according to Acaudio's email dated 15th May 2023) was not available, and – furthermore – would not be made available to me.

  • A key principle of scientific research is that the outcomes are published – that is, made available to the public: and not "reserved" for certain people the researchers select!
  • A key feature of ethical research is that a commitment is made to make outcomes available (as Acaudio did) and this is followed through (as Acaudio did not).

What is the research data being used for?

Hussain also failed to offer any assurances that the data collected under the claim (pretence, surely) of carrying out survey research was not being used for commercial purposes – as a basis for evaluating the potential merits of approaching different respondents to tender for services. I cannot prove that Acaudio was using the collected information for such purposes, but if my suspicions were misplaced (and if Hussain really wanted to persuade me that the survey was not intended as a scam) it would have been very easy to simply include a sentence in his response to that effect – to have assured me that the research data was being analysed anonymously and handled separately from the company's marketing data with a suitable 'ethical wall' between.1

That is, Hussain could have simply got into enough of the "nitty gritty" to have offered an assurance of following an ethical protocol, instead of choosing to insult me…as I pointed out to him:-


Dear Hussain

Thank you for your message.

So, the 'survey' results (if indeed any such document actually exists) that you indicated to me would be released by mid-June are still not actually available in the public domain. As you say: 'Hmm'.

You are right that I would have no right to ask you to provide me with anything – except that YOU ASKED ME to believe I misjudged you, and to withdraw my public criticisms; and so I ASKED YOU to provide the evidence to persuade me by (i) proving there was a survey analysis with published results, and (ii) giving an assurance that you did not use, for your company's marketing purposes, data supposedly collected for publishable research. There is of course no reason why you should have provided either the results or the assurances, unless you actually did feel I had judged Acaudio too harshly and you wanted to give me reason to acknowledge this. The only thing that might give me "some sort of power over [you]" in this regard is your suggestion to me that I might wish to "take back the claims that [I] made". Can I remind you: you contacted me. You contacted me, unsolicited, in December 2022, and then again in May 2023. This morning, you contacted me again specifically to suggest that my inferences of wrong-doing were misjudged. But you will not back that up, so you have simply reinforced my earlier inferences.

For some reason that is not clear to me, you think that my mind is on money – that is presumably why I spend some of my valuable time highlighting poor academic practices on a personal website that brings in no income and is financed from my personal funds. Perhaps that is the company director finding it hard to get inside the mind of a retired teacher who worked his entire career in the public sector? (That is not meant as an insult – I probably have the reverse difficulty in understanding the motivations of the commercial mind. Perhaps that is why these are "things that are beyond [my] understanding"?) I do not have any problem with you setting up a company to make money (good luck to you if you work hard and treat people with due respect), and think it is perfectly possible for an organisation to both make money and produce public goods – I am not against commercial organisations per se.

My 'vested interests' relate to commitments to certain values that I think underpin both good science and academic activities more broadly. A key one is honesty (which is one fundamental aspect of treating people with due respect). We are all entitled (perhaps even have a duty?) to make the strongest arguments for our positions, but when people knowingly misrepresent (e.g., "We will release the results over the next month" but no publication is forthcoming) in order to advance their interests, this undermines the scholarly community. Anyone can be wrong. Anyone can be mistaken. Anyone can fail in a venture. (Such as promising a report, genuinely intending to produce one, but finding the task was more complex than anticipated. Had that been your response, I might have found this feasible. Instead, you promised to release the results, but now you claim you have "every right to ignore [my] request for the outcomes". Yes, that is so – if the commitment you made means nothing.) As long as we can trust each other to be open and honest, the system will eventually self-correct in cases when there are false (but honestly motivated) claims. Yet, these days, academics are flooded with offers and claims that are not mistaken, but deliberately misleading. That is what I find so troublesome that I take time to call out examples.

That may seem strange to you, but you have to remember I have worked as a school, college, and university teacher all my working life, so I identify at a very deep level with the basic values underpinning the quest for knowledge and learning. When I get an email from someone claiming they are doing a survey, but which seems to be an attempt to market services, I do take it personally. I do not like to be lied to. I do not like to be treated as a fool. And I do not like the thought that perhaps less experienced colleagues and graduate students may take such approaches at face value and not appreciate they are being scammed. Can does not equate to should: you may have "the ability to write and say what [you] want", but that does not mean you have the right to deliberately mislead people.

You say you will not be engaging with me any more. Fine. You started this correspondence with your unsolicited approaches. I will be very happy if you remove me from your marketing list (that I did not sign up for) and do not contact me again. That might be in both our interests.

And despite all this, I wish you well. Whatever your mistakes in the past, if you do genuinely wish to make a difference in the way you suggest, then I hope you are successful. But please, if you believe in your company and the contribution it can make, seek to be totally honest with potential clients. If you are in this for the long term, then developing trust and a strong reputation for ethical business practices will surely create a fund of social capital that will pay dividends as you build up the organisation. Whereas producing emails of the kind you have sent me today is likely to be counter-productive and just alienate people: using ad hominem points – I am masquerading as a learned academic, out of touch, arrogant, unfit and entitled; with shaky morals and vested interests; things are beyond my understanding; I write nonsense – simply suggests you have no substantive points to support your position. By doing this you automatically cede the higher ground. And, moreover, is that really the way you want your company represented in its communications?

Best wishes

Keith 


As I wrote above, Acaudio seem to be doing a really good job in setting up a platform where researchers can post accounts of their research – and, given the scale of the organisation, I assume much (if not all) of that is down to Hussain. That, he can be proud of.

However, using the appearance of an international survey as a cover for collecting data that can be used to market a company's services is widely recognised as a dishonest and unethical (if not illegal2) practice. I think he should be less proud of himself in that regard.

If Hussain still wants to maintain that his request for contributions to the first annual International Survey of Research Centres and Institutes was intended as a genuine attempt at academic research, rather than just a marketing scam, then he still has the option of publishing a report of the study so that the academic community can evaluate the extent to which the survey meets the norms of genuine research; and so that, at the very least, he will have met one key criterion of academic research (publication).

This would also show that Acaudio are prepared to meet their side of the contract they offered to potential respondents (i.e., please contribute to this survey – in consideration we will release the results over the next month). Any reputable business should be looking to make good on its promises.


Notes

1 The idea of an ethical wall (sometimes referred to as a 'Chinese wall') is important in businesses where there is the potential for conflicts of interest. Consider, for example, firms of lawyers that may have multiple clients, and where information offered in confidence by one client could have commercial value for another. The firm is expected to have protocols in place so that information about one client is neither leaked to another client, nor allowed to (deliberately or inadvertently) influence advice given to another client. To avoid inadvertent influence, it may be necessary to ensure staff working with one client are not involved in work for another client that may be seen to have conflicting interests.

A company may hire a market research organisation to carry out market research to inform them about future strategies – so the people analysing the data have no bias due to preferred outcomes, and no temptation to misuse the data for direct marketing purposes. The commissioned report will not identify particular respondents. There is then an ethical wall between the market researchers who report on the overall state of the market, and the client company's marketing and sales section.

My reference to the small size of Acaudio is not intended as an inherent criticism. My original point was that such a small company was unlikely to have the capacity to carry out a meaningful international survey (which does not imply the intention to do so was necessarily inauthentic – Acaudio might have simply overstretched itself).

However, a very small company might well have inherent difficulties in carrying out genuine research which did not leak information about specific respondents to those involved in sales.

Many surveys invite people to offer their email if they wish for feedback or to make themselves available for follow-up interviews – but offer an assurance the email address will not be used for other purposes, and need not be given to participate. Acaudio's survey required identifying information.2 This is a strong indicator that the primary purpose was not scholarly research.



2 The Data Protection Act 2018 concerns personal information:

"Everyone responsible for using personal data has to follow strict rules called 'data protection principles'. They must make sure the information is:

  • used fairly, lawfully and transparently
  • used for specified, explicit purposes
  • used in a way that is adequate, relevant and limited to only what is necessary
  • accurate and, where necessary, kept up to date
  • kept for no longer than is necessary
  • handled in a way that ensures appropriate security, including protection against unlawful or unauthorised processing, access, loss, destruction or damage"
GOV.UK

Acaudio's survey is nominally about research institutes, not individual people.

However, it asks questions such as

  • "How satisfied are you with…"
  • "How much time do you spend…"
  • "Do you feel like…"
  • "What are the biggest challenges you face…"
  • "Who do you feel is…"
  • "How effective do you think…"
  • "Do you agree…"
  • "What would you consider..."
  • "How much would you consider…"
  • "Would you be interested in…"
  • "How do you decide…"
  • "What do you hope…"

This is information about a person, moreover a person with a known email address:

" 'personal data' means any information relating to an identified or identifiable natural person ('data subject'); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier…"

Information Commissioner's Office

So, if information collected by this survey was used for purposes other than the survey itself –

  • say perhaps for identifying sales leads {e.g., "How satisfied are you with the level of awareness people have of your centre / institute?" "How effective do you think your current promotion methods are?"; "How important is building an audience for the work of the research centre / institute?"};
  • and/or profiling potential clients
    • in terms of level of resource that might be available to buy services {e.g., "How much would you consider to be a reasonable amount to spend on promotional activities?"},
    • or priorities for research impact strategies {e.g., "What mediums [sic] would you consider using to promote your research centre / institute?"; "Do you agree it is important to have a dedicated person to take care of promotional activities?"}

– would that not be a breach of UK data protection law?


The best science education journal

Where is the best place to publish science education research?


Keith S. Taber



Outlet | Description | Notes
International Journal of Science Education | Top-tier general international science education journal | Historically associated with the European Science Education Research Association
Science Education | Top-tier general international science education journal | –
Journal of Research in Science Teaching | Top-tier general international science education journal | Associated with NARST
Research in Science Education | Top-tier general international science education journal | Associated with the Australasian Science Education Research Association
Studies in Science Education | Leading journal for publishing in-depth reviews of topics in science education | –
Research in Science and Technological Education | Respected general international science education journal | –
International Journal of Science and Maths Education | Respected general international science education journal | Founded by the National Science and Technology Council, Taiwan
Science Education International | Publishes papers that focus on the teaching and learning of science in school settings ranging from early childhood to university education | Published by the International Council of Associations for Science Education
Science & Education | Has foci of historical, philosophical, and sociological perspectives on science education | Associated with the International History, Philosophy, and Science Teaching Group
Journal of Science Teacher Education | Concerned with the preparation and development of science teachers | Associated with the Association for Science Teacher Education
International Journal of Science Education, Part B – Communication and Public Engagement | Concerned with research into science communication and public engagement / understanding of science | –
Cultural Studies of Science Education | Concerned with science education as a cultural, cross-age, cross-class, and cross-disciplinary phenomenon | –
Journal of Science Education and Technology | Concerns the intersection between science education and technology | –
Disciplinary and Interdisciplinary Science Education Research | Concerned with science education within specific disciplines and between disciplines | Affiliated with the Faculty of Education, Beijing Normal University
Journal of Biological Education | For research specifically within biology education | Published for the Royal Society of Biology
Journal of Chemical Education | A long-standing journal of chemistry education, which includes a section for Chemistry Education Research papers | Published by the American Chemical Society
Chemistry Education Research and Practice | The leading research journal for chemistry education | Published by the Royal Society of Chemistry
Some of the places to publish research in science education

I was recently asked which was the best journal in which to seek publication of science education research. This was a fair question, given that I had been warning of the large number of low quality journals now diluting the academic literature.

I had been invited to give a seminar talk to the Physics Education and Scholarship Section in the Department of Physics at Durham University. I had been asked to talk on the theme of 'Publishing research in science education'.

The talk considered the usual processes involved in submitting a paper to a research journal and the particular responsibilities involved for authors, editors and reviewers. In the short time available I said a little about ethical issues, including difficulties that can arise when scholars are not fully aware of, or decide to ignore, the proper understanding of academic authorship.1 I also discussed some of the specific issues that can arise when those with research training in the natural sciences undertake educational research without any further preparation (for example, see: Why do natural scientists tend to make poor social scientists?), such as underestimating the challenge of undertaking valid experiments in educational contexts.

I had not intended to offer advice on specific journals for the very good reasons that

  • there are a lot of journals
  • my experience of them is very uneven
  • I have biases!
  • knowledge of journals can quickly become out of date when publishers change policies, or editorial teams change

However, it was pointed out that there does not seem to be anywhere where such advice is readily available, so I made some comments based on my own experience. I later reflected that some such guidance could be useful, especially to those new to research in the area.

I do, in the 'Research methodology' section of the site, offer some advice to the new researcher on 'Publishing research', that includes some general advice on things to consider when thinking about where to send your work:

Read about 'Selecting a research journal: Selecting an outlet for your research articles'

Although I name check some journals there, I did not think I should offer strong guidance for the reasons I give above. However, taking on board the comment about the lack of guidance readily available, I thought I would make some suggestions here, with the full acknowledgement that this is a personal perspective, and that the comments facility below will allow other views and potential correctives to my biases! If I have missed an important journal, or seem to have made a misjudgement, then please tell me and (more importantly) other readers who may be looking for guidance.

Publishing in English?

My focus here is on English language journals. There are many important journals that publish in other languages such as Spanish. However, English is often seen as the international language for reporting academic research, and most of the journals with the greatest international reach work in the English language.

These journals publish work from all around the world, which therefore includes research into contexts where the language of instruction is NOT English, and where data is collected, and often analysed, in the local language. In these cases, reporting research in English requires translating material (curriculum materials, questions posed to participants, quotations from learners etc.) into English. That is perfectly acceptable, but translation is a skilled and nuanced activity, and needs to be acknowledged and reported, and some assurance of the quality of translation offered (Taber, 2018).

Read about guidelines for good practice regarding translation in reporting research

Science research journal or science education journal?

Sometimes science research journals will publish work on science education. However, not all science journals will consider this, and even for those that do, this tends to be an occasional event.

With the advent of open-access, internet-accessible publishing, some academic publishers are offering journals with very wide scope (presumably because, in the digital age, it is considered easier to find research without it needing to be in a specialist journal). However, authors should be wary of journals that have titles implying a specialist scientific focus but which seem to accept material from a wide range of fields, as this is one common indicator of predatory journals – that is, journals which do not use robust peer review (despite what they may claim) and have low quality standards.

Read about predatory journals

There are some scientific journals with an interdisciplinary flavour which are not education journals per se, but are open to suitable submissions on educational topics. The one I am most familiar with (disclosure of interest: I am on the Editorial Board) is Foundations of Chemistry (published by Springer).



Science Education Journal or Education Journal?

Then, there is the question of whether to publish work in specialist science education journals or one of the many more general education journals. (There are too many to discuss them here.) General education journals will sometimes publish work from within science education, as long as they feel it is of high enough general interest to their readership. This may in part be a matter of presentation – if the paper is written so it is only understandable to subject specialists, and only makes recommendations for specialists in science education, it is unlikely to seem suitable for a more general journal.

On the other hand, just because research has been undertaken in a science teaching and learning context, this may not make it of particular interest to science educators if the research aims, conceptualisation, conclusions and recommendations concern general educational issues, and anything that may be specific to science teaching and learning is ignored in the research – that is, if a science classroom was chosen just as a matter of convenience, and the work could have been just as well undertaken in a different curriculum context (Taber, 2013).

Research Journal or Professional Journal?

Another general question is whether it is best to send one's work to an academic research journal (offering more kudos for the author(s) if published) or a journal widely read by practitioners (but usually considered less prestigious when a scholar's academic record is examined for appointment and promotion). These different types of output usually have different expectations about the tone and balance of articles:

Read about Research journals and practitioner journals

Some work is highly theoretical, or is focussed on moving forward a research field – and is unlikely to be seen as suitable for a teacher's journal. Other useful work may have developed and evaluated new educational resources, but without critically exploring any educational questions in any depth. Information about this project would likely be of great interest to teachers, but is unlikely to meet the criteria to be accepted for publication in a research journal.

But what about a genuine piece of research that would be of interest to other researchers in the field, but also leads to strong recommendations for policy and practice? Here you do not have to choose one or other option. Although you cannot publish the same article in different journals, a research report sent to an academic journal and an article for teachers would be sufficiently different, with different emphases and weightings. For example, a professional journal does not usually want a critical literature review and discussion of details of data analysis, or long lists of references. But it may value vignettes that teachers can directly relate to, as well as exemplification of how recommendations might be followed through – information that would not fit in the research report.

Ideally, the research report would be completed and published first, and the article for the professional audience would refer to (and cite) this, so that anyone who does want to know more about the theoretical background and technical details can follow up.

Some examples of periodicals aimed at teachers (and welcoming work written by classroom teachers) include the School Science Review (published by the Association for Science Education), Physics Education (published by the Institute of Physics) and the Royal Society of Chemistry's magazine Education in Chemistry. Globally, there are many publications of this kind, often with a national focus serving teachers working in a particular curriculum context by offering articles directly relevant to the specifics of the local education contexts.

The top science education research journals

Having established our work does fit in science education as a field, and would be considered academic research, we might consider sending it to one of these journals

  • International Journal of Science Education (IJSE)
  • Science Education (SE)
  • Journal of Research in Science Teaching (JRST)
  • Research in Science Education (RiSE)


To my mind these are the top general research journals in the field.

IJSE is the journal I have most worked with, having published quite a few papers in the journal, and reviewed a great many. I have been on the Editorial Board for about 20 years, so I may be biased here.2 IJSE started as the European Journal of Science Education and has long had an association with the European Science Education Research Association (ESERA – not to be confused with ASERA).

Strictly this journal is now known as IJSE Part A, as there is also a Part B which has a particular focus on 'Communication and Public Engagement' (see below). IJSE is published by Taylor and Francis / Routledge.

SE is published by Wiley.

JRST is also published by Wiley, and is associated with NARST.

RiSE is published by Springer, and is associated with the Australasian Science Education Research Association (ASERA – not to be confused with ESERA).

N.A.R.S.T. originally stood for the National Association for Research in Science Teaching, where the nation referred to was the USA. However, having re-branded itself as "a global organization for improving science teaching and learning through research" it is now simply known as NARST. In a similar way ESERA describes itself as "an European organisation focusing on research in science education with worldwide membership" and ASERA claims it "draws together researchers in science education from Australia, New Zealand and more broadly".


The top science education reviews journal

Another 'global' journal I hold in high esteem is Studies in Science Education (published by Taylor & Francis / Routledge).3

This journal, originally established at the University of Leeds and associated with the world famous Centre for Studies in Science Education,4 is the main reviews journal in science education. It publishes substantive, critical reviews of areas of science education, and some of the most influential articles in the field have been published here.

Studies in Science Education also has a tradition of publishing detailed scholarly book reviews.


In my view, getting your work published in any of these five journals is something to be proud of. I think people in many parts of the world tend to know IJSE best, but I believe that in the USA it is often considered to be less prestigious than JRST and SE. At one time RiSE seemed to have a somewhat parochial focus, and (my impression is) attracted less work from outside Australasia and its region – but that has changed now. 'Studies' seems to be better known in some contexts than others, but it is the only high status general science education journal that publishes full-length reviews (both systematic reviews and thematic perspectives), with many of its contributions exceeding the normal word-length limits of other top science education journals. This is the place to send an article based on that literature review chapter that thesis examiners praised for its originality and insight!



There are other well-established general journals of merit, for example Research in Science and Technological Education (published by Taylor & Francis / Routledge, and originally based at the University of Hull) and the International Journal of Science and Maths Education (published by Springer, and founded by the National Science and Technology Council, Taiwan). The International Council of Associations for Science Education publishes Science Education International.

There are also journals with particular foci within the field of science education.

More specialist titles

There are also a number of well-regarded international research journals in science education with particular specialisms or flavours.


Science & Education (published by Springer) is associated with the International History, Philosophy, and Science Teaching Group,5 which, as the name might suggest, has a focus on the nature of science, and "publishes research using historical, philosophical, and sociological approaches in order to improve teaching, learning, and curricula in science and mathematics".


The Journal of Science Teacher Education (published by Taylor & Francis / Routledge), as the name suggests, is concerned with the preparation and development of science teachers. The journal is associated with the USA-based Association for Science Teacher Education.


As suggested above, IJSE has a companion journal (also published by Taylor & Francis / Routledge), International Journal of Science Education, Part B – Communication and Public Engagement.


Cultural Studies of Science Education (published by Springer) has a particular focus on science education "as a cultural, cross-age, cross-class, and cross-disciplinary phenomenon".


The Journal of Science Education and Technology (published by Springer) has a focus on the intersection between science education and technology.


Disciplinary and Interdisciplinary Science Education Research has a particular focus on science taught within and across disciplines.6 Whereas most of the journals described here are now hybrid (which means articles will usually be behind a subscription/pay-wall, unless the author pays a publication fee), DISER is an open-access journal, with publication costs paid on behalf of authors by the sponsoring organisation: the Faculty of Education, Beijing Normal University.

This relatively new journal reflects the increasing awareness of the importance of cross-disciplinary, interdisciplinary and transdisciplinary research in science itself. This is also reflected in notions of whether (or to what extent) science education should be considered part of a broader STEM education, and there are now journals styled as STEM education journals.


Science as part of STEM?

Read about STEM in the curriculum


Research within teaching and learning disciplines

Whilst both the Institute of Physics and the American Institute of Physics publish physics education journals (Physics Education and The Physics Teacher, respectively) neither publishes full length research reports of the kind included in research journals. The American Physical Society does publish Physical Review Physics Education Research as part of its set of Physical Review Journals. This is an on-line journal that is Open Access, so authors have to pay a publication fee.


The Journal of Biological Education (published by Taylor and Francis/Routledge) is the education journal of the Royal Society of Biology.


The Journal of Chemical Education is a long-established journal published by the American Chemical Society. It is not purely a research journal, but it does have a section for educational research and has published many important articles in the field.7


Chemistry Education Research and Practice (published by the Royal Society of Chemistry, RSC) is purely a research journal, and can be considered the top international journal for research specifically in chemistry education. (Perhaps this is why there is a predatory journal deliberately named the Journal of Chemistry Education Research and Practice.)

As CERP is sponsored by the RSC (which as a charity looks to use income to support educational and other valuable work), all articles in CERP are accessible for free on-line, but there are no publication charges for authors.


Not an exhaustive list!

These are the journals I am most familiar with, which focus on science education (or a science discipline education), publish serious peer-reviewed research papers, and can be considered international journals.

I know there are other discipline-based journals (e.g., biochemistry education, geology education) and indeed I expect there are many worthwhile places to publish that have slipped my mind or about which I am ignorant. Many regional or national journals have high standards and publish much good work. However, when it comes to research papers (rather than articles aimed primarily at teachers) academics usually get more credit when they publish in higher status international journals. It is these outlets that can best attract highly qualified editors and reviewers, and so peer review feedback tends to be most helpful,8 and the general standard of published work tends to be of a decent quality – both in terms of technical aspects, and its significance and originality.

There is no reason why work published in English is more important than work published in other languages, but the wide convention of publishing research for an international audience in English means that work published in English language journals probably gets wider attention globally. I have published a small number of pieces in other languages, but am primarily limited by my own restricted competence to only one language. This reflects my personal failings more than the global state of science education publishing!

A personal take – other viewpoints are welcome

So, this is my personal (belated) response to the question about where one should seek to publish research in science education. I have tried to give a fair account, but it is no doubt biased by my own experiences (and recollections), and so inadvertently subject to distortions and omissions.

I welcome any comments (below) to expand upon, or seek to correct, my suggested list, which might indeed make this a more useful listing for readers who are new to publishing their work. If you have had good (or bad) experiences with science education journals included in, or omitted from, my list, please share…


Sources cited:

Notes

1 Academic authorship is understood differently to how the term 'author' is usually used: in most contexts, the author is the person who prepared (wrote, typed, dictated) a text. In academic research, the authors of the research paper are those who made a substantial direct intellectual contribution to the work being reported. That is, an author need not contribute to the writing-up phase (though all authors should approve the text) as long as they have made a proper contribution to the substance of the work. Most journals have clear expectations that all deserving authors, and only those people, should be named as authors.

Read about academic authorship


2 For many years the journal was edited by the late Prof. John Gilbert, whom I first met sometime in the 1984-5 academic year when I applied to join the University of Surrey/Roehampton Institute part-time teachers' programme in the Practice of Science Education, and he – as one of the course directors – interviewed me. I was later privileged to work with John on some projects – so this might be considered as a 'declaration of interest'.


3 Again, I must declare an interest. For some years I acted as the Book Reviews editor for the journal.


4 The centre was the base for the highly influential Children's Learning in Science Project, which undertook much research and publication in the field under the direction of the late Prof. Ros Driver.


5 Another declaration of interest: at the time of writing I am on the IHPST Advisory Board for the journal.


6 Declaration of interest: I am a member of DISER's Editorial Board.


7 I have recently expressed some surprise at one research article published in JChemEd where major problems seem to have been missed in peer review. This is perhaps simply an aberration, or may reflect the challenge of including peer-reviewed academic research in a hybrid publication that also publishes a range of other kinds of articles.


8 Peer-review evaluates the quality of submissions, in part to inform publication decisions, but also to provide feedback to authors on areas where they can improve a manuscript prior to publication.

Read about peer review


Download this post


The first annual International Survey of Gullible Research Centres and Institutes

When is a 'survey' not really a survey? Perhaps, when it is a marketing tool.


Keith S. Taber


A research survey seeks information about a population by collecting data from a sample.
Acaudio's 'survey' seems to seek information about whether particular respondents might be persuaded to buy their services.

Today I received an invitation to contribute to something entitled "the first annual International Survey of Research Centres and Institutes". Despite this impressive title, I decided not to do so.

This was not because I had doubts about whether it really was 'the first…' (has there never previously been an annual International Survey of Research Centres and Institutes?). Nor was it because I had been invited to represent 'The Science and Technology Education Research Group' which I used to lead – but not since retiring from my Faculty duties.

My main reason for not participating was because I suspected this was a scam. I imagined this might be marketing apparently masquerading as academic research. I include the provisos 'suspected' and 'apparently' as I was not quite sure whether this was actually a poor attempt to mislead participants or just a misjudged attempt at witty marketing. That is, I was not entirely sure if recipients of the invitation were supposed to think this was a serious academic survey.



There is a carpet company that claims that no one knows more about floors than … insert here any of a number of their individual employees. Their claims – taken together – are almost logically impossible, and certainly incredible. I am sure most people let this wash over them – but I actually find it disconcerting that I am not sure if the company is (i) having a logical joke I am supposed to enjoy ('obviously you are not meant to believe claims in adverts, so how about this…'), or (ii) simply lying to me, assuming that I will be too stupid to spot the logical incoherence.

Read 'Floored or flawed knowledge?: A domain with a low ceiling'

Why is this not serious academic research?

My first clue that this 'survey' was not a serious attempt at research was that the invitation was from an email address of 'playlist.manager@acaudio.com', rather than from an academic institute or a learned society. Of course, commercial organisations can do serious academic research, though usually when they are hired to do so on behalf of a named academically-focussed organisation. The invitation made no mention of any reputable academic sponsor.

I clicked on the link to the survey to check for the indicators one finds in quality research. Academic research is subject to ethical norms, such as seeking voluntary informed consent, and any invitation to engage in bona fide academic research will provide information to participants up front (either on the front page of the survey or via a link that can be accessed before starting to respond to any questions). One would expect to be informed, at a minimum:

  • who is carrying out the research (and who for, if it is commissioned) and for what purpose;
  • how data will be used – for example, usually it is expected that any information provided will be treated as confidential, securely stored, and only used in ways that protect the anonymity of participants.

This was missing. Commercial organisations sometimes see information you provide differently, as being a resource that they can potentially sell on. (Thus the recent legislation regulating what can or cannot be done with personal information that is collected by organisations.)

Hopefully, potential participants will be informed about the population being sampled and something of the methodology being applied. In an ideal world an International Survey of Research Centres and Institutes would identify and seek data from all Research Centres and Institutes, internationally. That would be an immense undertaking – and is clearly not viable. Consider:

  • How many 'research centres' are initiated, and how many close down or fade away, internationally, each year?
  • Do they all even have websites? (If not, how are they to be identified?)
  • If so, spread over how many languages?

Even attempting a meaningful annual survey of all such organisations would require a substantive, well-resourced research team working full-time on the task. Rather, a viable survey would collect data from a sample of all research centres and research institutes, internationally. So, some indication of how a sample has been formed, or how potential participants were identified, might be expected.

Read about sampling a population of interest

One of the major limitations of many surveys of large populations is that even if a decent sample size is achieved, such surveys are unlikely to reach a representative sample, or even provide any useful indicators of whether the sample might be representative. For example, information provided by 'a sample of 80 science teachers' tells us next to nothing about 'science teachers' in general if we have no idea how representative that sample is.
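As a rough, back-of-envelope illustration (the figures here are my own, not taken from any of the surveys discussed): under the textbook assumption of simple random sampling, the margin of error on a reported proportion at 95% confidence is approximately

\[ \text{MoE} \approx z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{80}} \approx 0.11 \]

That is, even in the best case a sample of 80 only estimates a population proportion to within about eleven percentage points either way – and a self-selected sample offers no warrant for even that much, because the formula presumes every member of the population was equally likely to respond.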

It can be a different matter when surveys are undertaken of small, well-defined, populations. A researcher looking to survey the students in one school, for example (perhaps for a consultation about a mooted change in school dress policy), is likely to be in a position to make sure all in the population have the opportunity to respond, and perhaps encourage a decent response rate. They may even be able to see if, for example, respondents reflect the wider population in some important ways (for example, if one got responses from 400/1000 students, one would usually be reasonably pleased, but less so if hardly any of the responses were in, say, the two youngest year groups).

In such a situation there is likely to be a definitive list of members of the population, and a viable mechanism to reach them all. In more general surveys, this is seldom the case. One particular type of exception is elections (which can be considered as akin to surveys). The electoral register potentially lists all those enfranchised to vote, and includes a postal address where each voter can be informed of a forthcoming poll. In this situation, there is a considerable administrative cost of maintaining the register – considered worth paying to support the democratic process – and a legal requirement to register: yet, even here, no one imagines the roll is ever complete and entirely up-to-date.

  • How many of the extant Research Centres and Research Institutes, internationally, had been invited to participate in this survey?
  • And did these invitations reflect the diversity of Research Centres and Institutes, internationally?
    • By geographical location?
    • By discipline?

No such information was provided.

The time-scale for an International Survey of Research Centres and Institutes

To be fair, the invitation email did suggest the 'researchers' would share outcomes with the participants:

"We will release the results over the next month".

But that time-scale actually seemed to undermine the possibility that this initiative was meant as a serious survey. Anyone who has ever undertaken any serious research knows: it takes time.

When planning the stages of a research project, you should keep in mind that everything will likely take longer than you expect…

even when you allow for that.

Not entirely frivolous advice given to research students

Often with surveys, the initial response is weak (filling in other people's questionnaires is seldom anyone's top priority), and it becomes necessary to undertake additional rounds of eliciting participation. It is good practice to promise to provide feedback; but to offer to do this within a month seems, well, foolhardy.

Except, of course, Acaudio are not a research organisation, and the purpose of the 'survey' was, I suggest, not academic research. As becomes clear from the questions asked, this is marketing 'research': a questionnaire to support Acaudio's own marketing.

What does this company do?

Acaudio offer a platform for allowing researchers to upload short audio summaries of their research. Researchers can do this for free. The platform is open-access, allowing anyone to listen. The library is collated with play-lists and search functions. The company provides researchers with data on access to their recordings.

This sounds useful, and indeed 'too good to be true' as there are no charges for the service. Clearly, of itself, that would be a lousy business model.

The website explains:

"We also collaborate with publishers and companies. While our services are licensed to these organizations, generating revenue, this approach is slightly different from our collaboration with you as researchers. However, it enables us to maintain the platform as fully open access for our valued users."

https://acaudio.com/faq

So, having established the website, and built up a library of recordings hosted for free (the 'loss leader', as they say1), the company is now generating income by entering into commercial arrangements with organisations. Another page on their website claims the company has 'signed' 1000 journals and 2000 research centers [sic]. So, alongside the free service, the company is preparing content on behalf of clients to publicise, in effect advertise, their research for them. Nothing terrible there, although one would hope that the research that has the most impact gets that impact on merit, not because some journals and research centres can pay to bring more attention to their work. This business seems similar to those magazines that offer to feature your research in a special glossy article – for a price.

Read 'Research features…but only if you can afford it'

One would like to think that publicly funded researchers, at least, spend the public's money on the actual research, not on playing the impact indicators game by commissioning glossy articles in magazines which would not be any serious scholar's preferred source of information on research. Sadly, since the advent of the Research Assessment Exercise (and its evolution into the 'Research Excellence Framework') vast amounts of useful resource have been spent on both rating research and playing the games needed to get the best ratings (and so the consequent research income). As is usually the case with anything of this kind (one could even include formal school examinations!), even if the original notion is well-intentioned,

  • the measurement process comes to distort what it is measuring;
  • those seen as competing spend increasing resources in trying to outdo each other in terms of the specifics of the assessment indicators/criteria.

So, as research impact is now considered measurable, and as it is (supposedly) measured, and contributes to university income, there is a temptation to spend money on things that might increase impact. It becomes less important whether a study has the potential to increase human health and happiness; and more important to get it the kind of public/'end user' attention that might ultimately lead to evidence of 'impact' – as this will increase income, and allow the research to continue (and, who knows, perhaps eventually even increase human health and happiness).

What do Acaudio want to know?

Given that background, the content of the survey questionnaire makes perfect sense. After collecting some information on your research centre, there are various questions such as

  • How satisfied are you with the level of awareness people have of your centre / institute?
  • How important is it that the general public are aware of the work your centre / institute does?

I suspect most heads of research centres think it is important people know of their work, and are not entirely satisfied that enough people do. (I suspect academic researchers generally tend to think that their own research is actually (i) more important than most other people realise, and (ii) deserving of more attention than it gets. That's human nature, surely?) Any self-effacing and modest scholars are going to have to learn to sell themselves better – or, if not, they are perhaps unlikely to be made centre/institute heads.

There are questions about how much time is spent promoting the research centre, and whether this is enough (clearly, one would always want to do more, surely?), and the challenges of doing this, and who is responsible (I suspect most heads of centres feel some such responsibility, without considering it is how they most want to spend their limited time for research and scholarship).

Perhaps the core questions are:

  • Do you agree it is important to have a dedicated person to take care of promotional activities?
  • How much would you consider to be a reasonable amount to spend on promotional activities?

These questions will presumably help Acaudio decide whether you can easily be persuaded to sign up for their help, and what kind of budget you might have for this. (The responses for the latter include an option for spending more than $5000 each year on promotional activities!)

I am guessing that, even at $5000+ p.a., they would not actually provide a person dedicated to 'tak[ing] care of promotional activities' for you, but rather a person dedicated to adding your promotional activities to their existing portfolio of assigned clients!

So, this is a marketing questionnaire.

Is this dishonest?

It seems misleading to call a marketing questionnaire 'the first annual International Survey of Research Centres and Institutes' unless Acaudio are making a serious attempt to undertake a representative survey of Research Centres and Institutes, internationally, and they do intend to publish a full analysis of the findings. "We will release the results over the next month" sounds like a promise to publish, so I will look out with interest for an announcement that the results have indeed been made available.

Lies, delusional lies, and ill-judged attempts at humour

Of course, lying is not simply telling untruths. A person who claims to be Napoleon or Joan of Arc is not lying if that person actually believes that is who they are. Someone who claims they are the best person to run your country is not necessarily lying simply because the claim is false. If the Acaudio people genuinely think they are really doing an International Survey of Research Centres and Institutes then their invitation is not dishonest even if it might betray any claim to know much about academic research.


"I'm [an actor playing] Spartacus";"I'm [an actor playing another character who is not Spartacus, but is pretending to be] Spartacus"; "I'm [another actor playing another character who is also not Spartacus, but is also pretending to be] Spartacus"… [Still from Universal Pictures Home Entertainment movie 'Spartacus']


Nor is it lying, when there is no intent to deceive. Something said sarcastically or as a joke, or in the context of a theatrical performance, is not a lie as long as it is expected that the audience share the conceit and do not confuse it for an authentic knowledge claim. Kirk Douglas, Tony Curtis, and their fellow actors playing rebellious Roman slaves, all knew they were not Spartacus, and that anyone in a cinema watching their claims to be the said Spartacus would recognise these were actors playing parts in a film – and that indeed in the particular context of a whole group of people all claiming to be Spartacus, the aim even in the fiction was actually NOT to identify Spartacus, but to confuse the whole issue (even if being crucified as someone who was only possibly Spartacus might be seen as a Pyrrhic victory 2).

So, given that the claim to be undertaking the first annual International Survey of Research Centres and Institutes was surely, and fairly obviously, an attempt to identify research centres that (a) might be persuaded to purchase Acaudio's services and (b) had budget to pay for those services, I am not really sure this was an attempt to deceive. Perhaps it was a kind of joke, intended to pull in participants, rather than a serious attempt to fool them.

That said, any organisation hoping for credibility among the academic community surely needs to be careful about its reputation. Sending out scam emails that claim to be seeking participants for a research survey that is really a marketing questionnaire seems pretty dubious practice, even if there was no serious attempt to follow through by disguising the questionnaire as a serious piece of research. You might initially approach the questionnaire thinking it was genuine research, but as you worked through it, it SHOULD have dawned on you that this information was being collected because (i) it is of commercial value to Acaudio, and not (ii) to answer any theoretically motivated research questions.

  • So, is this dishonest? Well, it is not what it claims to be.
  • Does this intend to deceive? If it did, then it was not well designed to hide its true purpose.
  • Is it malpractice? Well, there are rules in the U.K. about marketing emails:

"You're only allowed to send marketing emails to individual customers if they've given you permission.

Emails or text messages must clearly indicate:

  • who you are
  • that you're selling something

Every marketing email you send must give the person the ability to opt out of (or 'unsubscribe from') further emails."

https://www.gov.uk/marketing-advertising-law/direct-marketing

The email from Hussain Ayed, Founder, Acaudio, told me who he, and his organisation, are, but

  • did not clearly suggest he was selling something: he was inviting me to contribute to a research survey (illegal?)
  • Nor was there any option to opt out of further messages (illegal?)
  • And I am not aware of having invited approaches from this company – which might be why it was masquerading as a request to contribute to research (illegal?)

I checked my email system to see if I'd had any previous communication with this company, and found in my junk folder a previous approach, "invit[ing Keith, again] to talk about some of the research being done at The Science and Technology Education Research Group on Acaudio…". It seems my email software can recognise cold calling – as long as it does not claim to be an invitation to respond to a research study.



The earlier email claimed it was advertising the free service…but then invited me to arrange a time to talk to them for 'roughly' 20 minutes. That seems odd, both because the website seems to provide all the information needed; and then why would they commit 20 minutes of their representative's time to talk about a free service? Presumably, they wanted to sell me their premium service. The email footer also gave a business address in E9, London – so the company should know about the UK laws about direct marketing that Acaudio seems to be flouting.

Perhaps not enough people responded to give them 20 minutes of their time, so the new approach skips all that and asks instead for people to "give us 2-3 minutes of your time to fill in the survey [sic 3]".


Original image by Mohamed Hassan from Pixabay


Would you buy a second hand account of research from this man?

In summary, if someone is looking to buy in this kind of support in publicising their work, and has the budget(!), and feels it is acceptable to spend research funds on such services, then perhaps they might fill in the questionnaire and await the response. But I am not sure I would want to get involved with companies which use marketing scams in this way. After all, if they cannot even start a conversation by staying within the law, and being honest about their intentions, then that does not bode well for being able to trust them going forward into a commercial arrangement.


Update (15th October, 2023): Were the outcomes of the first annual International Survey of Research Centres and Institutes published? See 'The sugger strikes back! An update on the 'first annual International Survey of Research Centres and Institutes'


Notes

1 When a shop offers a product at a much discounted price, below the price needed to 'break even', so as to entice people into the shop where they will hopefully buy other goods (at a decent mark-up for the seller), the goods sold at a loss are the 'loss leaders'.

Goods may also be sold at a loss when they are selling very slowly, to make space on the shop floor and in the storeroom for new stock that it is hoped will generate profit. Date-sensitive goods may be sold at a loss because they will soon not be saleable at all (such as perishables) or only at even greater discounts (such as models about to be replaced by manufacturers' updated versions – e.g., iPhones). But loss leader goods are priced low to get people to view other products (so they might be displayed prominently in the window, but only found deep in the shop).


2 In their wars against the armies of King Pyrrhus of Epirus, the Romans lost battles, but in doing so inflicted such heavy and unsustainable losses on the nominally victorious invading army that Pyrrhus was forced to abandon his campaign.

At the end of the slave revolt (a historical event on which the film 'Spartacus' is based) the Romans are supposed to have decided to execute the rebel leader, the escaped gladiator Spartacus, and return the other rebels to slavery. Supposedly, when the Roman official tried to identify Spartacus, each of the recaptured slaves in turn claimed he was Spartacus, thus thwarting identification. So, the ever pragmatic Romans crucified them all.


3 The set of questions is actually a questionnaire which is used to collect data for the survey. Survey (a type of methodology) does not necessarily imply using a questionnaire (a data collection technique) as a survey could be carried out using an observation schedule (i.e., a different data collection technique), for example.

Read about surveys

Read about questionnaires


Earning a higher doctorate without doing any research?

Is it possible that a publisher might be using fictitious academics to attract submissions to its journals?


Keith S. Taber


An obvious discrepancy is that the University of Ottawa is not Ottawa University, USA. One is in Ontario, in Canada – the other is in Kansas, in the United States. Someone who has attended one of these universities would be unlikely to be confused about which one they studied at, and graduated from.

I received an email from a journal managing editor claiming to be a highly qualified scholar (two doctorates) – for whom I can find absolutely no evidence on the web of her having ever published anything, or having any association with any university, research group, or learned society. Suspicious?


Wanted!

Information on the academic research of this woman

(Additional image elements by No-longer-here and OpenClipart-Vectors from Pixabay)


I received one of those dodgy emails from a publisher that are now part of the normal noise one has to navigate through in the Academy. The email was signed by a Dr Nicoleta Auffahrt, as Managing Editor of the 'Department of Humanities and Social Science' 'at Global Journals'. So far, nothing too suspicious.

Dr Auffahrt wrote that she had read my [actually co-authored] research paper "Secondary Students' Values and Perceptions of Science-Related Careers Responses to Vignette-Based Scenarios". She told me:

  • She had read it and felt it was worthy of admiration
  • She had shared 'the finding' with her [unspecified] colleagues
  • She reported that other [unspecified] scholars of our [sic] research community had also commended 'them' [?]
  • She suggested that this paper demonstrated my potential to influence and inspire fellow researchers and scholars.

Two classes of academics

Now, perhaps there was a time when I might have taken some of this at face value, being naïve enough to believe that most people are basically honest, and that at least in the world of scholarship people value truth and honesty and would not casually lie.

One might expect such compliments to often hit home with academics – after all, isn't academia made up of two classes of scholars:

  • those who suffer imposter syndrome and are waiting to be found out as not belonging;
  • those who know their work is important and ground-breaking, deserving of being more widely known, and a sufficient cause to bring them attention, prestige, admiration, acolytes, and prizes?

The latter group, at least, would not find anything odd in receiving such unsolicited praise.

However, I've had too many emails of this kind that praise my work but which are clearly not truthful.

The reference to 'our research community' was intriguing, as the letter (appended below) was structured so as to

  • (i) first praise me as though Dr Auffahrt was so impressed with my work that she needed to tell me; and then,
  • (ii) by the way, incidentally, as she was writing – she thought she would mention, "also" her role working for a publisher that led her to invite me to submit some work.

So, was it feasible that Dr Auffahrt did consider us part of the same research community? When I checked her email signature I saw she signed herself as Dr. Nicoleta Auffahrt "D.Litt in Teaching Education".



Now I was intrigued. Clearly 'teaching' and 'education' suggest that at least 'Dr' Nicoleta has a background in my general field of teaching and learning which makes a nice change from being invited to contribute on topics such as nanotechnology and various medical specialisms. Yet, this also raised some questions: what exactly is meant by 'teaching education' (cf. e.g., science education) as an academic area – was her work in teaching the subject of education or…?

Moreover, I was surprised that someone with a higher doctorate was acting as a 'managing editor' for a publisher. A D.Litt. was only likely to be awarded to a highly productive and influential scholar, and such a person might well take on editorial roles (as an editor, an associate editor, an editor-in-chief), but probably not as a managing editor.

A managing editor is employed by a publisher to oversee the administration and business side of a journal, unlike an editor, who would normally be doing the intellectual work of evaluating the quality of submissions and directing the peer-review process (work which would often be seen as taking a leadership role in a research field) – and then usually only as a subsidiary post undertaken alongside an academic appointment. The prestige of a journal is often in part seen to be reflected in the university affiliations of its editors and associate editors.

There is, of course, no reason why someone who has achieved eminence in their academic field, recognised ultimately by being awarded a higher doctorate such as a D.Litt., might not decide to then make a career change and move into publishing; and, similarly, there is no reason why a publisher should not employ such a person if they were available – but it seemed an unlikely scenario. Unlikely enough for me to dig a little.

So, I did a web-search for Dr Nicoleta Auffahrt. I found her listed on one of the publisher's web-pages as part of an editorial board for social science. Her listing was:

"Ph.D. University of Pennsylvania, Master of Arts in Ottawa University, USA"

So, a Ph.D., but no mention of the much rarer and more prestigious D.Litt. degree. Of course, the web-page may be out of date, whereas perhaps Nicoleta had updated her email signature, so that proves nothing.

What was very odd though, was the limited number of web-pages that a search turned up.


Qualifications: M.A., Ph.D., D.Litt

Search term "Dr Nicoleta Auffahrt" / "Dr. Nicoleta Auffahrt": 1 hit on Google (https://globaljournals.org)

Search terms Dr + "Nicoleta Auffahrt": 4 hits on Google (https://www.facebook.com; https://globaljournals.org; https://globaljournals.us; https://beta.globaljournal)

How can a successful scholar who has two doctorates and is (or, perhaps, was until recently) working in a top US university be virtually invisible on the web?

Apart from three pages for the publisher, Google only provided one other 'hit' – a 'Facebook' page. Dropping the reference to 'Dr' still only found "Nicoleta Auffahrt" on Facebook and Global Journals webpages. That seemed very strange.

I also tried using Google Scholar. Google Scholar is a specialist search engine used by academics to find research reported in journal articles, academic books, conference proceedings, on websites, and so forth. Google Scholar did not suggest a single publication (and Google Scholar is pretty liberal in what it counts as a 'publication'!) that had been authored, co-authored or edited by "Nicoleta Auffahrt".

Typically, searching for an academic brings up myriad references to their publications, conference talks, involvement in research groups, links with university departments, and so forth. A search for an experienced and successful academic would be expected to turn up at least hundreds – indeed, likely thousands – of hits.

This is unavoidable if you work in academia – even if, for some reason, a scholar chooses not to have a specific Google Scholar listing (this does not stop your work being included in the database and returned in a search – it just means you do not get a personal profile page); not to have an Academia listing; not to post on ResearchGate; not to be on LinkedIn (which is a common place for those working in publishing to seek to make contacts); and not to upload their dissertations/theses to university repositories…they still cannot prevent their books and papers and conference talks being referred to here and there.

Academic prestige is, after all, largely based on publications, and publications are by definition public documents. Assessment for a higher doctorate such as a D.Litt. is usually largely in terms of a scholar's published work being judged to be highly influential in their discipline or field (something that is usually only possible to judge some years after publications first appear). Moreover, one of the principal ways in which any academic is evaluated is in terms of the influence of their publications, as judged by citations – but Nicoleta Auffahrt's work does not seem to be cited anywhere. At least, Google Scholar had not found any. (No publications, and no citations. 1)

So, here the only evidence I had of a person called Nicoleta Auffahrt really existing that was independent of the publisher who had contacted me was…Facebook, and that offered limited pickings.

I responded to the email (text appended below), asking Dr. Nicoleta Auffahrt about her area of work and where she had been awarded the prestigious D.Litt.

The next morning, I found I had a reply – but from someone else at the publishers. I say 'someone' else, as the email account was linked to the name Dr. Stacey J. Newman but the email was signed Dr. Nellie K. Neblett. Stacey, or was it Nellie, had ignored my questions to Nicoleta (but sent me an interesting brochure which revealed how the publisher calculated its own impact factor, but using data from the very catholic listings in Google Scholar – so vastly inflating the value compared with properly audited impact factors).


The response to my email reply to Nicoleta Auffahrt
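As an aside on that inflated 'impact factor': the conventional calculation divides the citations received in a given year to a journal's items from the previous two years by the number of citable items the journal published in those two years. The minimal sketch below (in Python, with purely hypothetical counts) illustrates why feeding the numerator from Google Scholar's very inclusive citation listings, rather than from an audited citation index, inflates the resulting figure:

# A toy sketch of an 'impact factor' style calculation.
# All the numbers below are hypothetical, for illustration only.

def impact_factor(citations_to_recent_items, citable_items):
    # Citations received this year to items from the previous two years,
    # divided by the number of citable items published in those two years.
    return citations_to_recent_items / citable_items

citable_items = 120        # hypothetical: articles published over two years

audited_citations = 95     # counted only from a curated set of journals
scholar_citations = 480    # Google Scholar also counts theses, preprints,
                           # slides, blogs... so the tally is far higher

print(f"audited-style figure: {impact_factor(audited_citations, citable_items):.2f}")
print(f"Scholar-based figure: {impact_factor(scholar_citations, citable_items):.2f}")

The arithmetic is trivial; the point is that the two figures count different things, so they cannot be meaningfully compared.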

Then a couple of days later, I had another email (appended below) from 'Dr' Auffahrt "following-up" on her earlier email, but written as if I had not replied to her – and repeating the information that she was going to be in Sydney 'next week' (it should have been 'this week' by then, if her earlier email had been correct) and again wishing my (non-existent) Christmas candles would be glorious. Given that she had asked me to confirm my affiliation with the University of Cambridge in Cambridge Uk, United Kingdom [sic, we in the U.K. tend to capitalise both letters, something one might expect an editor to appreciate – but perhaps she thought 'Cambridge Uk' was a place in the United Kingdom?], and I had done so, I was not entirely sure why she thought it useful for me to know she would be in Sydney, unless this was just 'small talk'.

I replied pointing out that,

"If you have checked your emails and seen my reply, you will have found I was asking about your research, as I wondered how it might link with mine. You have me at a disadvantage(!) as you tell me you have read some of my work, but I've not had a chance to read yours – perhaps you could direct me to some of it?"

My reply to 'Dr' Auffahrt's second email.

So far, no response to that.

Nicoleta Auffahrt's Facebook presence

The Facebook page had not been updated since Christmas day 2021 (when a video from the University of Pennsylvania about the student-run Medical Emergency Response Team was re-posted). According to this page, 'Dr' Auffahrt

  • Works at University of Pennsylvania
  • Worked at University of Ottawa
  • Studied at University of Ottawa
  • Studied at University of Pennsylvania
  • Went to Emma Hart Willard School
  • Lives in, and is from, Ottawa, Ontario

(I was unable to verify that there is an Emma Hart Willard School in Canada, and it is unlikely any current school would lack a website that could be picked up in a Google search; but, of course, it may have closed down or changed its name since Nicoleta studied there.)

Her Facebook page 'cover' picture (see below) is an image of 'Canada's University'.


A photograph of the Université d'Ottawa (Canada) – and the cover picture on 'Dr' Nicoleta Auffahrt's Facebook page.


The wrong Ottawa?

An obvious discrepancy is that the University of Ottawa is not Ottawa University, USA (where the publisher's site claimed 'Dr' Auffahrt was awarded her M.A. degree). One is in Ontario, in Canada – the other is in Kansas, in the United States. Someone who has attended one of these universities would be unlikely to be confused about which one they studied at, and graduated from.

'Dr' Nicoleta Auffahrt's Facebook cover picture was of the Canadian version (if from before some trees had been cut down – possibly during the removal of 50 trees in development work in 2015).

Friends and family?

Some people use a Facebook page extensively to connect with friends and family. Not everyone does. Some people start a Facebook page and either abandon it, or seldom update it. So, limited information on someone's Facebook page is not of itself evidence of any wrongdoing.

Nicoleta's account was linked to two 'friends' – the University of Pennsylvania, and a Canadian ice hockey player, Brendan Jacome. (His Facebook page was even less informative than Nicoleta's – but unlike her, he has quite a web presence: Google returned over 2000 hits for "Brendan Jacome".)

Nicoleta's Facebook activity was limited to

  • posting a picture of 'her baby' (see below) as her profile picture, updating her cover photo, and posting a message that "Real education is only obtained through self-education" – all on the same day just before Christmas 2016;
  • posting that she had "Started New Job at Global Journals Incorporated" in 2019;
  • and updating her profile picture and reposting the University of Pennsylvania video on Christmas day 2021.

Nicoleta's baby?

It looked like the only real clues on the Facebook page were the photographs of Nicoleta and the woman she described as her 'baby' – so, perhaps her daughter?

I tried to find any other photos that matched the image supposed to be of Nicoleta Auffahrt. I failed, so that lead did not help.

However, I soon found an image matching the picture of the other woman.




The two images above are taken from Nicoleta Auffahrt's Facebook page and a public profile for one Alessandra Manganelli when she was a Ph.D. student in Brussels (this page has no new content beyond a conference attended in the UK in November 2017). A smaller version of the same photograph appears on another page at the Vrije Universiteit Brussel site.



Unlike 'Dr' Auffahrt, a quick Google for "Alessandra Manganelli" gives over 2500 hits. That is much more in line with what one might expect for an academic.

According to the web, Dr Manganelli, having completed her doctorate at the Universities of KULeuven and Vrije Universiteit Brussel, moved to a post-doctoral position in Hamburg. It seems she undertook her doctoral studies in Engineering Science with input in Architecture from one institution (KULeuven) and in Sciences from the other (VUB). She works on areas such as Urban Governance, Social Innovation, Urban Agriculture, Local Food Policies, Urban Environmental and Climate Governance. Google Scholar lists a range of publications she has written or co-authored. That is, unlike in the case of 'Dr' Auffahrt, there is a lot of publicly available information about Dr Manganelli's research activities to support her claim to be a genuine scholar.

A Canadian connection?

Although Dr Manganelli seems to have done most of her work in Europe, one might imagine that, if she was indeed Nicoleta Auffahrt's baby, then she was perhaps born and brought up in Canada, before moving to Europe to study? However, not so: it appears Dr Manganelli is from Siena in Italy.

Intriguingly, however, one of her publications is a book for a major academic publisher: "The Hybrid Governance of Urban Food Movements. Learning from Toronto and Brussels". She also co-authored a paper in the journal 'Critical perspectives on food guidance' on "… FoodShare Toronto´s approach to critical food guidance…". So, there is a Canadian connection.

Dr Manganelli has her own Facebook page. (Dr Manganelli has 840 friends listed on Facebook, but Nicoleta Auffahrt is not one of them.) Her page suggests she did work at Toronto Metropolitan University from January/February to June 2017 (so, at the time when Nicoleta Auffahrt was supposedly in Ottawa before moving to Pennsylvania) – and Toronto is 'only' about 350 km from Ottawa. (Dr Manganelli's Facebook site tells visitors that she visited Toronto again in November 2022.) But Nicoleta Auffahrt seems to have posted Alessandra Manganelli's picture on her Facebook page just before Alessandra Manganelli arrived in Canada.

One would presumably have to have a strong connection with another person to use their photograph as your social media profile picture for five years (Nicoleta Auffahrt used Alessandra Manganelli's image as her profile picture from 22nd December 2016 till 25th December 2021). But, perhaps I was being too literal in my reading of the term 'my baby'.

Perhaps

  • Nicoleta and Alessandra had met somewhere (a conference, a holiday, on line?) and formed a close friendship which may even have influenced Dr Manganelli's decision to spend some time in Canada (fairly) near Nicoleta Auffahrt; then
  • excited with anticipation at Alessandra's imminent arrival in the country, Nicoleta had posted a picture of her friend (her 'baby') as her new profile picture.

This seems a little forced to me, but it is not completely impossible. (I had no substantive interest in 'Dr' Auffahrt's personal life {nor Dr Manganelli's} – I just wanted to find out if there was evidence she was a real person who had genuinely earned those academic qualifications.)

Given the amount of information on the web, I am fairly confident Dr Manganelli is a real person.

(I emailed her to tell her I had found her picture on Nicoleta Auffahrt's Facebook page and asked if she had an email contact for Nicoleta Auffahrt. No reply (as yet) but Dr Manganelli is under no obligation to reply to emails from strangers asking her about her friends.)

I am less sure about 'Dr' Nicoleta Auffahrt.

I also emailed the registry at the University of Pennsylvania to say I had been contacted by someone claiming to hold a Ph.D. from the University, that I had suspicions about this claim, and to ask if there was a public listing of Ph.D. holders that could be checked. So far, no response (beyond an automatic reply with a case number pointing out that a response "may take up to 3-5 business days", sent over two weeks ago). Perhaps the University of Pennsylvania does not concern itself with people who are possibly falsely claiming to hold its doctorates.*

Conclusion?

Perhaps Nicoleta Auffahrt is a real person who does hold the degrees she claims, including a higher doctorate, despite having no scholarly trace on the web (though this seems incredible to the point of being virtually impossible), and does work for 'Global Journals'; and perhaps she did write the email telling me she was visiting Sydney (why tell me that?) and wishing me glorious Christmas candles (why say that weeks after Christmas?)…and then also write the second email ignoring my reply and repeating the information about Sydney and candles? I guess this is not impossible, just extremely unlikely. And if this is the case, and if she is so keen to 'develop an academic relationship' with me, then why does she ignore my replies and my request to learn more about her work?


Alternatively, perhaps Nicoleta Auffahrt is a real person with a genuine, if seldom updated, Facebook page, and a close relationship of some kind (which is genuinely none of my business) with Dr Alessandra Manganelli, but her identity has been 'borrowed' by Global Journals. So, perhaps, there is wrongdoing, but Nicoleta Auffahrt is totally innocent of this.

I suspect this sometimes happens – it would explain why the long-retired philosophy professor, Kuang-Ming Wu, Ph.D., supposed editor of a philosophy journal, thought I was qualified to review a paper on…well, I read the abstract and was still not sure what it was about, but it clearly was not anything related to science education.


It seems more likely to me that the Facebook page is a sham set up to give some kind of minimal web presence to 'Dr' Auffahrt (a fictitious Managing Editor at Global Journals), and that there is no Nicoleta Auffahrt (and that Dr Manganelli's image was simply arbitrarily sourced from the web somewhere without her knowledge).



Of course, I may be wrong, but there is certainly something dodgy about communications from this publisher, as the supposed managing editor seems to share her email account with Dr. Stacey J. Newman / Dr. Nellie K. Neblett – and checking back through old email I found another invitation (appended below) from the same email address supposedly from a Dr. Gisela Steins (there is a real academic with this name who is a psychology professor in Germany and is listed on the editorial board of the Global Journal of Human-Social Science).

Prof. Steins thought my paper "Knowledge, beliefs and pedagogy: how the nature of science should inform the aims of science education (and not just when teaching evolution)" was "remarkable and significant" and could be "vital for fellow researchers and scientists". That was very nice of her – at least, if she did actually write the email!


Invitation from a highly qualified scholar?
My reply to Nicoleta

Dear Dr. Nicoleta Auffahrt

Thank you for your kind message.

It was rewarding to learn that you considered our publication "Secondary Students' Values and Perceptions of Science-Related Careers Responses to Vignette-Based Scenarios" to be worthy of admiration, and that you have shared our work with your colleagues.

Congratulations on your role as Managing Editor, Department of Humanities and Social Science at Global Journals. This sounds a prestigious and challenging position. I hope you enjoy Sydney – I've not been there myself. I was a little confused by your remark about candles, as I had always assumed most Australians celebrated Christmas at the same time as in Western Europe – I am afraid Christmas already seems a memory here.

I wonder what you found of particular interest in the paper – perhaps you would be prepared to share what it is you found especially of value in this work?

In answer to your question, I am now retired from my teaching role. I maintain an affiliation with my Faculty as an Emeritus Officer of the University, and intend to follow my own scholarly interests for as long as I am able.

Perhaps this links to your own research? I hope you would be kind enough, in return, to answer a question for me. I was intrigued to see that you had a higher doctorate, a D. Litt. in Teaching Education, so clearly your background is relevant to my work. I was wondering where you were awarded that? I see from the journal publisher's web-pages that you were awarded your Ph.D. from the University of Pennsylvania and your Master of Arts from Ottawa University, USA, but it does not mention your D.Litt. Is 'Teaching Education' meant to be an abbreviation for teaching and education, or was your degree specifically related to teacher education? In my national context a D. Litt. would usually only be awarded after a highly positive evaluation of a portfolio of post-doctoral publications, but I believe in the U.S. some universities offer this as an outcome of a thesis-based programme. I would be interested to know more about your area of work – in particular the body of work for which the D.Litt. was awarded, and how it links to my own scholarship and research.

Best wishes

Keith


What is this obsession with Christmas candles?


Praise (supposedly) from a psychology professor who found time to read my 'remarkable' work.

Update: Dr. Nicoleta Auffahrt succeeded by her doppelgänger?



My thanks to Dr. Murat Siviloglu for forwarding to me this extract (above) from an invitation he received from the current Managing Editor of the Global Journal of Human-Social Science. It seems perhaps "Dr Nicoleta Auffahrt" has moved on from her role, and now invitations are being sent out by "Dr Carolyn C. Mitchell". According to the invitation, Mitchell, like Auffahrt, holds the higher doctorate of a D.Litt., again in the odd subject of 'Teaching Education'.

The only references to a "Dr [or Dr.] Carolyn C. Mitchell" that showed up in a web-search were on the journal publisher's sites. I could not find anyone called Carolyn Mitchell who seemed to have a D.Litt.; so, like her predecessor, Mitchell seems to have achieved high academic status without any visible trace of research and scholarship. 1

Mitchell is also a 'dead ringer' for Auffahrt, as their profile pictures seem, well, identical.



And the similarities do not stop there. According to the Journal website, "Dr Carolyn C. Mitchell" also holds the degrees of

"Ph.D. University of Pennsylvania, Master of Arts in Ottawa University, USA".

It seems that when appointing senior editors, Global Journals certainly 'go for a type' as they say.


Update (21st February, 2024)

On the 13th of February 2024, I received an email ("Immediate Cease and Desist Demand – Defamatory and Harmful Content") from the email address <legals@globaljournals.org> from someone claiming to be the Chief Legal Officer for Global Journals Incorporated, and asking me to remove this page (or face immediate legal action). The email acknowledged that the company engages in "the use of alternate identities by our editors and reviewers to engage potential authors" (something the email suggested they do "for privacy and safety").


Notes

1 It may seem obvious that if someone has not published any work, then no one can be citing them. That is fair enough. However, Google Scholar will find citations in work that is accessible on the web to work that is not itself found on the web – for example, references made to books that were published many years ago and have never been digitised, or to conference papers that were distributed at talks in hard copy, but have never been included in web repositories.


* Update. On 17th March I received a reply, from a Student Service Center Counselor, to my enquiry from eight weeks earlier:

Sorry for the delay as the registrar's office is months behind on email requests. We have been tasked with assisting to clear their portal. Below is what we typically send 3rd party requests for information:

Thank you for contacting Student Registration and Financial Services (SRFS). We have received your education verification request for (INSERT STUDENT NAME) from the University of Pennsylvania. Third-Party education verifications are required to go through the National Student Clearinghouse (NSC). Please use this link: https://secure.studentclearinghouse.org/vs/Index to place your request. Penn's school code is…

email from University of Pennsylvania

When I investigated the National Student Clearinghouse website, I found I needed to first register as either a representative of a 'company' or as a student seeking to verify my own record.

Screenshot of part of a webpage of the National Student Clearinghouse

"The mission of the National Student Clearinghouse is to serve the education and workforce communities and all learners with access to trusted data, related services, and insights."

https://www.studentclearinghouse.org/about/

It seems there is no facility for someone approached by a publisher to check the veracity of an editor's claimed qualifications.


Not actually a government event

"It's just the name of the shop, love"


Keith S. Taber



An invitation

This week I received an invitation to chair an event (well, as most weeks, I received several of those, but this one seemed to be actually on a topic I knew a little about…).

Dear Keith,

"It is my pleasure to invite you to chair at Government Events' upcoming event The Delivering Excellence in Teaching STEM in Schools Conference 2023, taking place online on 29th of March 2023.

Chairing would involve giving a short opening and closing address, chairing the Q&A and panel discussions, and providing insights throughout the day.

Invited Speakers Include:

  • Kiera Newmark, Deputy Director for STEM, Digital and EdTech, Department for Education
  • Maria Rossini, Head of Education, British Science Association
  • Sam Parrett, Chief Executive, London South East Colleges

I feel you would add great value and insight to the day and I would be delighted to confirm your involvement in this event! …"

(Well, I claim to know a bit about teaching science; not so much about teaching technology or mathematics, or engineering {which I was not aware was really part of the National Curriculum}.)

Read about STEM education

So, at face value this would be a government-sponsored event, including a senior representative from the ministry of education – so perhaps another chance for me to lobby to have the embarrassing errors in the English National Curriculum for science corrected – as well as a leading executive from the 'British Ass'. 1

My initial reaction was mixed. This was clearly an important topic, and one where I probably was qualified to act as chair and might be able to make a useful contribution. And it was on-line, so I would not have to travel. Then again, I retired from teaching because I suffer from regular bouts of fatigue, and find I have to limit my high intensity inputs as I tire very easily and quickly these days. Chairing a session might not completely drain me, but a whole conference?

Due diligence

Finding myself tempted, I felt the need to do some due diligence. Was this really what it seemed? What would be involved?

The invitation seemed genuine enough, even if it included one of those dodgy legalese footers sometimes used by scam artists to put people off sharing dubious messages. (The 'you must not tell anyone' trope reminds me of what fictional blackmailers say about not contacting the police.)


A rather silly legal disclaimer.
It seems from the wording, presumably carefully chosen by the legal department, that this disclaimer only applies to "email (which included any attachment)" – whereas mine did not.

This one suggested that if I had received the message in error I should

  • permanently delete it from my computer without making a copy
  • then reply to say I had done so

I will leave the reader to spot the obvious problem there.

However, this lack of clear logic did remind me of the similarly bizarre statement about the conservation of energy in the National Curriculum which perhaps gave some credence to this being a government event.

Luckily, I was the intended recipient, but in any case I take the view that if someone sends me an unsolicited email, then they have no right to tell me what to do with it, and as in this case I discovered they had already announced the invitation on their website (see image above), I could not see how any court would uphold their claim that the message was confidential.

Government events?

I was clearly aware that just because an event was organised by an entity called "Government Events", that was no assurance this really was a government event. So I checked out the website. (The lack of any link in the invitation email to the event webpage, or indeed to the organisation more generally, might have been an oversight, but seemed odd.)

As you will have likely guessed, this was not a government event.

In situations such as this I am always put in mind of the 'song' 'Shirt' by the dadaist-inspired Bonzo Dog Doo-dah Band, which included a joke about a man who takes his shirt to a dry cleaner for 'express cleaning' and was then told it would be ready for collection in three weeks. 2

Three weeks!? But, the sign outside says '59 minute cleaning'

Yes, that's just the name of the shop, love.

On searching out the website I found that "Government Events" claims to be "Supporting UK Public Sector Teams to Deliver World Class Public Services" and to be "the UK's No. 1 provider of public sector conferences, training courses and case study focused insights and intelligence". By "public sector conferences" they presumably mean conferences for the public sector, not conferences in the public sector.

It transpired that "Government Events" is one brand of an organisation called "Professional Development Group". That organisation has a webpage featuring members of its "Advisory Board [which] is made up of senior executives and academics from both corporate and public-sector background" but its website did not seem to provide any information about its governance or who its executives or owners were. (Professional Development Group does have a listing in the Companies House registry showing two current directors.)


Possibly the senior leadership team at Professional Development Group? But probably not.


A bias against the private sector?

Perhaps I am simply displaying my personal bias towards the public sector? I've worked in state schools, a state college, and a state university. I have worked in the private sector if we include after-school, and holiday, jobs (mainly for Boots the Chemist or Boots the Pharmacist as they should be known), but my career is very much public sector. And I've not liked what I've seen as the inappropriate and misguided attempts to make the health and education service behave like a series of competing units in a free market. (And do not 'get me going' on the state of utilities in England now – the selling off of state assets at discounted rates to profit-making concerns (seemingly to fund temporary tax cuts for electoral advantage), and so replacing unitary authorities (with no need to budget for continuously needing to advertise and to try to poach each others' customers) by dozens of competing and, recently, often failing, profit-making companies that often own each other or are owned overseas.)

So, although I have no problem with the private sector, which no doubt does some things better, I am suspicious of core 'public sector' activities being moved into the commercial sphere.

Perhaps "Government Events" does a good job despite the misleading name. After all they are kite-marked by an organisation called the CPD Certification Service (a trademark of The CPD Certification Service Ltd, so another privately owned company. Again, the website did not give any information about governance or The CPD Certification Service Ltd's executives. But four directors are named in the public listing at Company's House). This all seems alien to someone from the public sector where organisations go out of their way to provide such information, and value transparency in such matters. (Three of the four directors share the same family name, 'Savage', which might raise some questions in a publicly governed organisation.)

A bit pricey for an on-line meeting?

But even if Professional Development Group do a wonderful job, do they offer value for money?

The conference is aimed at "teachers who work in STEM and senior leadership representatives from schools". If they work in state schools, the cost per delegate is £399.00 (plus V.A.T., but schools should be able to reclaim that). For that they get a one-day on-line conference. The chair (currently listed as "Keith Taber, Professor of Science Education, University of Cambridge (invited)", but that will need to be changed*) is due to open the event at 09.50, and to wind it up with some closing remarks at 16.20. The £399 will presumably not include accommodation, refreshments, lunch, a notepad, a branded pen, a tote bag for the goodies, or any of the other features of face-to-face events.

It will include a chance to hear a range of speakers. Currently listed (caveat emptor: "programme subject to change without notice") are ten specified presentations, as well as two Key Supporter Sessions(!). The advertised topics seem valuable:

  • National Trends and Updates on Boosting the Profile of STEM Subjects in Schools
  • Best Practice for Implementing Flexible Working to Help Recruit and Retain STEM Teachers
  • Providing an Inclusive and Accessible STEM Curriculum for Pupils with SEND
  • Driving Increased Interest and Participation in STEM Among Female Students
  • Encouraging Students from Disadvantaged Backgrounds to Study STEM Subjects
  • Taking Action to Boost Extracurricular Engagement with STEM Subjects 
  • Primary: Implementing a Whole School Approach to Boost the Profile of STEM Subjects
  • Secondary: Supporting Students to Succeed and Improve Outcomes in STEM Examinations
  • Primary: Partnership Working to Promote STEM Education in Primary Schools
  • Secondary: Working with Employers and Universities to Encourage Post-Secondary STEM

However, anyone looking to book should notice that at this point only one of the ten mooted speakers has confirmed – the rest are 'invited'.

I was also intrigued by the two slots reserved for 'Key Supporter Sessions'. You, dear reader, could buy ('sponsor') one of these slots to talk at the conference.

You can sponsor the conference

Professional Development Group offer "sponsorship and exhibition packages" for their events. This would allow a sponsor to meet their "target audience", to "demonstrate your products or services" and even to "speak alongside industry [sic] leading experts".

Someone wishing to invest as a Key Supporter (pricing not disclosed) gets branding on the Website and Event Guide and a "20-minute speaking slot followed by Q&A". (For this specific conference it seems you could buy time to sell your wares in the 10.40 slot or the 13.55 slot.)

  • Perhaps you have invented a new type of perpetual motion kit for use in the classroom and are seeking an opportunity to demonstrate and market your wares? ["demonstrate your products or services"]
  • Perhaps you think that evolution is not really science because it is only a theory, and you want to subject delegates to a diatribe on why impressionable young people should not be indoctrinated with such dangerous speculations? ["speak alongside industry [sic] leading experts"]
  • Perhaps your company mines and refines uranium ore, and is looking to find a market for the vast amounts of fine slag produced, and think it might make an excellent modelling material for use in design and technology classes? [meet "your target audience"]

A Strategic Headline Sponsor at a Professional Development Group event can also purchase other features such as a "pre show marketing email to all registered delegates". I guess the terms and conditions of signing up to a Professional Development Group event mean delegates agree to receive such sponsored advertising.

What's wrong with selling conference slots?

There is nothing inherently immoral about selling slots at a commercial conference – after all, it is a commercial event – so, it is primarily about 'the bottom line' of the balance sheet. But that's my point. This would be unacceptable at an academic conference, where some speakers are invited because they are considered to have something relevant to say, and others wishing to present have to submit their proposals to peer review.

What I find, if not immoral, certainly distasteful here, is that an on-line conference of the kind that would likely be arranged for free or for a nominal fee in an academic context, is being priced at £399 for state school teachers at a time when public services are under immense pressures and budgets need to be very wisely spent. How can this price be justified?

Perhaps the speaker fees are a significant cost. But I doubt that: I was not offered any fee to give up a day of my time to chair the meeting, and so I expect the other speakers are also being expected to speak for free as well. That's how things usually work in academia and the state sector. (But if this is a commercial activity, then the professional speakers SHOULD ask for a fee. If they are taking time out of school, and so already being paid, then perhaps the fee could be used to buy school books or pay for supply teachers?) Indeed, there are two slots for fee-paying speakers who wish to advertise their wares.

So, this is perhaps not actually a scam, but it does not meet the standards of honesty and transparency I would expect in the state sector (because it is only masquerading as state sector), and the event seems to be priced in order to make money for shareholders, not primarily to meet a mission of "Supporting UK Public Sector Teams".

If the COVID pandemic taught us anything, it is that many (probably not all, but surely most) meetings can be held just as well on line, so avoiding all the time, money and carbon footprints of moving people around the country. Oh, and consequently, it showed us that most of these meetings (a) can be offered for free where they are hosted by a public sector organisation that can consider them as meeting part of its core mission; and (b) that even when that does not apply, and so costs have to be covered, they can be arranged for a fraction of the expense of a face-to-face event at a hired venue.

As you may have guessed, I declined.*


* I replied to decline this opportunity on 19th November. Checking on 25th November, I see I am still listed as Chair (invited). See note 1



Notes

1 In the academic world, the term 'invited speaker' is used to designate a conference speaker who was invited by the organisers, in contrast to a speaker who applied to speak and proposed a contribution in response to an open call. However, 'invited speaker' here seems to mean someone who has been invited to speak, in contrast to someone who has agreed to.


2 I have a pretty poor memory, but do recall seeing Bonzo stalwart Neil Innes play at Nottingham University when I was a student. He sang their most successful song, "I'm the urban spaceman" (which reached no. 5 in the UK singles chart and led to Innes getting an Ivor Novello award for his song-writing), then announced, deadpan, "that was a medley of our hit".

The Bonzos

Falsifying research conclusions

You do not need to falsify your results if you are happy to draw conclusions contrary to the outcome of your data analysis.


Keith S. Taber


Li and colleagues claim that their innovation is successful in improving teaching quality and student learning: but their own data analysis does not support this.

I recently read a research study to evaluate a teaching innovation where the authors

  • presented their results,
  • reported the statistical test they had used to analyse their results,
  • acknowledged that the outcome of their experiment was negative (not statistically significant), then
  • stated their findings as having obtained a positive outcome, and
  • concluded their paper by arguing they had demonstrated their teaching innovation was effective.

Li, Ouyang, Xu and Zhang's (2022) paper in the Journal of Chemical Education contravenes the scientific norm that your conclusions should be consistent with the outcome of your data analysis.
(Magnified portions of this scheme are presented below)

And this was not in a paper in one of those predatory journals that I have criticised so often here – this was a study in a well regarded journal published by a learned scientific society!

The legal analogy

I have suggested (Taber, 2013) that writing up research can be understood in terms of a number of metaphoric roles: researchers need to

  • tell the story of their research;
  • teach readers about the unfamiliar aspects of their work;
  • make a case for the knowledge claims they make.

Three metaphors for writing-up research

All three aspects are important in making a paper accessible and useful to readers, but arguably the most important aspect is the 'legal' analogy: a research paper is an argument to make a claim for new public knowledge. A paper that does not make its case does not add anything of substance to the literature.

Imagine a criminal case where the prosecution seeks to make its argument at a pre-trial hearing:

"The police found fingerprints and D.N.A. evidence at the scene, which they believe were from the accused."

"Were these traces sent for forensic analysis?"

"Of course. The laboratory undertook the standard tests to identify who left these traces."

"And what did these analyses reveal?"

"Well according to the current standards that are widely accepted in the field, the laboratory was unable to find a definite match between the material collected at the scene, and fingerprints and a D.N.A. sample provided by the defendant."

"And what did the police conclude from these findings?"

"The police concluded that the fingerprints and D.N.A. evidence show that the accused was at the scene of the crime."

It seems unlikely that such a scenario has ever played out, at least in any democratic country where there is an independent judiciary, as the prosecution would be open to ridicule and it is quite likely the judge would have some comments about wasting court time. What would seem even more remarkable, however, would be if the judge decided on the basis of this presentation that there was a prima facie case to answer that should proceed to a full jury trial.

Yet in educational research, it seems parallel logic can be persuasive enough to get a paper published in a good peer-reviewed journal.

Testing an educational innovation

The paper was entitled 'Implementation of the Student-Centered Team-Based Learning Teaching Method in a Medicinal Chemistry Curriculum' (Li, Ouyang, Xu & Zhang, 2022), and it was published in the Journal of Chemical Education. 'J.Chem.Ed.' is a well-established, highly respected periodical that takes peer review seriously. It is published by a learned scientific society – the American Chemical Society.

That a study published in such a prestige outlet should have such a serious and obvious flaw is worrying. Of course, no matter how good editorial and peer review standards are, it is inevitable that sometimes work with serious flaws will get published, and it is easy to pick out the odd problematic paper and ignore the vast majority of quality work being published. But, I did think this was a blatant problem that should have been spotted.

Indeed, because I have a lot of respect for the Journal of Chemical Education I decided not to blog about it ("but that is what you are doing…?"; yes, but stick with me) and to take time to write a detailed letter to the journal setting out the problem in the hope this would be acknowledged and the published paper would not stand unchallenged in the literature. The journal declined to publish my letter although the referees seemed to generally accept the critique. This suggests to me that this was not just an isolated case of something slipping through – but a failure to appreciate the need for robust scientific standards in publishing educational research.

Read the letter submitted to the Journal of Chemical Education

A flawed paper does not imply worthless research

I am certainly not suggesting that there is no merit in Li, Ouyang, Xu and Zhang's work. Nor am I arguing that their work was not worth publishing in the journal. My argument is that Li and colleague's paper draws an invalid conclusion, and makes misleading statements inconsistent with the research data presented, and that it should not have been published in this form. These problems are pretty obvious, and should (I felt) have been spotted in peer review. The authors should have been asked to address these issues, and follow normal scientific standards and norms such that their conclusions follow from, rather than contradict, their results.

That is my take. Please read my reasoning below (and the original study if you have access to J.Chem.Ed.) and make up your own mind.

Li, Ouyang, Xu and Zhang report an innovation in a university course. They consider this to have been a successful innovation, and it may well have great merits. The core problem is that Li and colleagues claim that their innovation is successful in improving teaching quality and student learning, when their own data analysis does not support this.

The evidence for a successful innovation

There is much material in the paper on the nature of the innovation, and there is evidence about student responses to it. Here, I am only concerned with the failure of the paper to offer a logical chain of argument to support their knowledge claim that the teaching innovation improved student achievement.

There are (to my reading – please judge for yourself if you can access the paper) some slight ambiguities in some parts of the description of the collection and analysis of achievement data (see note 5 below), but the key indicator relied on by Li, Ouyang, Xu and Zhang is the average score achieved by students in four teaching groups: three of which experienced the teaching innovation (these are denoted collectively as 'the experimental group'), and one group which did not (denoted as 'the control group', although there is no control of variables in the study 1). Each class comprised 40 students.

The study is not published open access, so I cannot reproduce the copyright figures from the paper here, but below I have drawn a graph of these key data:


Key results from Li et al, 2022: this data was the basis for claiming an effective teaching innovation.


It is on the basis of this set of results that Li and colleagues claim that "the average score showed a constant upward trend, and a steady increase was found". Surely, anyone interrogating these data might pause to wonder if that is the most authentic description of the pattern of scores year on year.

Does anyone teaching in a university really think that assessment methods are good enough to produce average class scores that are meaningful to 3 or 4 significant figures? To a more reasonable level of precision, the nearest percentage point (which is presumably what these numbers are – that is not made explicit), the results were:


Cohort: Average class score
2017: 80
2018: 80
2019: 80
2020: 80

Average class scores (2 s.f.) year on year

When presented to a realistic level of precision, the obvious pattern is…no substantive change year on year!
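To make the precision point concrete, here is a trivial sketch in Python (the four averages are invented values within the 79.4-80.4 window discussed below, not the published figures):

# Averages quoted to 3-4 significant figures all collapse to the same
# value once rounded to a realistic level of precision.
averages = [79.65, 79.81, 80.04, 80.31]   # hypothetical cohort averages
print([round(a) for a in averages])        # -> [80, 80, 80, 80]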

A truncated graph

In their paper, Li and colleagues do present a graph comparing the average results in 2017 with (not 2018, but) 2019 and 2020, somewhat similar to the one I have reproduced here, which should have made it very clear how little the scores varied between cohorts. However, Li and colleagues did not include on their axis the full range of possible scores, but rather only a small portion of the full range – from 79.4 to 80.4.

This is a perfectly valid procedure often used in science, and it is quite explicitly done (the x-axis is clearly marked), but it does give a visual impression of a large spread of scores, which could be quite misleading. In effect, their Figure 4b includes just a sliver of my graph above, as shown below. If one takes the portion of the image below that is not greyed out, and stretches it to cover the full extent of the x axis of a graph, that is what is presented in the published account.


In the paper in J.Chem.Ed., Li and colleagues (2022) truncate the scale on their average score axis to expand 1% of the full range (approximated above in the area not shaded over) into a whole graph as their Figure 4b. This gives a visual impression of widely varying scores (to anyone who does not read the axis labels).
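For readers who want to experiment, here is a minimal sketch (in Python, using matplotlib) of the visual effect being described. The four scores are hypothetical values within the reported 79.4-80.4 window, not the published data, and I have drawn vertical bars for simplicity, whichever orientation the original figure used:

# Plot the same (hypothetical) cohort averages twice: once against the
# full 0-100 range of possible scores, and once with the score axis
# truncated to a one-point window, as in the published Figure 4b.
import matplotlib.pyplot as plt

cohorts = ["2017", "2018", "2019", "2020"]
scores = [79.7, 79.6, 79.9, 80.3]   # hypothetical averages near 80%

fig, (full, truncated) = plt.subplots(1, 2, figsize=(8, 3))

full.bar(cohorts, scores)
full.set_ylim(0, 100)               # full range: the bars look identical
full.set_title("Full scale")

truncated.bar(cohorts, scores)
truncated.set_ylim(79.4, 80.4)      # 1% of the range: apparently big differences
truncated.set_title("Truncated scale")

for ax in (full, truncated):
    ax.set_xlabel("Cohort")
    ax.set_ylabel("Average class score (%)")

plt.tight_layout()
plt.show()

The data are identical in both panels; only the axis limits differ – which is precisely why the truncated version can mislead anyone who does not read the axis labels.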


What might have caused those small variations?

If anyone does think that differences of a few tenths of a percent in average class scores are notable, and that this demonstrates increasing student achievement, then we might ask what causes this?

Li and colleagues seem to be convinced that the change in teaching approach caused the (very modest) increase in scores year on year. That would be possible. (Indeed, Li et al seem to be arguing that the very, very modest shift from 2017 to subsequent years was due to the change of teaching approach; but the not-quite-so-modest shifts from 2018 to 2019 to 2020 are due to developing teacher competence!) However, drawing that conclusion requires making a ceteris paribus assumption: that all other things are equal. That is, that any other relevant variables have been controlled.

Read about confounding variables

Another possibility however is simply that each year the teaching team are more familiar with the science, and have had more experience teaching it to groups at this level. That is quite reasonable and could explain why there might be a modest increase in student outcomes on a course year on year.

Non-equivalent groups of students?

However, a big assumption here is that each of the year groups can be considered to be intrinsically the same at the start of the course (and to have equivalent relevant experiences outside the focal course during the programme). Often in quasi-experimental studies (where randomisation to conditions is not possible 1) a pre-test is used to check for equivalence prior to the innovation: after all, if students are starting from different levels of background knowledge and understanding then they are likely to score differently at the end of a course – and no further explanation of any measured differences in course achievement need be sought.

Read about testing for initial equivalence

In experiments, you randomly assign the units of analysis (e.g., students) to the conditions, which gives some basis for at least comparing any differences in outcomes with the variations likely by chance. But this was not a true experiment as there was no randomisation – the comparisons are between successive year groups.

In Li and colleagues' study, the 40 students taking the class in 2017 are implicitly assumed equivalent to the 40 students taking the class in each of the years 2018-2020: but no evidence is presented to support this assumption. 3

Yet anyone who has taught the same course over a period of time knows that even when a course is unchanged and the entrance requirements stable, there are naturally variations from one year to the next. That is one of the challenges of educational research (Taber, 2019): you never can "take two identical students…two identical classes…two identical teachers…two identical institutions".

Novelty or expectation effects?

We would also have to ignore any difference introduced by the general effect of there being an innovation beyond the nature of the specific innovation (Taber, 2019). That is, students might be more attentive and motivated simply because this course does things differently to their other current courses and past courses. (Perhaps not, but it cannot be ruled out.)

The researchers are likely enthusiastic for, and had high expectations for, the innovation (so high that it seems to have biased their interpretation of the data and blinded them to the obvious problems with their argument) and much research shows that high expectation, in its own right, often influences outcomes.

Read about expectancy effects in studies

Equivalent examination questions and marking?

We also have to assume the assessment was entirely equivalent across the four years. 4 The scores were based on aggregating a number of components:

"The course score was calculated on a percentage basis: attendance (5%), preclass preview (10%), in-class group presentation (10%), postclass mind map (5%), unit tests (10%), midterm examination (20%), and final examination (40%)."

Li, et al, 2022, p.1858

This raises questions about the marking and the examinations:

  • Were the same test and examination questions used each year (that is not usually the case, as students can acquire copies of past papers)?
  • If not, how were these instruments standardised to ensure they were not more difficult in some years than others?
  • How reliable is the marking? (Reliable meaning the same scores/marks would be assigned to the same work on a different occasion.)

These various issues do not appear to have been considered.

Change of assessment methodology?

The description above of how the students' course scores were calculated raises another problem. The 2017 cohort were taught by "direct instruction". This is not explained, as the authors presumably think we all know exactly what that is: I imagine lectures. By comparison, in the innovation (2018-2020 cohorts):

"The preclass stage of the SCTBL strategy is the distribution of the group preview task; each student in the group is responsible for a task point. The completion of the preview task stimulates students' learning motivation. The in-class stage is a team presentation (typically PowerPoint (PPT)), which promotes students' understanding of knowledge points. The postclass stage is the assignment of team homework and consolidation of knowledge points using a mind map. Mind maps allow an orderly sorting and summarization of the knowledge gathered in the class; they are conducive to connecting knowledge systems and play an important role in consolidating class knowledge."

Li, et al, 2022, p.1856, emphasis added.

Now the assessment of the preview tasks, the in-class group presentations, and the mind maps all contributed to the overall student scores (10%, 10%, 5% respectively). But these are parts of the innovative teaching strategy – they are (presumably) not part of 'direct instruction'. So, the description of how the student class scores were derived only applies to 2018-2020, and the methodology used in 2017 must have been different. (This is not discussed in the paper.) 5

A quarter of the score for the 'experimental' groups came from assessment components that could not have been part of the assessment regime applied to the 2017 cohort. At the very least, the tests and examinations must have been more heavily weighted in the 'control' group students' overall scores. This makes it very unlikely the scores can be meaningfully directly compared from 2017 to subsequent years: if the authors think otherwise they should have presented persuasive evidence of equivalence.


Li and colleagues want to convince us that variations in average course scores can be assumed to be due to a change in teaching approach – even though there are other confounding variables.

So, groups that we cannot assume are equivalent are assessed in ways that we cannot assume to be equivalent and obtain nearly identical average levels of achievement. Despite that, Li and colleagues want to persuade us that the very modest differences in average scores between the 'control' and 'experimental' groups (differences which are actually larger between successive 'experimental' cohorts than between the 'control' group and the first 'experimental' cohort) are large enough to be significant and demonstrate their teaching innovation improves student achievement.

Statistical inference

So, even if we thought shifts of less than 1% in average class achievement were telling, there are no good reasons to assume they are down to the innovation rather than some other factor. But Li and colleagues use statistical tests to tell them whether differences between the 'control' and 'experimental' conditions are significant. They find – just what anyone looking at the graph above would expect – "there is no significant difference in average score" (p.1860).

The scientific convention in using such tests is that the choice of test, and confidence level (e.g., a probability of p<0.05 to be taken as significant) is determined in advance, and the researchers accept the outcomes of the analysis. There is a kind of contract involved – a decision to use a statistical test (chosen in advance as being a valid way of deciding the outcome of an experiment) is seen as a commitment to accept its outcomes. 2 This is a form of honesty in scientific work. Just as it is not acceptable to fabricate data, nor is it acceptable to ignore experimental outcomes when drawing conclusions from research.
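To make the 'contract' concrete, here is a minimal sketch (using hypothetical scores, not the published data) of how such a test is meant to be used: the test and threshold are fixed first, and whatever verdict is printed is the conclusion the researcher has signed up to.

```python
# Sketch of the statistical 'contract' (scores are hypothetical, not
# Li et al.'s data): choose the test and threshold in advance, then
# accept whatever verdict the analysis returns.
import random
from scipy import stats

random.seed(2)
control = [random.gauss(79.8, 8) for _ in range(40)]       # hypothetical
experimental = [random.gauss(80.3, 8) for _ in range(40)]  # hypothetical

ALPHA = 0.05  # significance level fixed BEFORE seeing the data

t_stat, p_value = stats.ttest_ind(control, experimental)
verdict = "significant" if p_value < ALPHA else "not significant"
print(f"t = {t_stat:.2f}, p = {p_value:.3f}: difference is {verdict}")
```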

Special pleading is allowed in mitigation (e.g., "although our results were non-significant, we think this was due to the small sample sizes, and suggest that further research should be undertaken with large groups {and we are happy to do this if someone gives us a grant}"), but the scientist is not allowed to simply set aside the results of the analysis.


Li and colleagues found no significant difference between the two conditions, yet that did not stop them claiming, and the Journal of Chemical Education publishing, a conclusion that the new teaching approach improved student achievement!

Yet setting aside the results of their analysis is what Li and colleagues do. They carry out an analysis, then simply ignore the findings, and conclude the opposite:

"To conclude, our results suggest that the SCTBL method is an effective way to improve teaching quality and student achievement."

Li, et al, 2022, p.1861

It was this complete disregard of scientific values, rather than the more common failure to appreciate that they were not comparing like with like, that I found really shocking – and that led to me writing a formal letter to the journal. The shock was not so much that researchers might do this (I know how intoxicating research can be, and how easy it is to become convinced of one's own ideas) but that the peer reviewers for the Journal of Chemical Education did not make the firmest recommendation to the editor that this manuscript could NOT be published until it was corrected so that the conclusion was consistent with the findings.

This seems a very stark failure of peer review, and allows a paper to appear in the literature that presents a conclusion totally unsupported by the evidence available and the analysis undertaken. This also means that Li, Ouyang, Xu and Zhang now have a publication on their academic records that any careful reader can see is critically flawed – something that could have been avoided had peer reviewers:

  • used their common sense to appreciate that variations in class average scores from year to year between 79.8 and 80.3 could not possibly be seen as sufficient to indicate a difference in the effectiveness of teaching approaches;
  • recommended that the authors follow the usual scientific norms and adopt the reasonable scholarly value position that the conclusion of one's research should follow from, and not contradict, the results of one's data analysis.


Work cited:

Li, W., Ouyang, Y., Xu, J., & Zhang, P. (2022). Implementation of the Student-Centered Team-Based Learning Teaching Method in a Medicinal Chemistry Curriculum. Journal of Chemical Education, 99(5), 1855-1862. https://doi.org/10.1021/acs.jchemed.1c00978

Taber, K. S. (2019). Experimental research into teaching innovations: responding to methodological and ethical challenges. Studies in Science Education, 55(1), 69-119. https://doi.org/10.1080/03057267.2019.1658058

Notes

1 Strictly the 2017 cohort has the role of a comparison group, but NOT a control group as there was no randomisation or control of variables, so this was not a true experiment (but a 'quasi-experiment'). However, for clarity, I am here using the original authors' term 'control group'.

Read about experimental research design


2 Some journals are now asking researchers to submit their research designs and protocols to peer review BEFORE starting the research. This prevents wasted effort on work that is flawed in design. Journals will publish a report of the research carried out according to an accepted design – as long as the researchers have kept to their research plans (or only made changes deemed necessary and acceptable by the journal). This prevents researchers from seeking to change features of the research because it is not giving the expected findings, and means that negative results as well as positive results do get published.


3 'Implicitly' assumed as nowhere do the authors state that they think the classes all start as equivalent – but if they do not assume this then their argument has no logic.

Without this assumption, their argument is like claiming that growing conditions for tree development are better at the front of a house than at the back because on average the trees at the front are taller – even though fast-growing mature trees were planted at the front and slow-growing saplings at the back.


4 From my days working with new teachers, a common rookie mistake was assuming that one could tell a teaching innovation was successful because students achieved an average score of 63% on the (say, acids) module taught by the new method when the same class only averaged 46% on the previous (say, electromagnetism) module. Graduate scientists would look at me with genuine surprise when I asked how they knew the two tests were of comparable difficulty!

Read about why natural scientists tend to make poor social scientists


5 In my (rejected) letter to the Journal of Chemical Education I acknowledged some ambiguity in the paper's discussion of the results. Li and colleagues write:

"The average scores of undergraduates majoring in pharmaceutical engineering in the control group and the experimental group were calculated, and the results are shown in Figure 4b. Statistical significance testing was conducted on the exam scores year to year. The average score for the pharmaceutical engineering class was 79.8 points in 2017 (control group). When SCTBL was implemented for the first time in 2018, there was a slight improvement in the average score (i.e., an increase of 0.11 points, not shown in Figure 4b). However, by 2019 and 2020, the average score increased by 0.32 points and 0.54 points, respectively, with an obvious improvement trend. We used a t test to test whether the SCTBL method can create any significant difference in grades among control groups and the experimental group. The calculation results are shown as follows: t1 = 0.0663, t2 = 0.1930, t3 =0.3279 (t1 <t2 <t3 <t𝛼, t𝛼 =2.024, p>0.05), indicating that there is no significant difference in average score. After three years of continuous implementation of SCTBL, the average score showed a constant upward trend, and a steady increase was found. The SCTBL method brought about improvement in the class average, which provides evidence for its effectiveness in medicinal chemistry."

Li, et al, 2022, pp.1858-1860, emphasis added
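As an aside, the quoted critical value can be reconstructed (my own check, not anything taken from the paper): tα = 2.024 corresponds to a two-tailed test at p = 0.05 with 38 degrees of freedom, and all three reported statistics fall far short of it.

```python
# Reconstruction check (mine, not from the paper): the quoted critical
# value t_alpha = 2.024 matches a two-tailed alpha = 0.05 test with
# 38 degrees of freedom.
from scipy import stats

t_alpha = stats.t.ppf(1 - 0.05 / 2, df=38)
print(f"t_alpha = {t_alpha:.3f}")  # ~2.024, as the paper states

for i, t in enumerate([0.0663, 0.1930, 0.3279], start=1):
    outcome = "exceeds" if t > t_alpha else "is nowhere near"
    print(f"t{i} = {t:.4f} {outcome} the threshold")
```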

The quoted passage appears to refer to three distinct measures:

  • average scores (produced by weighted summations of various assessment components, as discussed above)
  • exam scores (perhaps just the "midterm examination…and final examination", or perhaps just the final examination?)
  • grades

Formal grades are not discussed in the paper (the word is only used in this one place), although the authors do refer to categorising students into descriptive classes ('levels') according to scores on 'assessments', and may see these as grades:

"Assessments have been divided into five levels: disqualified (below 60), qualified (60-69), medium (70-79), good (80-89), and excellent (90 and above)."

Li, et al, 2022, p.1856, emphasis added

In the longer extract above, the reference to testing difference in "grades" is followed by reporting the outcome of the test for "average score":

"We used a t test to test …grades …The calculation results … there is no significant difference in average score"

As Student's t-test was used, it seems unlikely that the assignment of students to grades could have been tested. That would surely have needed something like the Chi-squared statistic to test categorical data – looking for an association between (i) the distributions of the number of students in the different cells 'disqualified', 'qualified', 'medium', 'good' and 'excellent'; and (ii) treatment group.
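For the record, a sketch of what such a contingency-table test might look like (all counts are invented purely for illustration; the paper reports no such table):

```python
# Sketch of a chi-squared test on grade categories (all counts
# invented for illustration; nothing like this appears in the paper).
from scipy.stats import chi2_contingency

# Rows: cohorts; columns: disqualified, qualified, medium, good, excellent
counts = [
    [1, 6, 14, 15, 4],   # hypothetical 'control' cohort (n = 40)
    [1, 5, 13, 16, 5],   # hypothetical 'experimental' cohort (n = 40)
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```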

Presumably, then, the statistical testing was applied to the average course scores shown in the graph above. This also makes sense because the classification into descriptive classes loses some of the detail in the data and there is no obvious reason why the researchers would deliberately choose to test 'reduced' data rather than the full data set with the greatest resolution.


Methodological and procedural flaws in published study

A letter to the editor of the Journal of Chemical Education

the authors draw a conclusion which is contrary to the results of their data analysis and so is invalid and misleading

I have copied below the text of a letter I wrote to the editor of the Journal of Chemical Education, to express my concern about the report of a study published in that journal. I was invited to formally submit the letter for consideration for publication. I did. Following peer review it was rejected.

Often when I see apparent problems in published research, I discuss them here. Usually, the journals concerned are predatory, and do not seem to take peer review seriously. That does not apply here. The Journal of Chemical Education is a long-established, well-respected periodical published by a national learned scientific society: the American Chemical Society. Serious scientific journals often do publish comments from readers about published articles and even exchanges between correspondents and the original authors of the work commented on. I therefore thought it was more appropriate to express my concerns directly to the journal. 𝛂 On this occasion, after peer review, the editor decided my letter was not suitable for publication. 𝛃

I am aware of the irony – I am complaining about an article which passed peer review, in a posting which publishes a letter that was submitted, but rejected, after peer review. Readers should bear that in mind. The editor will have carefully considered the submission and the referee recommendations and reports, and decided to decline publication based on journal policy and the evaluation of my submission.

However, having read the peer reviewers' comments (which were largely positive about the submission and tended to agree with my critique 𝜸), I saw no reason to change my mind. If such work is allowed to stand in the literature without comment, it provides a questionable example for other researchers, and, as the abstracts and conclusions from research papers are often considered in isolation (so, here, without being aware that the conclusions contradicted the results), it distorts the research literature.

To my reading, the published study sets aside accepted scientific standards and values – though I very much suspect inadvertently. Perhaps the authors' enthusiasm for their teaching innovation affected their judgement and dulled their critical faculties. We are all prone to that: but one would normally expect such a major problem to have been spotted in peer review, allowing the authors the opportunity to put this right before publication.

Read about falsifying research conclusions


Methodological and procedural flaws in published study

Abstract

A recent study reported in the journal is presented as an experimental test of a teaching innovation. Yet the research design does not meet the conditions for an experiment as there is insufficient control of variables and no random assignment to conditions. The study design used does not allow a comparison of student scores in the 'experimental' and 'control' conditions to provide a valid test of the innovation. Moreover, the authors draw a conclusion which is contrary to the results of their data analysis and so is invalid and misleading. While the authors may well feel justified in putting aside the outcome of their statistical analysis, this goes against good scientific norms and practice.

Dear Editor

I am writing regarding a recent article published in J.Chem.Ed. 1, as I feel the reporting of this study, as published, is contrary to good scientific practice. The article, 'Implementation of the Student-Centered Team-Based Learning Teaching Method in a Medicinal Chemistry Curriculum' reports an innovation in pedagogy, and as such is likely to be of wide interest to readers of the journal. I welcome both this kind of work in developing pedagogy and its reporting to inform others; however, I think the report contravenes normal scientific standards.

Although the authors do not specify the type of research methodology they use, they do present their analysis in terms of 'experimental' and 'control' groups (e.g., p.1856), so it is reasonable to consider they see this as a kind of experimental research. There are many serious challenges when applying experimental method to social research, and it is not always feasible to address all such challenges in educational research designs 2 – but perhaps any report of educational experimental research should acknowledge relevant limitations.

A true experiment requires units of analysis (e.g., students) to be assigned to conditions randomly, as this can avoid (or, strictly, reduce the likelihood of) systematic differences between groups. Here the comparison is across different cohorts. These may be largely similar, but that cannot just be assumed. (Strictly, the comparison group should not be labelled as a 'control' group. 2) There is clearly a risk of confounding variables.

  • Perhaps admission standards are changing over time?
  • Perhaps the teaching team has been acquiring teaching experience and expertise over time regardless of the innovation?

Moreover, if I have correctly interpreted the information on p.1858 about how student course scores after the introduction of the innovation derived in part from the novel activities in the new approach, then there is no reason to assume that the methodology of assigning scores is equivalent to that used in the 'control' (comparison) condition. The authors seem to simply assume the change in scoring methodology will not of itself change the score profile. Without evidence that assessment is equivalent across cohorts, this is an unsupportable assumption.

As it is not possible to 'blind' teachers and students to conditions there is a very real risk of expectancy effects which have been shown to often operate when researchers are positive about an innovation – when introducing the investigated innovation, teachers

  • may have a new burst of enthusiasm,
  • perhaps focus more than usual on this aspect of their work,
  • be more sensitive to students' responses to teaching, and so forth.

(None of this needs to be deliberate to potentially influence outcomes.) Although (indeed, perhaps because) there is often little that can be done in a teaching situation to address these challenges to experimental designs, it seems appropriate for suitable caveats to be included in a published report. I would have expected to have seen such caveats here.

However, a specific point that I feel must be challenged is in the presentation of results on p.1860. When designing an experiment, it is important to specify before collecting data how one will know what to conclude from the results. The adoption of inferential statistics is surely a commitment to accepting the outcomes of the analysis undertaken. Li and colleagues tell readers that "We used a t test to test whether the SCTBL method can create any significant difference in grades among control groups and the experimental group" and that "there is no significant difference in average score". This is despite the new approach requiring an "increased number of study tasks, and longer preclass preview time" (pp.1860-1).

I would not suggest this is necessarily a good enough reason for Li and colleagues to give up on their innovation, as they have lived experience of how it is working, and that may well offer good grounds for continuing to implement, refine, and evaluate it. As the authors themselves note, evaluation "should not only consider scores" (p.1858).

However, from a scientific point of view, this is a negative result. That certainly should not exclude publication (it is recognised that there is a bias against publishing negative results which distorts the literature in many fields) but it suggests, at the very least, that more work is needed before a positive conclusion can be drawn.

Therefore, I feel it is scientifically invalid for the authors to argue that as "the average score showed a constant [i.e., non-significant] upward trend, and a steady [i.e., non-significant] increase was found" they can claim their teaching "method brought about improvement in the class average, which provides evidence for its effectiveness in medicinal chemistry". Figure 4 reiterates this: a superficially impressive graphic, even if it omits the 2018 data, actually shows just how little scores changed when it is noticed that the x-axis has a range only from 79.4-80.4 (%, presumably). The size of the variation across four cohorts (<1%, "an obvious improvement trend"?) is not only found to be non-significant, but can also be compared with how 25% of student scores apparently derived from different types of assessment in the different conditions. 3

To reiterate, this is an interesting study, reporting valuable work. There might be very good reasons to continue the new pedagogic approach even if it does not increase student scores. However, I would argue that it is simply scientifically inadmissible to design an experiment where data will be analysed by statistical tests, and then to offer a conclusion contrary to the results of those tests. A reader who skipped to the end of the paper would find "To conclude, our results suggest that the SCTBL method is an effective way to improve teaching quality and student achievement" (p.1861) but that is to put aside the results of the analysis undertaken.


Keith S. Taber

Emeritus Professor of Science Education, University of Cambridge

References

1 Li, W., Ouyang, Y., Xu, J., & Zhang, P. (2022). Implementation of the Student-Centered Team-Based Learning Teaching Method in a Medicinal Chemistry Curriculum. Journal of Chemical Education, 99(5), 1855-1862. https://doi.org/10.1021/acs.jchemed.1c00978

2 Taber, K. S. (2019). Experimental research into teaching innovations: responding to methodological and ethical challenges. Studies in Science Education, 55(1), 69-119. https://doi.org/10.1080/03057267.2019.1658058

3 I felt there was some ambiguity regarding what figures 4a and 4b actually represent. The labels suggest they refer to "Assessment levels of pharmaceutical engineering classes [sic] in 2017-2020" and "Average scores of the medicinal chemistry course in the control group and the experimental group" (which might, by inspection, suggest that achievement on the medicinal chemistry course is falling behind shifts across the wider programme), but the references in the main text suggest that both figures refer only to the medicinal chemistry course, not the wider pharmaceutical engineering programme. Similarly, although the label for (b) refers to 'average scores' for the course, the text suggests the statistical tests were only applied to 'exam scores' (p.1858) which would only amount to 60% of the marks comprising the course scores (at least in 2018-2020; the information on how course scores were calculated for the 2017 cohort does not seem to be provided but clearly could not follow the methodology reported for the 2018-2020 cohorts). So, given that (a) and (b) do not seem consistent, it may be that the 'average scores' in (b) refers only to examination scores and not overall course scores. If so, that would at least suggest the general assessment methodology was comparable, as long as the setting and marking of examinations are equivalent across different years. However, even then, a reader would take a lot of persuasion that examination papers and marking are so consistent over time that changes of a third or half a percentage point between cohorts exceed likely measurement error.


Read: Falsifying research conclusions. You do not need to falsify your results if you are happy to draw conclusions contrary to the outcome of your data analysis.


Notes:

𝛂 This is the approach I have taken previously. For example, a couple of years ago a paper was published in the Royal Society of Chemistry's educational research journal, Chemistry Education Research and Practice, which to my reading had similar issues, including claiming "that an educational innovation was effective despite outcomes not reaching statistical significance" (Taber, 2020).

Taber, K. S. (2020). Comment on "Increasing chemistry students' knowledge, confidence, and conceptual understanding of pH using a collaborative computer pH simulation" by S. W. Watson, A. V. Dubrovskiy and M. L. Peters, Chem. Educ. Res. Pract., 2020, 21, 528. Chemistry Education Research and Practice. doi:10.1039/D0RP00131G


𝛃 I wrote directly to the editor, Prof. Tom Holme, on 12th July 2022. I received a reply the next day, inviting me to submit my letter through the journal's manuscript submission system. I did this on the 14th.

I received the decision letter on 15th September. (The "manuscript is not suitable for publication in the Journal of Chemical Education in its present form.") The editor offered to consider a resubmission of "a thoroughly rewritten manuscript, with substantial modification, incorporating the reviewers' points and including any additional data they recommended". I decided that, although I am sure the letter could have been improved in some senses, any new manuscript sufficiently different to be considered a "thoroughly rewritten manuscript, with substantial modification" would not so clearly make the important points I felt needed to be made.


𝜸 There were four reviewers. The editor informed me that the initial reviews led to a 'split' perspective, so a fourth referee was invited.

  • Referee 1 recommended that the letter be published as submitted.
  • Referee 2 recommended that the letter be published as submitted.
  • Referee 3 recommended that major revisions be undertaken.
  • Referee 4 recommended rejection.

Read more about peer review and editorial decisions

Delusions of educational impact

A 'peer-reviewed' study claims to improve academic performance by purifying the souls of students suffering from hallucinations


Keith S. Taber


The research design is completely inadequate…the whole paper is confused…the methodology seems incongruous…there is an inconsistency…nowhere is the population of interest actually identified…No explanation of the discrepancy is provided…results of this analysis are not reported…the 'interview' technique used in the study is highly inadequate…There is a conceptual problem here…neither the validity nor reliability can be judged…the statistic could not apply…the result is not reported…approach is completely inappropriate…these tables are not consistent…the evidence is inconclusive…no evidence to demonstrate the assumed mechanism…totally unsupported claims…confusion of recommendations with findings…unwarranted generalisation…the analysis that is provided is useless…the research design is simply inadequate…no control condition…such a conclusion is irresponsible

Some issues missed in peer review for a paper in the European Journal of Education and Pedagogy

An invitation to publish without regard to quality?

I received an email from an open-access journal called the European Journal of Education and Pedagogy, with the subject heading 'Publish Fast and Pay Less' which immediately triggered the thought "another predatory journal?" Predatory journals publish submissions for a fee, but do not offer the editorial and production standards expected of serious research journals. In particular, they publish material which clearly falls short of rigorous research despite usually claiming to engage in peer review.

A peer reviewed journal?

Checking out the website I found the usual assurances that the journal used rigorous peer review:

"The process of reviewing is considered critical to establishing a reliable body of research and knowledge. The review process aims to make authors meet the standards of their discipline, and of science in general.

We use a double-blind system for peer-reviewing; both reviewers and authors' identities remain anonymous to each other. The paper will be peer-reviewed by two or three experts; one is an editorial staff and the other two are external reviewers."

https://www.ej-edu.org/index.php/ejedu/about

Peer review is critical to the scientific process. Work is only published in (serious) research journals when it has been scrutinised by experts in the relevant field, and any issues raised responded to in terms of revisions sufficient to satisfy the editor.

I could not find who the editor(-in-chief) was, but the 'editorial team' of European Journal of Education and Pedagogy were listed as

  • Bea Tomsic Amon, University of Ljubljana, Slovenia
  • Chunfang Zhou, University of Southern Denmark, Denmark
  • Gabriel Julien, University of Sheffield, UK
  • Intakhab Khan, King Abdulaziz University, Saudi Arabia
  • Mustafa Kayıhan Erbaş, Aksaray University, Turkey
  • Panagiotis J. Stamatis, University of the Aegean, Greece

I decided to look up the editor based in England, where I am also based, but could not find a web presence for him at the University of Sheffield. Using the ORCID (Open Researcher and Contributor ID) provided on the journal website, I found his ORCID biography places him at the University of the West Indies and makes no mention of Sheffield.

If the European Journal of Education and Pedagogy is organised like a serious research journal, then each submission is handled by one of this editorial team. However, the reference to "editorial staff" might well imply that, like some other predatory journals I have been approached by (e.g., Are you still with us, Doctor Wu?), the editorial work is actually carried out by office staff, not qualified experts in the field.

That would certainly help explain the publication, in this 'peer-reviewed research journal', of the first paper that piqued my interest enough to motivate me to access and read the text.


The Effects of Using the Tazkiyatun Nafs Module on the Academic Achievement of Students with Hallucinations

The abstract of the paper published in what claims to be a peer-reviewed research journal

The paper initially attracted my attention because it seemed to be about the treatment of a medical condition, so I wondered what it was doing in an education journal. Yet the paper seemed to also be about an intervention to improve academic performance. As I read the paper, I found a number of flaws and issues (some very obvious, some quite serious) that should have been spotted by any qualified reviewer or editor, and which should have indicated that possible publication should have been deferred until these matters were satisfactorily addressed.

This is especially worrying as this paper makes claims relating to the effective treatment of a symptom of potentially serious, even critical, medical conditions through religious education ("a spiritual approach", p.50): claims that might encourage sufferers to defer seeking medical diagnosis and treatment. Moreover, these are claims that are not supported by any evidence presented in this paper that the editor of the European Journal of Education and Pedagogy decided was suitable for publication.


An overview of what is demonstrated, and what is claimed, in the study.

Limitations of peer review

Peer review is not a perfect process: it relies on busy human beings spending time on additional (unpaid) work, and it is only effective if suitable experts can be found that fit with, and are prepared to review, a submission. It is also generally more challenging in the social sciences than in the natural sciences. 1

That said, one sometimes finds papers published in predatory journals where one would expect any intelligent person with a basic education to notice problems without needing any specialist knowledge at all. The study I discuss here is a case in point.

Purpose of the study

Under the heading 'research objectives', the reader is told,

"In general, this journal [article?] attempts to review the construction and testing of Tazkiyatun Nafs [a Soul Purification intervention] to overcome the problem of hallucinatory disorders in student learning in secondary schools. The general objective of this study is to identify the symptoms of hallucinations caused by subtle beings such as jinn and devils among students who are the cause of disruption in learning as well as find solutions to these problems.

Meanwhile, the specific objective of this study is to determine the effect of the use of Tazkiyatun Nafs module on the academic achievement of students with hallucinations.

To achieve the aims and objectives of the study, the researcher will get answers to the following research questions [sic]:

Is it possible to determine the effect of the use of the Tazkiyatun Nafs module on the academic achievement of students with hallucinations?"

Awang, 2022, p.42

I think I can save readers a lot of time regarding the research question by suggesting that, in this study, at least, the answer is no – if only because the research design is completely inadequate to answer the research question. (I should point out that the author comes to the opposite conclusion: e.g., "the approach taken in this study using the Tazkiyatun Nafs module is very suitable for overcoming the problem of this hallucinatory disorder", p.49.)

Indeed, the whole paper is confused in terms of what it is setting out to do, what it actually reports, and what might be concluded. As one example, the general objective of identifying "the symptoms of hallucinations caused by subtle beings such as jinn and devils" (but surely, the hallucinations are the symptoms here?) seems to have been forgotten, or, at least, does not seem to be addressed in the paper. 2


The study assumes that hallucinations are caused by subtle beings such as jinn and devils possessing the students.
(Image by Tünde from Pixabay)

Methodology

So, this seems to be an intervention study.

  • Some students suffer from hallucinations.
  • This is detrimental to their education.
  • It is hypothesised that the hallucinations are caused by supernatural spirits ("subtle beings that lead to hallucinations"), so, a soul purification module might counter this detriment;
  • if so, sufferers engaging with the soul purification module should improve their academic performance;
  • and so the effect of the module is being tested in the study.

Thus we have a kind of experimental study?

No, not according to the author. Indeed, the study only reports data from a small number of unrepresentative individuals with no controls,

"The study design is a case study design that is a qualitative study in nature. This study uses a case study design that is a study that will apply treatment to the study subject to determine the effectiveness of the use of the planned modules and study variables measured many times to obtain accurate and original study results. This study was conducted on hallucination disorders [students suffering from hallucination disorders?] to determine the effectiveness of the Tazkiyatun Nafs module in terms of aspects of student academic achievement."

Awang, 2022, p.42

Case study?

So, the author sees this as a case study. Research methodologies are better understood as clusters of similar approaches rather than unitary categories – but case study is generally seen as naturalistic, rather than involving an intervention by an external researcher. So, case study seems incongruous here. Case study involves the detailed exploration of an instance (of something of interest – a lesson, a school, a course of study, a textbook, …) reported with 'thick description'.

Read about the characteristics of case study research

The case is usually a complex phenomenon which is embedded within a context from which it cannot readily be untangled (for example, a lesson always takes place within a wider context of a teacher working over time with a class on a course of study, within a curricular, and institutional, and wider cultural, context, all of which influence the nature of the specific lesson). So, due to the complex and embedded nature of cases, they are all unique.

"a case study is a study that is full of thoroughness and complex to know and understand an issue or case studied…this case study is used to gain a deep understanding of an issue or situation in depth and to understand the situation of the people who experience it"

Awang, 2022, p.42

A case is usually selected either because that case is of special importance to the researcher (an intrinsic case study – e.g., I studied this school because it is the one I was working in) or because we hope this (unique) case can tell us something about similar (but certainly not identical) other (also unique) cases. In the latter case [sic], an instrumental case study, we are always limited by the extent we might expect to be able to generalise beyond the case.

This limited generalisation might suggest we should not work with a single case, but rather look for a suitably representative sample of all cases: but we sometimes choose case study because the complexity of the phenomena suggests we need to use extensive, detailed data collection and analyses to understand the complexity and subtlety of any case. That is (i.e., the compromise we choose is), we decide we will look at one case in depth because that will at least give us insight into the case, whereas a survey of many cases will inevitably be too superficial to offer any useful insights.

So how does Awang select the case for this case study?

"This study is a case study of hallucinatory disorders. Therefore, the technique of purposive sampling (purposive sampling [sic]) is chosen so that the selection of the sample can really give a true picture of the information to be explored ….

Among the important steps in a research study is the identification of populations and samples. The large group in which the sample is selected is termed the population. A sample is a small number of the population identified and made the respondents of the study. A case or sample of n = 1 was once used to define a patient with a disease, an object or concept, a jury decision, a community, or a country, a case study involves the collection of data from only one research participant…"

Awang, 2022, p.42

Of course, a case study of "a community, or a country" – or of a school, or a lesson, or a professional development programme, or a school leadership team, or a homework policy, or an enrichment activity, or … – would almost certainly be inadequate if it was limited to "the collection of data from only one research participant"!

I do not think this study actually is "a case study of hallucinatory disorders [sic]". Leaving aside the shift from singular ("a case study") to plural ("disorders"), the research does not investigate a/some hallucinatory disorders, but the effect of a soul purification module on academic performance. (Actually, spoiler alert  😉, it does not actually investigate the effect of a soul purification module on academic performance either, but the author seems to think it does.)

If this is a case study, there should be the selection of a case, not a sample. Sometimes we do sample within a case in case study, but only from those identified as part of the case. (For example, if the case was a year group in a school, we may not have resources to interact in depth with several hundred different students.) Perhaps this is pedantry, as the reader likely knows what Awang meant by 'sample' in the paper – but semantics is important in research writing: a sample is chosen to represent a population, whereas the choice of case study is an acknowledgement that generalisation back to a population is not being claimed.

However, if "among the important steps in a research study is the identification of populations" then it is odd that nowhere in the paper is the population of interest actually specified!

Things slip our minds. Perhaps Awang intended to define the population, forgot, and then missed this when checking the text – but, hey, that is just the kind of thing the reviewers and editor are meant to notice! Otherwise this looks very like including material from standard research texts to pay lip-service to the idea that research design needs to be principled, but without really appreciating what the phrases used actually mean. This impression is also given by the descriptions of how data (for example, from interviews) were analysed – descriptions which are not reflected at all in the results section of the paper. (I am not accusing Awang of this, but because of the poor standard of peer review not raising the question, the author is left vulnerable to such an evaluation.)

The only one research participant?

So, what do we know about the "case or sample of n = 1 ", the "only one research participant" in this study?

"The actual respondents in this case study related to hallucinatory disorders were five high school students. The supportive respondents in the case study related to hallucination disorders were five counseling teachers and five parents or guardians of students who were the actual respondents."

Awang, 2022, p.42

It is certainly not impossible that a case could comprise a group of five people – as long as those five make up a naturally bounded group – that is, a group that a reasonable person would recognise as existing as a coherent entity as they clearly had something in common (they were in the same school class, for example; they were attending the same group therapy session, perhaps; they were a friendship group; they were members of the same extended family diagnosed with hallucinatory disorders…something!) There is no indication here of how these five make up a case.

The identification of the participants as a case might have made sense had the participants collectively undertaken the module as a group, but the reader is told: "This study is in the form of a case study. Each practice and activity in the module are done individually" (p.50). Another justification could have been if the module had been offered in one school, and these five participants were the students enrolled in the programme at that time, but as "analysis of the respondents' academic performance was conducted after the academic data of all respondents were obtained from the respective respondent's school" (p.45) it seems they did not attend a single school.

The results tables and reports in the text refer to "respondent 1" to "respondent 4". In case study, an approach which recognises the individuality and inherent value of the particular case, we would usually assign assumed names to research participants, not numbers. But if we are going to use numbers, should there not be a respondent 5?

The other one research participant?

It seems that there is something odd here.

Both the passage above and the abstract refer to five respondents. The results report on four. So what is going on? No explanation of the discrepancy is provided. Perhaps:

  • There only ever were four participants, and the author made a mistake in counting.
  • There only ever were four participants, and the author made a typographical mistake (well, strictly, six typographical mistakes) in drafting the paper, and then missed this in checking the manuscript.
  • There were five respondents and the author forgot to include data on respondent 5 purely by accident.
  • There were five respondents, but the author decided not to report on the fifth deliberately for a reason that is not revealed (perhaps the results did not fit with the desired outcome?)

The significant point is not that there is an inconsistency but that this error was missed by peer reviewers and the editor – if there ever was any genuine peer review. This is the kind of mistake that a school child could spot – so, how is it possible that 'expert reviewers' and 'editorial staff' either did not notice it, or did not think it important enough to query?

Research instruments

Another section of the paper reports the instrumentation used in the study.

"The research instruments for this study were Takziyatun Nafs modules, interview questions, and academic document analysis. All these instruments were prepared by the researcher and tested for validity and reliability before being administered to the selected study sample [sic, case?]."

Awang, 2022, p.42

Of course, it is important to test instruments for validity and reliability (or perhaps authenticity and trustworthiness when collecting qualitative data). But it is also important

  • to tell the reader how you did this
  • to report the outcomes

which seems to be missing (apart from in regard to part of the implemented module – see below). That is, the reader of a research study wants evidence, not simply promises. Simply telling readers you did this is a bit like meeting a stranger who tells you that you can trust them because they are (i.e., say that they are) honest.

Later the reader is told that

"Semi- structured interview questions will be [sic, not 'were'?] developed and validated for the purpose of identifying the causes and effects of hallucinations among these secondary school students…

…this interview process will be [sic, not 'was'] conducted continuously [sic!] with respondents to get a clear and specific picture of the problem of hallucinations and to find the best solution to overcome this disorder using Islamic medical approaches that have been planned in this study

Awang, 2022, pp.43-44

At the very least, this seems to confuse the plan for the research with a report of what was done. (But again, apparently, the reviewers and editorial staff did not think this needed addressing.) This is also confusing as it is not clear how this aspect of the study relates to the intervention. Were the interviews carried out before the intervention to help inform the design of the modules? (Presumably not, as the modules had already been "tested for validity and reliability before being administered to the selected study sample".) Perhaps there are clear and simple answers to such questions – but the reader will not know, because the reviewers and editor did not seem to feel they needed to be posed.

If "Interviews are the main research instrument in this study" (p.43), then one would expect to see examples of the interview schedules – but these are not presented. The paper reports a complex process for analysing interview data, but this is not reflected in the findings reported. The readers is told that the six stage process leads to the identifications and refinement of main and sub-categories. Yet, these categories are not reported in the paper. (But, again, peer reviewers and the editor did not apparently raise this as something to be corrected.) More generally "data  analysis  used  thematic  analysis  methods" (p.44), so why is there no analysis presented in terms of themes? The results of this analysis are simply not reported.

The reader is told that

"This  interview  method…aims to determine the respondents' perspectives, as well as look  at  the  respondents'  thoughts  on  their  views  on  the issues studied in this study."

Awang, 2022, p.44

But there is no discussion of participants' perspectives and views in the findings of the study. 2 Did the peer reviewers and editor not think this needed addressing before publication?

Even more significantly, in a qualitative study where interviews are supposedly the main research instrument, one would expect to see extracts from the interviews presented as part of the findings to support and exemplify claims being made: yet, there are none. (Did this not strike the peer reviewers and editor as odd: presumably they are familiar with the norms of qualitative research?)

The only quotation from the qualitative data (in this 'qualitative' study) I can find appears in the implications section of the paper:

"Are you aware of the importance of education to you? Realize. Is that lesson really important? Important. The success of the student depends on the lessons in school right or not? That's right"

Respondent 3: Awang, 2022, p.49

This seems a little bizarre, if we accept this is, as reported, an utterance from one of the students, Respondent 3. It becomes more sensible if this is actually condensed dialogue:

"Are you aware of the importance of education to you?"

"Realize."

"Is that lesson really important?"

"Important."

"The success of the student depends on the lessons in school right or not?"

"That's right"

It seems the peer review process did not even lead to a suggestion that the material be formatted according to the norms for presenting dialogue in scholarly texts by indicating turns. In any case, if that is typical of the 'interview' technique used in the study then it is highly inadequate, as clearly the interviewer is leading the respondent, and this is more an example of indoctrination than open-ended enquiry.

Random sampling of data

Completely incongruous with the description of the purposeful selection of the participants for a case study is the account of how the assessment data was selected for analysis:

"The  process  of  analysis  of  student  achievement documents is carried out randomly by taking the results of current  examinations  that  have  passed  such  as the  initial examination of the current year or the year before which is closest  to  the  time  of  the  study."

Awang, 2022, p.44

Did the peer reviewers or editor not question the use of the term random here? It is unclear what is meant by 'random' in this context, but clearly if the analysis was based on randomly selected data that would undermine the results.

Validating the soul purification module

There is also a conceptual problem here. The Takziyatun Nafs modules are the intervention materials (part of what is being studied) – so they cannot also be research instruments (used to study them). Surely, if the Takziyatun Nafs modules had been shown to be valid and reliable before carrying out the reported study, as suggested here, then the study would not be needed to evaluate their effectiveness. But, presumably, expert peer reviewers (if there really were any) did not see an issue here.

The reliability of the intervention module

The Takziyatun Nafs modules had three components, and the author reports the second of the three was subjected to tests of validity and reliability. It seems that Awang thinks that this demonstrates the validity and reliability of the complete intervention,

"The second part of this module will go through [sic] the process of obtaining the validity and reliability of the module. Proses [sic] to obtain this validity, a questionnaire was constructed to test the validity of this module. The appointed specialists are psychologists, modern physicians (psychiatrists), religious specialists, and alternative medicine specialists. The validity of the module is identified from the aspects of content, sessions, and activities of the Tazkiyatun Nafs module. While to obtain the value of the reliability coefficient, Cronbach's alpha coefficient method was used. To obtain this Cronbach's alpha coefficient, a pilot test was conducted on 50 students who were randomly selected to test the reliability of this module to be conducted."

Awang, 2022, pp.43-44

Now to unpack this, it may be helpful to briefly outline what the intervention involved (as the paper is open access, anyone can access and read the full details in the report).


From the MGM film 'A Night at the Opera' (1935): "The introduction of the module will elaborate on the introduction, rationale, and objectives of this module introduced"

The description does not start off very helpfully ("The introduction of the module will elaborate on the introduction, rationale, and objectives of this module introduced" (p.43) put me in mind of the Marx brothers: "The party of the first part shall be known in this contract as the party of the first part"), but some key points are,

"the Tazkiyatun Nafs module was constructed to purify the heart of each respondent leading to the healing of hallucinatory disorders. This liver purification process is done in stages…

"the process of cleansing the patient's soul will be done …all the subtle beings in the patient will be expelled and cleaned and the remnants of the subtle beings in the patient will be removed and washed…

The second process is the process of strengthening and the process of purification of the soul or heart of the patient …All the mazmumah (evil qualities) that are in the heart must be discarded…

The third process is the process of enrichment and the process of distillation of the heart and the practices performed. In this process, there will be an evaluation of the practices performed by the patient as well as the process to ensure that the patient is always clean from all the disturbances and disturbances [sic] of subtle beings to ensure that students will always be healthy and clean from such disturbances…

Awang, 2022, p.45, p.43

Quite how this process of exorcising and distilling and cleansing will occur is not entirely clear (and if the soul is equated with the heart, how is the liver involved?), but it seems to involve reflection and prayer and contemplation of scripture – certainly a very personal and therapeutic process.

And yet its validity and reliability was tested by giving a questionnaire to 50 students randomly selected (from the unspecified population, presumably)? No information is given on how a random selection was made (Taber, 2013) – which allows a reader to be very sceptical that this actually was a random sample from the (un?)identified population, and not just an arbitrary sample of 50 students. (So, that is twice the word 'random' is used in the paper when it seems inappropriate.)

It hardly matters here, as clearly neither the validity nor the reliability of a spiritual therapy can be judged from a questionnaire (especially when administered to people who have never undertaken the therapy). In any case, the "reliability coefficient" obtained from an administration of a questionnaire ONLY applies to that sample on that occasion. So, the statistic could not apply to the four participants in the study. And, in any case, the result is not reported, so the reader has no idea what the value of Cronbach's alpha was (but then, this was described as a qualitative study!)

Moreover, Cronbach's alpha only indicates the internal coherence of the items on a scale (Taber, 2019): so, it only indicates whether the set of questions included in the questionnaire seem to be accessing the same underlying construct in motivating the responses of those surveyed across the set of items. It gives no information about the reliability of the instrument (i.e., whether it would give the same results on another occasion).
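For readers unfamiliar with the statistic, a minimal sketch (with invented responses) may make clear why alpha characterises one sample's responses rather than the instrument itself: it is computed entirely from the item and total-score variances of a single administration.

```python
# Minimal sketch of Cronbach's alpha (responses invented): the statistic
# is computed entirely from one administration's item and total-score
# variances, so it cannot speak to other samples or occasions.
from statistics import pvariance

def cronbach_alpha(items):
    # items: one list of responses per item, aligned by respondent
    k = len(items)
    sum_item_vars = sum(pvariance(item) for item in items)
    totals = [sum(resp) for resp in zip(*items)]
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

# Three Likert-style items answered by five respondents (hypothetical)
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # describes this sample only
```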

This approach to testing validity and reliability is then completely inappropriate and unhelpful. So, even if the outcomes of the testing had been reported (and they are not) they would not offer any relevant evidence. Yet it seems that peer reviewers and editor did not think to question why this section was included in the paper.

Ethical issues

A study of this kind raises ethical issues. It may well be that the research was carried out in an entirely proper and ethical manner, but it is usual in studies with human participants ('human subjects') to make this clear in the published report (Taber, 2014b). A standard issue is whether the participants gave voluntary, informed, consent. This would mean that they were given sufficient information about the study at the outset to be able to decide if they wished to participate, and were under no undue pressure to do so. The 'respondents' were school students: if they were considered minors in the research context (and oddly for a 'case study' such basic details as age and gender are not reported) then parental permission would also be needed, again subject to sufficient briefing and no duress.

However, in this specific research there are also further issues due to the nature of the study. The participants were subject to medical disorders, so how did the researcher obtain information about, and access to, the students without medical confidentiality being broken? Who were the 'gatekeepers' who provided access to the children and their personal data? The researcher also obtained assessment data "from the class teacher or from the Student Affairs section of the student's school" (p.44), so it is important to know that students (and parents/guardians) consented to this. Again, peer review does not seem to have identified this as an issue to address before publication.

There is also the major underlying question about the ethics of a study when recognising that these students were (or could be, as details are not provided) suffering from serious medical conditions, but employing religious education as a treatment ("This method of treatment is to help respondents who suffer from hallucinations caused by demons or subtle beings", p.44). Part of the theoretical framework underpinning the study is the assumption that what is being addressed is "the problem of hallucinations caused by the presence of ethereal beings…" (p.43) yet it is also acknowledged that,

"Hallucinatory disorders in learning that will be emphasized in this study are due to several problems that have been identified in several schools in Malaysia. Such disorders are psychological, environmental, cultural, and sociological disorders. Psychological disorders such as hallucinatory disorders can lead to a more critical effect of bringing a person prone to Schizophrenia. Psychological disorders such as emotional disorders and psychiatric disorders. …Among the causes of emotional disorders among students are the school environment, events in the family, family influence, peer influence, teacher actions, and others."

Awang, 2022, p.41

There seem to be three ways of understanding this apparent discrepancy, which I might gloss:

  1. there are many causes of conditions that involve hallucinations, including, but not only, possession by evil or mischievous spirits;
  2. the conditions that lead to young people having hallucinations may be understood at two complementary levels, at a spiritual level in terms of a need for inner cleansing and exorcising of subtle beings, and in terms of organic disease or conditions triggered by, for example, social and psychological factors;
  3. in the introduction the author has relied on various academic sources to discuss the nature of the phenomenon of students having hallucinations, but he actually has a working assumption that is completely different: hallucinations are due to the presence of jinn or other spirits.

I do not think it is clear which of these positions is being taken by the study's author.

  1. In the first case it would be necessary to identify which causes are present in potential respondents and only recruit those suffering possession for this study (which does not seem to have been done);
  2. In the second case, spiritual treatment would need to complement medical intervention (which would completely undermine the validity of the study as medical treatments for the underlying causes of hallucinations are likely to be the cause of hallucinations ceasing, not the tested intervention);
  3. The third position is clearly problematic in terms of academic scholarship as it is either completely incompetent or deliberately disregards academic norms that require the design of a study to reflect the conceptual framework set out to motivate it.

So, was this tested intervention implemented instead of or alongside formal medical intervention?

  • If it was alongside medical treatment, then that raises a major confound for the study.
  • Yet it would clearly be unacceptable to deny sufferers indicated medical treatment in order to test an educational intervention that is in effect a form of exorcism.

Again, it may be there are simple and adequate responses to these questions (although here I really cannot see what they might be), but unfortunately it seems the journal referees and editor did not think to ask for them.  

Findings


Results tables presented in Awang, 2022 (p.45) [Published with a creative commons licence allowing reproduction]: "Based on the findings stated in Table I show that serial respondents experienced a decline in academic achievement while they face the problem of hallucinations. In contrast to Table II which shows an improvement in students' academic achievement after hallucinatory disorders can be resolved." If we assume that columns in the second table have been mislabelled, then it seems the school performance of these four students suffered while they were suffering hallucinations, but improved once they recovered. From this, we can infer…?

The key findings presented concern academic performance at school. Core results are presented in tables I and II. Unfortunately these tables are not consistent as they report contradictory results for the academic performance of students before and during periods when they had hallucinations.

They can be made consistent if the reader assumes that two of the columns in table II are mislabelled. If the reader assumes that the column labelled 'before disruption' actually reports the performance 'during disruption' and that the column actually labelled 'during disruption' is something else, then they become consistent. For the results to tell a coherent story and agree with the author's interpretation this 'something else' presumably should be 'after disruption'.

This is a very unfortunate error – and moreover one that is obvious to any careful reader. (So, why was it not obvious to the referees and editor?)

As well as looking at these overall scores, other assessment data is presented separately for each of respondents 1 to 4. These sections comprise presentations of information about grades and class positions, mixed with claims about the effects of the intervention. These claims are not based on any evidence, and in many cases are conclusions about 'respondents' in general even though they are placed in sections considering the academic assessment data of individual respondents. So, there are a number of problems with these claims:

  • they are of the nature of conclusions, but appear in the section presenting the findings;
  • they are about the specific effects of the intervention that the author assumes has influenced academic performance, not the data analysed in these sections;
  • they are completely unsubstantiated as no data or analysis is offered to support them;
  • often they make claims about 'respondents' in general, although as part of the consideration of data from individual learners.

Despite this, the paper passed peer-review and editorial scrutiny.

Rhetorical research?

This paper seems to be an example of a kind of 'rhetorical research', where a researcher is so convinced about their pre-existing theoretical commitments that they simply assume they have demonstrated them. Here the assumptions seem to be:

  1. Recovering from suffering hallucinations will increase student performance
  2. Hallucinations are caused by jinn and devils
  3. A spiritual intervention will expel jinn and devils
  4. So, a spiritual intervention will cure hallucinations
  5. So, a spiritual intervention will increase student performance

The researcher provided a spiritual intervention, and student performance increased, so it is assumed that the scheme is demonstrated. The data presented is certainly consistent with the scheme, but being consistent with it is not the same as supporting it. Awang provides evidence that student performance improved in four individuals after they had received the intervention – but no evidence is offered to demonstrate the assumed mechanism.

A gardener might think that complimenting seedlings will cause them to grow. Perhaps she praises her seedlings every day, and they do indeed grow. Are we persuaded about the efficacy of her method, or might we suspect another cause at work? Would the peer-reviewers and editor of the European Journal of Education and Pedagogy be persuaded this demonstrated that compliments cause plant growth? On the evidence of this paper, perhaps they would.

This is what Awang tells readers about the analysis undertaken:

"Each student respondent involved in this study [sic, presumably not, rather the researcher] will use the analysis of the respondent's performance to determine the effect of hallucination disorders on student achievement in secondary school is accurate.

The elements compared in this analysis are as follows: a) difference in mean percentage of achievement by subject, b) difference in grade achievement by subject and c) difference in the grade of overall student achievement. All academic results of the respondents will be analyzed as well as get the mean of the difference between the performance before, during, and after the respondents experience hallucinations.

These results will be used as research material to determine the accuracy of the use of the Tazkiyatun Nafs Module in solving the problem of hallucinations in school and can improve student achievement in academic school."

Awang, 2022, p.45

There is clearly a large jump between the analysis outlined in the second paragraph here, and testing the study hypotheses as set out in the final paragraph. But the author does not seem to notice this (and more worryingly, nor do the journal's reviewers and editor).

So interleaved into the account of findings discussing "mean percentage of achievement by subject…difference in grade achievement by subject…difference in the grade of overall student achievement" are totally unsupported claims. Here is an example for Respondent 1:

"Based on the findings of the respondent's achievement in the  grade  for  Respondent  1  while  facing  the  problem  of hallucinations  shows  that  there  is  not  much  decrease  or deterioration  of  the  respondent's  grade.  There  were  only  4 subjects who experienced a decline in grade between before and  during  hallucination  disorder.  The  subjects  that experienced  decline  were  English,  Geography,  CBC, and Civics.  Yet  there  is  one  subject  that  shows  a  very  critical grade change the Civics subject. The decline occurred from grade A to grade E. This shows that Civics education needs to be given serious attention in overcoming this problem of decline. Subjects experiencing this grade drop were subjects involving  emotion,  language,  as  well  as  psychomotor fitness.  In  the  context  of  psychology,  unstable  emotional development  leads  to  a  decline  in the psychomotor  and emotional development of respondents.

After  the  use  of  the  Tazkiyatun  Nafs  module  in overcoming  this  problem,  hallucinatory  disorders  can  be overcome.  This  situation  indicates  the  development  of  the respondents  during  and  after  experiencing  hallucinations after  practicing  the  Tazkiyatun  Nafs  module.  The  process that takes place in the Tzkiyatun Nafs module can help the respondent  to  stabilize  his  emotions  and  psyche  for  the better. From the above findings there were 5 subjects who experienced excellent improvement in grades. The increase occurred in English, Malay, Geography, and Civics subjects. The best improvement is in the subject of Civic education from grade E to grade B. The improvement in this language subject  shows  that  the  respondents'  emotions  have stabilized.  This  situation  is  very  positive  and  needs  to  be continued for other subjects so that respondents continue to excel in academic achievement in school.""

Awang, 2022, p.45

The claims about the module's effects – the opening sentences of the second paragraph quoted here – are interjected completely gratuitously. They do not logically fit in the sequence. They are not part of the analysis of school performance. They are not based on any evidence presented in this section. Indeed, nor are they based on any evidence presented anywhere else in the paper!

This pattern is repeated in discussing other aspects of respondents' school performance. Although there is mention of other factors which seem especially pertinent to the dip in school grades ("this was due to the absence of the respondents to school during the day the test was conducted", p.46; "it was an increase from before with no marks due to non-attendance at school", p.46) the discussion of grades is interspersed with (repetitive) claims about the effects of the intervention for which no evidence is offered.


§: Differences in Respondents' Grade Achievement by Subject

  • Respondent 1: "After the use of the Tazkiyatun Nafs module in overcoming this problem, hallucinatory disorders can be overcome. This situation indicates the development of the respondents during and after experiencing hallucinations after practicing the Tazkiyatun Nafs module. The process that takes place in the Tzkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better." (p.45)
  • Respondent 2: "After the use of the Tazkiyatun Nafs module as a soul purification module, showing the development of the respondents during and after experiencing hallucination disorders is very good. The process that takes place in the Tzkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better." (p.46)
  • Respondent 3: "The process that takes place in the Tazkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better" (p.46)
  • Respondent 4: "The process that takes place in the Tazkiyatun Nafs module can help the respondent to stabilize his emotions and psyche for the better." (p.46)

§: Differences in Respondent Grades according to Overall Academic Achievement

  • Respondent 1: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module…In general, the use of Tazkiyatun Nafs module successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (pp.46-7)
  • Respondent 2: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module. … This excellence also shows that the respondents have recovered from hallucinations after practicing the methods found in the Tazkiayatun Nafs module that has been introduced. In general, the use of the Tazkiyatun Nafs module successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (p.47)
  • Respondent 3: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module…In general, the use of the Tazkiyatun Nafs module successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (p.47)
  • Respondent 4: "Based on the findings of the study after the hallucination disorder was overcome showed that the development of the respondents was very positive after going through the treatment process using the Tazkiyatun Nafs module…In general, the use of the Tazkiyatun Nafs module has successfully changed the learning lifestyle and achievement of the respondents from poor condition to good and excellent achievement." (p.47)
Unsupported claims made within findings sections reporting analyses of individual student academic grades: note (a) how these statements included in the analysis of individual school performance data from four separate participants (in a case study – a methodology that recognises and values diversity and individuality) are very similar across the participants; (b) claims about 'respondents' (plural) are included in the reports of findings from individual students.

Awang summarises what he claims the analysis of 'differences in respondents' grade achievement by subject' shows:

"The use of the Tazkiyatun Nafs module in this study helped the students improve their respective achievement grades. Therefore, this soul purification module should be practiced by every student to help them in stabilizing their soul and emotions and stay away from all the disturbances of the subtle beings that lead to hallucinations"

Awang, 2022, p.46

And, on the next page, Awang summarises what he claims the analysis of 'differences in respondent grades according to overall academic achievement' shows:

"The use of the Tazkiyatun Nafs module in this study helped the students improve their respective overall academic achievement. Therefore, this soul purification module should be practiced by every student to help them in stabilizing the soul and emotions as well as to stay away from all the disturbances of the subtle beings that lead to hallucination disorder."

Awang, 2022, p.47

So, the analysis of grades is said to demonstrate the value of the intervention, and indeed Awang considers this grounds to extend the intervention beyond the four participants: not just to others suffering hallucinations, but to "every student". The peer review process seems not to have raised queries about

  • the unsupported claims,
  • the confusion of recommendations with findings (it is normal to keep to results in a findings section), nor
  • the unwarranted generalisation from four hallucination sufferers to all students, whether healthy or not.

Interpreting the results

There seem to be two stories that can be told about the results:

When the four students suffered hallucinations, this led to a deterioration in their school performance. Later, once they had recovered from the episodes of hallucinations, their school performance improved.  

Narrative 1

Now narrative 1 relies on a very substantial implied assumption – which is that the numbers presented as school performance are comparable over time. So, a control would be useful: such as what happened to the performance scores of other students in the same classes over the same time period. It seems likely they would not have shown the same dip – unless the dip was related to something other than hallucinations – such as the well-recognised dip after long school holidays, or some cultural distraction (a major sports tournament; fasting during Ramadan; political unrest; a pandemic…). Without such a control the evidence is suggestive (after all, being ill, and missing school as a result, is likely to lead to a dip in school performance, so the findings are not surprising), but inconclusive.

Intriguingly, the author tells readers that "student achievement statistics from the beginning of the year to the middle of the current [sic, published in 2022] year in secondary schools in Northern Peninsular Malaysia that have been surveyed by researchers show a decline (Sabri, 2015 [sic])" (p.42), but this is not considered in relation to the findings of the study.

When the four students suffered hallucinations, this led to a deterioration in their school performance. Later, as a result of undergoing the soul purification module, their school performance improved.  

Narrative 2

Clearly narrative 2 suffers from the same limitation as narrative 1. However, it also demands an extra step in making an inference. I could re-write this narrative:

When the four students suffered hallucinations, this led to a deterioration in their school performance. Later, once they had recovered from the episodes of hallucinations, their school performance improved. 
AND
the recovery was due to engagement with the soul purification module.

Narrative 2'.

That is, even if we accept narrative 1 as likely, to accept narrative 2 we would also need to be convinced that:

  • a) sufferers from medical conditions leading to hallucinations do not suffer periodic attacks with periods of remission in between; or
  • b) episodes of hallucinations cannot be due to one-off events (emotional trauma, T.I.A. {transient ischaemic attack or mini-strokes},…) that resolve naturally in time; or
  • c) sufferers from medical conditions leading to hallucinations do not find they resolve due to maturation; or
  • d) the four participants in this study did not undertake any change in lifestyle (getting more sleep, ceasing eating strange fungi found in the woods) unrelated to the intervention that might have influenced the onset of hallucinations; or
  • e) the four participants in this study did not receive any medical treatment independent of the intervention (e.g., prescribed medication to treat migraine episodes) that might have influenced the onset of hallucinations

Despite this study being supposedly a case study (where the expectation is there should be 'thick description' of the case and its context), there is no information to help us exclude such options. We do not know the medical diagnoses of the conditions causing the participants' hallucinations, or anything about their lives or any medical treatment that may have been administered. Without such information, the analysis that is provided is useless for answering the research question.

In effect, regardless of all the other issues raised, the key problem is that the research design is simply inadequate to test the research question. But it seems the referees and editor did not notice this shortcoming.

Alleged implications of the research

After presenting his results Awang draws various implications, and makes a number of claims about what had been found in the study:

  • "After the students went through the treatment session by using the Tazkiayatun Nafsmodule to treat hallucinations, it showed a positive effect on the student respondents. All this was certified by the expert, the student's parents as well as the  counselor's  teacher." (p.48)
  • "Based on these findings, shows that hallucinations are very disturbing to humans and the appropriate method for now to solve this problem is to use the Tazkiyatun Nafs Module." (p.48)
  • "…the use of the Tazkiyatun Nafs module while the  respondent  is  suffering  from  hallucination  disorder  is very  appropriate…is very helpful to the respondents in restoring their minds and psyche to be calmer and healthier. These changes allow  students  to  focus  on  their  studies  as  well  as  allow them to improve their academic performance better." (p.48)
  • "The use of the Tazkiyatun Nafs Module in this study has led to very positive changes there are attitudes and traits of students  who  face  hallucinations  before.  All  the  negative traits  like  irritability, loneliness,  depression,etc.  can  be overcome  completely." (p.49)
  • "The personality development of students is getting better and perfect with the implementation of the Tazkiaytun Nafs module in their lives." (p.49)
  • "Results  indicate that  students  who  suffer  from  this hallucination  disorder are in  a  state  of  high  depression, inactivity, fatigue, weakness and pain,and insufficient sleep." (p.49)
  • "According  to  the  findings  of  this study,  the  history  of  this  hallucination  disorder  started in primary  school  and  when  a  person  is  in  adolescence,  then this  disorder  becomes  stronger  and  can  cause  various diseases  and  have  various  effects  on  a  person who  is disturbed." (p.50)

Given the range of interview data that Awang claims to have collected and analysed, at least some of the claims here are possibly supported by the data. However, none of this data and analysis is available to the reader. 2 These claims are not supported by any evidence presented in the paper. Yet peer reviewers and the editor who read the manuscript seem to feel it is entirely acceptable to publish such claims in a research paper, and not present any evidence whatsoever.

Summing up

In summary: as far as these four students were concerned (but not perhaps the fifth participant?), there did seem to be a relationship between periods of experiencing hallucinations and lower school performance (perhaps explained by such factors as "absenteeism to school during the day the test was conducted", p.46),

"the performance shown by students who face chronic hallucinations is also declining and  declining.  This  is  all  due  to  the  actions  of  students leaving the teacher's learning and teaching sessions as well as  not  attending  school  when  this  hallucinatory  disorder strikes.  This  illness or  disorder  comes  to  the  student suddenly  and  periodically.  Each  time  this  hallucination  disease strikes the student causes the student to have to take school  holidays  for  a  few  days  due  to  pain  or  depression"

Awang, 2022, p.42

However,

  • these four students do not represent any wider population;
  • there is no information about the specific nature, frequency, intensity, etcetera, of the hallucinations or diagnoses in these individuals;
  • there was no statistical test of significance of changes; and
  • there was no control condition to see if performance dips were experienced by others not experiencing hallucinations at the same time.

Once they had recovered from the hallucinations (and it is not clear on what basis that judgement was made) their scores improved.

The author would like us to believe that the relief from the hallucinations was due to the intervention, but this seems to be (quite literally) an act of faith 3 as no actual research evidence is offered to show that the soul purification module actually had any effect. It is of course possible the module did have an effect (whether for the conjectured or other reasons – such as simply offering troubled children some extra study time in a calm and safe environment and special attention – or because of an expectancy effect if the students were told by trusted authority figures that the intervention would lead to the purification of their hearts and the healing of their hallucinatory disorder) but the study, as reported, offers no strong grounds to assume it did have such an effect.

An irresponsible journal

As hallucinations are often symptoms of organic disease affecting blood supply to the brain, there is a major question of whether treating the condition by religious instruction is ethically sound. For example, hallucinations may indicate a tumour growing in the brain. Yet, if the module was only a complement to proper medical attention, a reader may prefer to suspect that any improvement in the condition (and consequent increased engagement in academic work) may have been entirely unrelated to the module being evaluated.

Indeed, a published research study that claims that soul purification is a suitable treatment for medical conditions presenting with hallucinations is potentially dangerous as it could lead to serious organic disease going untreated. If Awang's recommendations were widely taken up in Malaysia such that students with serious organic conditions were only treated for their hallucinations by soul purification rather than with medication or by surgery it would likely lead to preventable deaths. For a research journal to publish a paper with such a conclusion, where any qualified reviewer or editor could easily see the conclusion is not warranted, is irresponsible.

As the journal website points out,

"The process of reviewing is considered critical to establishing a reliable body of research and knowledge. The review process aims to make authors meet the standards of their discipline, and of science in general."

https://www.ej-edu.org/index.php/ejedu/about

So, why did the European Journal of Education and Pedagogy not subject this submission to meaningful review to help the author of this study meet the standards of the discipline, and of science in general?



Notes:

1 In mature fields in the natural sciences there are recognised traditions ('paradigms', 'disciplinary matrices') in any active field at any time. In general (and of course, there will be exceptions):

  • at any historical time, there is a common theoretical perspective underpinning work in a research programme, aligned with specific ontological and epistemological commitments;
  • at any historical time, there is a strong alignment between the active theories in a research programme and the acceptable instrumentation, methodology and analytical conventions.

Put more succinctly: in a mature research field there is generally broad agreement on how a phenomenon is to be understood, how to go about investigating it, and how to interpret data as research evidence.

This is generally not the case in educational research – which is in part at least due to the complexity and, so, multi-layered nature, of the phenomena studied (Taber, 2014a): phenomena such as classroom teaching. So, in reviewing educational papers, it is sometimes necessary to find different experts to look at the theoretical and the methodological aspects of the same submission.


2 The paper is very strange in that the introductory sections and the conclusions and implications sections have a very broad scope, but the actual research results are restricted to a very limited focus: analysis of school test scores and grades.

It is as if (and could well be that) a dissertation with a number of evidential strands has been reduced to a paper drawing upon only one aspect of the research evidence, but with material from other sections of the dissertation retained unchanged from the original, broader study.


3 Readers are told that

"All  these  acts depend on the sincerity of the medical researcher or fortune-teller seeking the help of Allah S.W.T to ensure that these methods and means are successful. All success is obtained by the permission of Allah alone"

Awang, 2022, p.43


Quasi-experiment or crazy experiment?

Trustworthy research findings are conditional on getting a lot of things right


Keith S. Taber


A good many experimental educational research studies that compare treatments across two classes or two schools are subject to potentially confounding variables that invalidate study findings and make any consequent conclusions and recommendations untrustworthy.

I was looking for research into the effectiveness of P-O-E (predict-observe-explain) pedagogy, a teaching technique that is believed to help challenge learners' alternative conceptions and support conceptual change.

Read about the predict-observe-explain approach



One of the papers I came across reported identifying, and then using P-O-E to respond to, students' alternative conceptions. The authors reported that

The pre-test revealed a number of misconceptions held by learners in both groups: learners believed that salts 'disappear' when dissolved in water (37% of the responses in the 80% from the pre-test) and that salt 'melts' when dissolved in water (27% of the responses in the 80% from the pre-test).

Kibirige, Osodo & Tlala, 2014, p.302

The references to "in the 80%" did not seem to be explained anywhere. Perhaps only 80% of students responded to the open-ended questions included as part of the assessment instrument (discussed below), so the authors gave the incidence as a proportion of those responding? Ideally, research reports are explicit about such matters avoiding the need for readers to speculate.

The authors concluded from their research that

"This study revealed that the use of POE strategy has a positive effect on learners' misconceptions about dissolved salts. As a result of this strategy, learners were able to overcome their initial misconceptions and improved on their performance….The implication of these results is that science educators, curriculum developers, and textbook writers should work together to include elements of POE in the curriculum as a model for conceptual change in teaching science in schools."

Kibirige, Osodo & Tlala, 2014, p.305

This seemed pretty positive. As P-O-E is an approach which is consistent with 'constructivist' thinking that recognises the importance of engaging with learners' existing thinking I am probably biased towards accepting such conclusions. I would expect techniques such as P-O-E, when applied carefully in suitable curriculum contexts, to be effective.

Read about constructivist pedagogy

Yet I also have a background in teaching research methods and in acting as a journal editor and reviewer – so I am not going to trust the conclusion of a research study without having a look at the research design.


All research findings are subject to caveats and provisos: good practice in research writing is for the authors to discuss them – but often they are left unmentioned for readers to spot. (Read about drawing conclusions from studies)


Kibirige and colleagues describe their study as a quasi-experiment.

Experimental research into teaching approaches

If one wants to see if a teaching approach is effective, then it seems obvious that one needs to do an experiment. If we can experimentally compare different teaching approaches we can find out which are more effective.

An experiment allows us to make a fair comparison by 'control of variables'.

Read about experimental research

Put very simply, the approach might be:

  • Identify a representative sample of an identified population
  • Randomly assign learners in the sample to either an experimental condition or a control condition
  • Set up two conditions that are alike in all relevant ways, apart from the independent variable of interest
  • After the treatments, apply a valid instrument to measure learning outcomes
  • Use inferential statistics to see if any difference in outcomes across the two conditions reaches statistical significance
  • If it does, conclude that
    • the effect is likely to be due to the difference in treatments
    • and will apply, on average, to the population that has been sampled
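
To make the logic concrete, here is a minimal simulation of that idealised design (all numbers invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A (hypothetical) sample of 100 students, randomly assigned to conditions.
student_ids = np.arange(100)
rng.shuffle(student_ids)
experimental_ids, control_ids = student_ids[:50], student_ids[50:]

# Both groups are taught identically apart from the independent variable,
# then measured with a (valid!) post-test -- here the scores are simulated.
scores_experimental = rng.normal(loc=62, scale=10, size=experimental_ids.size)
scores_control = rng.normal(loc=58, scale=10, size=control_ids.size)

# Inferential statistics on the difference in outcomes.
t_statistic, p_value = stats.ttest_ind(scores_experimental, scores_control)
print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")
# Only because assignment was random can a small p-value be read as 'the
# difference in treatments probably caused the difference in outcomes'.
```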

Now, I expect anyone reading this who has worked in schools, and certainly anyone with experience in social research (such as research into teaching and learning), will immediately recognise that in practice it is very difficult to actually set up an experiment into teaching which fits this description.

Nearly always (if indeed not always!) experiments to test teaching approaches fall short of this ideal model to some extent. This does not mean such studies cannot be useful – especially where there are many of them, with compensatory strengths and weaknesses, offering similar findings (Taber, 2019a) – but one needs to ask how closely published studies fit the ideal of a good experiment. Work in high quality journals is often expected to offer readers guidance on this, but readers should check for themselves to see if they find a study convincing.

So, how convincing do I find this study by Kibirige and colleagues?

The sample and the population

If one wishes a study to be informative about a population (say, chemistry teachers in the UK; or 11-12 year-olds in state schools in Western Australia; or pharmacy undergraduates in the EU; or whatever) then it is important to either include the full population in the study (which is usually only feasible when the population is a very limited one, such as graduate students in a single university department) or to ensure the sample is representative.

Read about populations of interest in research

Read about sampling a population

Kibirige and colleagues refer to their participants as a sample

"The sample consisted of 93 Grade 10 Physical Sciences learners from two neighbouring schools (coded as A and B) in a rural setting in Moutse West circuit in Limpopo Province, South Africa. The ages of the learners ranged from 16 to 20 years…The learners were purposively sampled."

Kibirige, Osodo & Tlala, 2014, p.302

Purposive sampling means selecting participants according to some specific criteria, rather than sampling a population randomly. It is not entirely clear precisely what the authors mean by this here – which characteristics they selected for. Also, there is no statement of the population being sampled – so the reader is left to guess what population the sample is a sample of. Perhaps "Grade 10 Physical Sciences" students – but, if so, universally, or in South Africa, or just within Limpopo Province, or indeed just the Moutse West circuit? Strictly the notion of a sample is meaningless without reference to the population being sampled.

A quasi-experiment

A key notion in experimental research is the unit of analysis

"An experiment may, for example, be comparing outcomes between different learners, different classes, different year groups, or different schools…It is important at the outset of an experimental study to clarify what the unit of analysis is, and this should be explicit in research reports so that readers are aware what is being compared."

Taber, 2019a, p.72

In a true experiment the 'units of analysis' (which in different studies may be learners, teachers, classes, schools, exam. papers, lessons, textbook chapters, etc.) are randomly assigned to conditions. Random assignment makes systematic differences between the groups unlikely, and so allows inferential statistics to be used to compare measures made in the different conditions and determine whether any difference in outcomes is statistically significant.

Random assignment is sometimes possible in educational research, but often researchers are only able to work with existing groupings.

Kibirige, Osodo & Tlala describe their approach as using a quasi-experimental design as they could not assign learners to groups, but only compare between learners in two schools. This is important, as it means that the 'units of analysis' are not the individual learners, but the groups: in this study one group of students in one school (n=1) is being compared with another group of students in a different school (n=1).

The authors do not make it clear whether they assigned the schools to the two teaching conditions randomly – or whether some other criterion was used. For example, if they chose school A to be the experimental school because they knew the chemistry teacher in the school was highly skilled, always looking to improve her teaching, and open to new approaches; whereas the chemistry teacher in school B had a reputation for wishing to avoid doing more than was needed to be judged competent – that would immediately invalidate the study.

Compensating for not using random assignment

When it is not possible to randomly assign learners to treatments, researchers can (a) use statistics that take into account measurements on each group made before, as well as after, the treatments (that is, a pre-test – post-test design); and (b) offer evidence to persuade readers that the groups are equivalent before the experiment. Kibirige, Osodo and Tlala seek to use both of these steps.

Do the groups start as equivalent?

Kibirige, Osodo and Tlala present evidence from the pre-test to suggest that the learners in the two groups are starting at about the same level. In practice, pre-tests seldom lead to identical outcomes for different groups. It is therefore common to use inferential statistics to test for whether there is a statistically significant difference between pre-test scores in the groups. That could be reasonable, if there was an agreed criterion for deciding just how close scores should be to be seen as equivalent. In practice, many researchers only check that the differences do not reach statistical significance at the level of probability <0.05: that is, they look to see if there are strong differences and, if not, declare this to be (or implicitly treat this as) equivalence!

This is clearly an inadequate measure of equivalence as it will only filter out cases where there is a difference so large it is found to be very unlikely to be a chance effect.
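
If researchers genuinely want to argue that two groups start at about the same level, the appropriate tool is an equivalence test, such as 'two one-sided tests' (TOST) against a margin declared in advance. Here is a minimal sketch with invented scores (I am not suggesting the authors used, or were obliged to use, these particular numbers or margin):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=50, scale=10, size=30)  # hypothetical pre-test scores
group_b = rng.normal(loc=47, scale=10, size=30)

# The conventional (inadequate) check: is there a significant *difference*?
_, p_difference = stats.ttest_ind(group_a, group_b)

# TOST: can we reject the hypothesis that the means differ by MORE than a
# pre-declared margin? Shifting one sample by the margin turns each
# one-sided equivalence bound into an ordinary one-sided t-test.
margin = 2.0
_, p_lower = stats.ttest_ind(group_a, group_b - margin, alternative="greater")
_, p_upper = stats.ttest_ind(group_a, group_b + margin, alternative="less")
p_equivalence = max(p_lower, p_upper)

print(f"difference test:  p = {p_difference:.3f}")
print(f"equivalence test: p = {p_equivalence:.3f}")
```

A non-significant difference test can happily coexist with a failed equivalence test: 'not significantly different' and 'demonstrably equivalent' are different claims.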


If we want to make sure groups start as 'equivalent', we cannot simply look to exclude the most blatant differences. (Original image by mcmurryjulie from Pixabay)

See 'Testing for initial equivalence'


We can see this in Kibirige and colleagues' study, where the researchers list mean scores and standard deviations for each question on the pre-test. They report that:

"The results (Table 1) reveal that there was no significant difference between the pre-test achievement scores of the CG [control group] and EG [experimental group] for questions (Appendix 2). The p value for these questions was greater than 0.05."

Kibirige, Osodo & Tlala, 2014, p.302

Now this paper is published "licensed under Creative Commons Attribution 3.0 License" which means I am free to copy from it here.



According to the results table, several of the items (1.2, 1.4, 2.6) did lead to statistically significantly different response patterns in the two groups.

Most of these questions (1.1-1.4; 2.1-2.8; discussed below) are objective questions, so although no marking scheme was included in the paper, it seems they were marked as correct or incorrect.

So, let's take as an example question 2.5 where readers are told that there was no statistically significant difference in the responses of the two groups. The mean score in the control group was 0.41, and in the experimental group was 0.27. Now, the paper reports that:

"Forty nine (49) learners (31 males and 18 females) were from school A and acted as the experimental group (EG) whereas the control group (CG) consisted of 44 learners (18 males and 26 females) from school B."

Kibirige, Osodo & Tlala, 2014, p.302

So, according to my maths,


                          Correct responses    Incorrect responses
School A (49 students)    (0.27 ➾) 13          36
School B (44 students)    (0.41 ➾) 18          26

"The achievement of the EG and CG from pre-test results were not significantly different which suggest that the two groups had similar understanding of concepts" (p.305).

Pre-test results for an item with no statistically significant difference between groups (offered as evidence of 'similar' levels of initial understanding in the two groups)

While, technically, there may have been no statistically significant difference here, I think inspection is sufficient to suggest this does not mean the two groups were initially equivalent in terms of performance on this item.
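
Anyone wishing to check can put these back-calculated counts to the test – noting that the paper does not say which test was applied to what, so this is my reconstruction, not the authors' analysis:

```python
from scipy import stats

# Counts back-calculated from the reported means (0.27 of 49; 0.41 of 44).
#                    correct, incorrect
school_a_counts = [13, 36]  # experimental group
school_b_counts = [18, 26]  # control group

odds_ratio, p_value = stats.fisher_exact([school_a_counts, school_b_counts])
print(f"Fisher's exact test: p = {p_value:.2f}")
# A p-value above 0.05 here would only mean a gap this size could plausibly
# arise by chance; it would not show that 27% and 41% correct are 'equivalent'.
```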


Data that is normally distributed falls on a 'bell-shaped' curve

(Image by mcmurryjulie from Pixabay)


Inspection of this graphic also highlights something else. Student's t-test (used by the authors to produce the results in their table 1) is a parametric test. That means it can only be used when the data fit certain criteria: the data should be randomly sampled (not true here) and normally distributed, that is, distributed in a bell-shaped Gaussian curve (as in the image in the blue circle above). If Kibirige, Osodo & Tlala were applying the t-test to data distributed as in my graphic above (a binary distribution where answers were either right or wrong) then the test was invalid.

So, to summarise, the authors suggest there "was no significant difference between the pre-test achievement scores of the CG and EG for questions", although sometimes there was (according to their table); and they used the wrong test to check for this; and in any case lack of statistical significance is not a sufficient test for equivalence.

I should note that the journal does claim to use peer review to evaluate submissions to see if they are ready for publication!

Comparing learning gains between the two groups

At one level equivalence might not be so important, as the authors used an ANCOVA (Analysis of Covariance) test, which tests for differences at post-test taking the pre-test into account. Yet this test also has assumptions that need to be tested for and met, but here they seem to have just been assumed.
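
For readers unfamiliar with the technique, this sketch with invented data shows what an ANCOVA of this kind involves – and one of its assumptions (homogeneity of regression slopes) that can, and should, be checked rather than assumed:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 93  # mirrors the study's group sizes, but the data here are invented
df = pd.DataFrame({
    "group": ["EG"] * 49 + ["CG"] * 44,
    "pre": rng.normal(loc=10, scale=3, size=n),
})
df["post"] = 1.5 * df["pre"] + 5 * (df["group"] == "EG") + rng.normal(0, 3, n)

# ANCOVA: post-test scores by condition, adjusting for pre-test scores.
ancova = smf.ols("post ~ pre + C(group)", data=df).fit()
print(ancova.summary().tables[1])

# Checking homogeneity of regression slopes: the pre/post relationship
# should not differ between the groups (interaction term non-significant).
slopes = smf.ols("post ~ pre * C(group)", data=df).fit()
print(f"interaction p = {slopes.pvalues['pre:C(group)[T.EG]']:.3f}")
```

Even a properly checked ANCOVA, though, would not address the unit of analysis problem.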

However, to return to an even more substantive point I made earlier: as the learners were not randomly assigned to the two different conditions/treatments, what should be compared are the two school-based groups (i.e., the unit of analysis should be the school group), but that (i.e., a sample of 1 class, rather than 40+ learners, in each condition) would not facilitate using inferential statistics to make a comparison. So, although the authors conclude

"that the achievement of the EG [taking n=49] after treatment (mean 34. 07 ± 15. 12 SD) was higher than the CG [taking n =44] (mean 20. 87 ± 12. 31 SD). These means were significantly different"

Kibirige, Osodo & Tlala, 2014, p.303

the statistics are testing the outcomes as if 49 units independently experienced one teaching approach and 44 independently experienced another. Now, I do not claim to be a statistics expert, and I am aware that most researchers only have a limited appreciation of how and why stats. tests work. For most readers, then, a more convincing argument may be made by focussing on the control of variables.

Controlling variables in educational experiments

The ability to control variables is a key feature of laboratory science, and is critical to experimental tests. Control of variables, even identification of relevant variables, is much more challenging outside of a laboratory in social contexts – such as schools.

In the case of Kibirige, Osodo & Tlala's study, we can set out the overall experimental design as follows


Independent variable: Teaching approach – predict-observe-explain (experimental) versus lectures (comparison condition)

Dependent variable: Learning gains

Controlled variable(s): Anything other than teaching approach which might make a difference to student learning

Variables in Kibirige, Osodo & Tlala's study

The researchers set up the two teaching conditions and measure learning gains, and need to make sure that any other factors which might have an effect on learning outcomes – so-called confounding variables – are controlled, that is, kept the same in both conditions.

Read about confounding variables in research

Of course, we cannot be sure what might act as a confounding variable, so in practice we may miss something which we do not recognise is having an effect. Here are some possibilities based on my own (now dimly recalled) experience of teaching in school.

The room may make a difference. Some rooms are

  • spacious,
  • airy,
  • well illuminated,
  • well equipped,
  • away from noisy distractions
  • arranged so everyone can see the front, and the teacher can easily move around the room

Some rooms have

  • comfortable seating,
  • a well positioned board,
  • good acoustics

Others, not so.

The timetable might make a difference. Anyone who has ever taught the same class of students at different times in the week might (will?) have noticed that a Tuesday morning lesson and a Friday afternoon lesson are not always equally productive.

Class size may make a difference (here 49 versus 44).

Could gender composition make a difference? Perhaps it was just me, but I seem to recall that classes of mainly female adolescents had a different nature than classes of mainly male adolescents. (And perhaps the way I experienced those classes would have been different if I had been a female teacher?) Kibirige, Osodo and Tlala report the sex of the students, but assuming that can be taken as a proxy for gender, the gender ratios were somewhat different in the two classes.


The gender make-up of the classes was quite different: might that influence learning?

School differences

A potentially major confounding variable is school. In this study the researchers report that the schools were "neighbouring" and that

Having been drawn from the same geographical set up, the learners were of the same socio-cultural practices.

Kibirige, Osodo & Tlala, 2014, p.302

That clearly makes more sense than choosing two schools from different places with different demographics. But anyone who has worked in schools will know that two neighbouring schools serving much the same community can still be very different. Different ethos, different norms, and often different levels of outcome. Schools A and B may be very similar (but the reader has no way to know), but when comparing between groups in different schools it is clear that school could be a key factor in group outcome.

The teacher effect

Similar points can be made about teachers – they are all different! Does ANY teacher really believe that one can swap one teacher for another without making a difference? Kibirige, Osodo and Tlala do not tell readers anything about the teachers, but as students were taught in their own schools the default assumption must be that they were taught by their assigned class teachers.

Teachers vary in terms of

  • skill,
  • experience,
  • confidence,
  • enthusiasm,
  • subject knowledge,
  • empathy levels,
  • insight into their students,
  • rapport with classes,
  • beliefs about teaching and learning,
  • teaching style,
  • disciplinary approach
  • expectations of students

The same teacher may perform at different levels with different classes (preferring to work with different grade levels, or simply getting on/not getting on with particular classes). Teachers may have uneven performance across topics. Teachers differentially engage with and excel in different teaching approaches. (Even if the same teacher had taught both groups we could not assume they were equally skilful in both teaching conditions.)

The teacher variable is likely to be a major difference between the groups.

Meta-effects

Another confounding factor is the very fact of the research itself. Students may welcome a different approach because it is novel and a change from the usual diet (or alternatively they may be nervous about things being done differently) – but such 'novelty' effects would disappear once the new way of doing things became established as normal. In which case, it would be an effect of the research itself and not of what is being researched.

Perhaps even more powerful are expectancy effects. If researchers expect an innovation to improve matters, then these expectations get communicated to those involved in the research and can themselves have an effect. Expectancy effects are so well demonstrated that in medical research double-blind protocols are used, so that neither patients nor the health professionals they directly engage with in the study know who is getting which treatment.

Read about expectancy effects in research

So, we might revise the table above:


Independent variable: Teaching approach – predict-observe-explain (experimental) versus lectures (comparison condition)

Dependent variable: Learning gains

Potentially confounding variables: School effect; Teacher effect; Class size; Gender composition of teaching groups; Relative novelty of the two teaching approaches

Variables in Kibirige, Osodo & Tlala's study

Now, of course, these problems are not unique to this particular study. The only way to respond to teacher and school effects of this kind is to do large scale studies, and randomly assign a large enough number of schools and teachers to the different conditions so that it becomes very unlikely there will be systematic differences between treatment groups.
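
In code terms, 'randomly assign schools' means something like the following (school identifiers invented) – it is the schools, not the individual students, that are shuffled into conditions:

```python
import random

# A hypothetical pool of participating schools -- the unit of assignment.
schools = [f"school_{i:02d}" for i in range(1, 41)]

random.seed(2024)
random.shuffle(schools)
experimental_schools = sorted(schools[:20])
control_schools = sorted(schools[20:])

print("P-O-E condition:", experimental_schools[:3], "...")
print("lecture condition:", control_schools[:3], "...")
# With enough schools in each arm, school- and teacher-level differences
# should average out across conditions instead of confounding them; the
# analysis must then respect the clustering (e.g., multilevel models).
```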

A good many experimental educational research studies that compare treatments across two classes or two schools are subject to potentially confounding variables that invalidate study findings and make any consequent conclusions and recommendations untrustworthy (Taber, 2019a). Strangely, often this does not seem to preclude publication in research journals. 1

Advice on controls in scientific investigations:

I can probably do no better than to share some advice given to both researchers, and readers of research papers, in an immunology textbook from 1910:

"I cannot impress upon you strongly enough never to operate without the necessary controls. You will thus protect yourself against grave errors and faulty diagnoses, to which even the most competent investigator may be liable if he [or she] fails to carry out adequate controls. This applies above all when you perform independent scientific investigations or seek to assess them. Work done without the controls necessary to eliminate all possible errors, even unlikely ones, permits no scientific conclusions.

I have made it a rule, and would advise you to do the same, to look at the controls listed before you read any new scientific papers… If the controls are inadequate, the value of the work will be very poor, irrespective of its substance, because none of the data, although they may be correct, are necessarily so."

Julius Citron

The comparison condition

It seems clear that in this study there is no strict 'control' of variables, and the 'control' group is better considered just a comparison group. The authors tell us that:

"the control group (CG) taught using traditional methods…

the CG used the traditional lecture method"

Kibirige, Osodo & Tlala, 2014, pp.300, 302

This is not further explained, but if this really was teaching by 'lecturing' then that is not a suitable approach for teaching school-age learners.

This raises two issues.

There is a lot of evidence that a range of active learning approaches (discussion work, laboratory work, various kinds of group work) engages and motivates students more than whole lessons spent listening to a teacher. Therefore any approach which basically involves a mixture of students doing things, discussing things, engaging with manipulatives and resources as well as listening to a teacher, tends to be superior to just being lectured. Good science teaching normally involves lessons sequenced into a series of connected episodes involving different types of student activity (Taber, 2019b). Teacher presentations of the target scientific account are very important, but tend to be effective when embedded in a dialogic approach that allows students to explore their own thinking and takes into account their starting points.

So, comparing P-O-E with lectures (if they really were lectures) may not tell researchers much about P-O-E specifically, as a teaching approach. A better test would compare P-O-E with some other approach known to be engaging.

"Many published studies argue that the innovation being tested has the potential to be more effective than current standard teaching practice, and seek to demonstrate this by comparing an innovative treatment with existing practice that is not seen as especially effective. This seems logical where the likely effectiveness of the innovation being tested is genuinely uncertain, and the 'standard' provision is the only available comparison. However, often these studies are carried out in contexts where the advantages of a range of innovative approaches have already been well demonstrated, in which case it would be more informative to test the innovation that is the focus of the study against some other approach already shown to be effective."

Taber, 2019a, p.93

The second issue is more ethical than methodological. Sometimes in published studies (and I am not claiming I know this happened here, as the paper says so little about the comparison condition) researchers seem to deliberately set up a comparison condition they have good reason to expect is not effective: such as asking a teacher to lecture and not include practical work or discussion work or use of digital learning technologies and so forth. Potentially the researchers are asking the teacher of the 'control' group to teach less effectively than normally to bias the experiment towards their preferred outcome (Taber, 2019a).

This is not only a failure to do good science, but also an abuse of those learners being deliberately subjected to poor teaching. Perhaps in this study the class in School B was habitually taught by being lectured at, so the comparison condition was just what would have occurred in the absence of the research; but this is always a worry when studies report comparison conditions that seem to deliberately disadvantage students. (This paper does not seem to report anything about obtaining voluntary informed consent from participants, nor indeed about how access to the schools was negotiated.)

"In most educational research experiments of the type discussed in this article, potential harm is likely to be limited to subjecting students (and teachers) to conditions where teaching may be less effective, and perhaps demotivating…It can also potentially occur in control conditions if students are subjected to teaching inputs of low effectiveness when better alternatives were available. This may be judged only a modest level of harm, but – given that the whole purpose of experiments to test teaching innovations is to facilitate improvements in teaching effectiveness – this possibility should be taken seriously."

Taber, 2019a, p.94

Validity of measurements

Even leaving aside all the concerns expressed above, the results of a study of this kind depend upon valid measurements. Assessment items must test what they claim to test, and their analysis should be subject to quality control (and preferably blind to which condition a script being analysed derives from). Kibirige, Osodo and Tlala append the test they used in the study (Appendix 2, pp.309-310), which is very helpful in allowing readers to judge at least its face validity. Unfortunately, they do not include a mark/analysis scheme to show what they considered responses worthy of credit.

"The [Achievement Test] consisted of three questions. Question one consisted of five statements which learners had to classify as either true or false. Question two consisted of nine [sic, actually eight] multiple questions which were used as a diagnostic tool in the design of the teaching and learning materials in addressing misconceptions based on prior knowledge. Question three had two open-ended questions to reveal learners' views on how salts dissolve in water (Appendix 1 [sic, 2])."

Kibirige, Osodo & Tlala, 2014, p.302

"Question one consisted of five statements which learners had to classify as either true or false."

Question 1 is fairly straightforward.

1.2: Strictly, all salts do dissolve in water to some extent. I expect that students were taught that some salts are insoluble. Often in teaching we start with simple dichotomous models (metal-non-metal; ionic-covalent; soluble-insoluble; reversible-irreversible) and then develop these into more continuous accounts that recognise differences of degree. It is possible here, then, that a student who had learnt that all salts are soluble to some extent might have been disadvantaged by giving the 'wrong' ('True') response…

…although, actually, there is perhaps no excuse for answering 'True' ('All salts can dissolve in water') here, as a later question begins "3.2. Some salts does [sic] not dissolve in water. In your own view what happens when a salt do [sic] not dissolve in water".

Despite the test actually telling students the answer to this item, it seems only 55% of the experimental group and 23% of the control group obtained the correct answer on the post-test – precisely the same proportions as on the pre-test!



1.4: Seems to be 'False' as the ions exist in the salt and are not formed when it goes into solution. However, I am not sure if that nuance of wording is intended in the question.

Question 2 gets more interesting.


"Question two consisted of nine multiple questions" (seven shown here)

I immediately got stuck on question 2.2 which asked which formula (singular, not 'formula/formulae', note) represented a salt. Surely, they are all salts?

I had the same problem on 2.4 which seemed to offer three salts that could be formed by reacting acid with base. Were students allowed to give multiple responses? Did they have to give all the correct options to score?

Again, 2.5 offered three salts which could all be made by direct reaction of 'some substances'. (As a student I might have answered A assuming the teacher meant to ask about direct combination of the elements?)

At least in 2.6 there only seemed to be two correct responses to choose between.

Any student unsure of the correct answer in 2.7 might have taken guidance from the charges as shown in the equation given in question 2.8 (although indicated as 2.9).

How I wished they had provided the mark scheme.



The final question in this section asked students to select one of three diagrams to show what happens when a 'mixture' of H2O and NaCl in a closed container 'react'. (In chemistry, we do not usually consider salt dissolving as a reaction.)

Diagram B seemed to show ion pairs in solution (but why the different form of representation?). Option C did not look convincing, as the chloride ions had altogether vanished from the scene and sodium seemed to have formed multiple bonds with oxygen and hydrogens.

So, by a process of elimination, the answer is surely A.

  • But components seem to be labelled Na and Cl (not as ions).
  • And the image does not seem to represent a solution as there is much too much space between the species present.
  • And in salt solution there are many water molecules between solvated ions – missing here.
  • And the figure seems to show two water molecules have broken up, not to give hydrogen and hydroxide ions, but lone oxygen (atoms, ions?)
  • And why is the chlorine shown to be so much larger in solution than it was in the salt? (If this is meant to be an atom, it should be smaller than the ion, not larger. The real mystery is why the chloride ions are shown so much smaller than the smaller sodium ions before solvation occurs, when chloride ions have about double the radius of sodium ions.)

So diagram A is incredible, but still not quite as crazy an option as B and C.

This is all despite

"For face validity, three Physical Sciences experts (two Physical Sciences educators and one researcher) examined the instruments with specific reference to Mpofu's (2006) criteria: suitability of the language used to the targeted group; structure and clarity of the questions; and checked if the content was relevant to what would be measured. For reliability, the instruments were piloted over a period of two weeks. Grade 10 learners of a school which was not part of the sample was used. Any questions that were not clear were changed to reduce ambiguity."

Kibirige, Osodo & Tlala, 2014, p.302

One wonders what the less clear, more ambiguous, versions of the test items were.

Reducing 'misconceptions'

The final question was (or, perhaps better, questions were) open-ended.



I assume (again, it would be good for authors of research reports to make such things explicit) these were the questions that led to claims about the identified alternative conceptions at pre-test.

"The pre-test revealed a number of misconceptions held by learners in both groups: learners believed that salts 'disappear' when dissolved in water (37% of the responses in the 80% from the pre-test) and that salt 'melts' when dissolved in water (27% of the responses in the 80% from the pre-test)."

Kibirige, Osodo & Tlala, 2014, p.302

As the first two (sets of) questions only admit objective scoring, it seems that these data can only have come from responses to Q3. This means that the authors cannot be sure how students are using terms. 'Melt' is often used in an everyday, metaphorical, sense of 'melting away'. This use of language should be addressed, but for at least some of these learners it may be not so much a conceptual error as a poor use of terminology.

To say that salts disappear when they dissolve does not seem to me a misconception: they do. To disappear means to no longer be visible, and that's a fair description of the phenomenon of salt dissolving. The authors may assume that if learners use the term 'disappear' they mean the salt is no longer present, but literally they are only claiming it is not directly visible.

Unfortunately, the authors tell us nothing about how they analysed the data collected from their test, so the reader has no basis for knowing how they interpreted student responses to arrive at their findings. The authors do tell us, however, that:

"the intervention had a positive effect on the understanding of concepts dealing with dissolving of salts. This improved achievement was due to the impact of POE strategy which reduced learners' misconceptions regarding dissolving of salts"

Kibirige, Osodo & Tlala, 2014, p.305

Yet, oddly, they offer no specific basis for this claim – no figures to show the level at which "learners believed that salts 'disappear' when dissolved in water …and that salt 'melts' when dissolved in water" in either group at the post-test.


                            'disappear' misconception         'melt' misconception
pre-test: experimental      not reported                      not reported
pre-test: comparison        not reported                      not reported
pre-test: total             (0.37 × 0.8 × 93 ≈) 27.5 (!?)     (0.27 × 0.8 × 93 ≈) 20.1 (!?)
post-test: experimental     not reported                      not reported
post-test: comparison       not reported                      not reported
post-test: total            not reported                      not reported

Data presented about the numbers of learners considered to hold specific misconceptions said to have been 'reduced' in the experimental condition

It seems that neither the journal referees nor the editor felt any important information was missing here that should have been added before publication.
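For what it is worth, the arithmetic behind the 'total' row in the table is easy to reconstruct – assuming (as the table does) that the reported percentages apply to the combined sample of 93 learners:

    0.37 × 0.80 × 93 ≈ 27.5 learners ('disappear')
    0.27 × 0.80 × 93 ≈ 20.1 learners ('melt')

Neither product comes to a whole number of learners – which is itself a hint that something about how these percentages were derived, or rounded, has gone unreported.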

In conclusion

Experiments require control of variables. Experiments require random assignment to conditions. Quasi-experiments, where random assignment is not possible, are inherently weaker studies than true experiments.

Control of variables in educational contexts is often almost impossible.

Studies that compare different teaching approaches using two different classes each taught by a different teacher (and perhaps not even in the same school) can never be considered fair comparisons able to offer generalisable conclusions about the relative merits of the approaches. Such 'experiments' have no value as research studies. 1

Such 'experiments' are like comparing the solubility of two salts by (a) dropping a solid lump of 10g of one salt into some cold water, and (b) stirring a finely powdered 35g sample of the other salt into hot propanol; and watching to see which seems to dissolve better.

Only large scale studies that encompass a wide range of different teachers/schools/classrooms in each condition are likely to produce results that are generalisable.

The use of inferential statistical tests is only worthwhile when the conditions for those statistical tests are met. Sometimes tests are said to be robust to modest deviations from such requirements as normality. But applying tests to data that do not come close to fitting the conditions of the test is pointless.
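To make that concrete: before relying on, say, an independent-samples t-test, a researcher would at least check the test's conditions first. Here is a minimal sketch in Python (the scipy calls are real, but the scores and the conventional 0.05 thresholds are my own invented illustration, not anything taken from the paper):

    # Minimal sketch: check the usual t-test conditions before trusting its result.
    # The scores below are invented purely for illustration.
    from scipy import stats

    experimental = [12, 15, 14, 10, 13, 16, 11, 14, 15, 12]
    comparison = [9, 11, 10, 8, 12, 10, 9, 11, 10, 9]

    # Condition 1: approximate normality within each group (Shapiro-Wilk test).
    _, p_norm_exp = stats.shapiro(experimental)
    _, p_norm_comp = stats.shapiro(comparison)

    # Condition 2: similar variances across the groups (Levene's test).
    _, p_var = stats.levene(experimental, comparison)

    if min(p_norm_exp, p_norm_comp) > 0.05 and p_var > 0.05:
        # The conditions look plausible, so a t-test is defensible.
        stat, p = stats.ttest_ind(experimental, comparison)
    else:
        # The conditions are violated, so fall back on a non-parametric test.
        stat, p = stats.mannwhitneyu(experimental, comparison)

    print(f"test statistic = {stat:.2f}, p = {p:.3f}")

Of course, passing such checks would not rescue a design in which the two conditions also differ in teacher, school and teaching approach – but failing to check (or report checking) them leaves any quoted significance level hard to interpret.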

Any research is only as trustworthy as the validity of its measurements. If one does not trust the measuring instrument or the analysis of measurement data then one cannot trust the findings and conclusions.


The results of a research study depend on an extended chain of argumentation, where any broken link invalidates the whole chain. (From 'Critical reading of research')

So, although the website for the Mediterranean Journal of Social Sciences claims "All articles submitted …undergo to [sic] a rigorous double blinded peer review process", I think the peer reviewers for this article were either very generous, very ignorant, or simply very lazy. That may seem harsh, but peer review is meant to help authors improve submissions till they are worthy of appearing in the literature; here peer review has failed, and the authors (and the readers of the journal) have been let down by the reviewers and by the editor who ultimately decided this study was publishable in this form.

If I asked a graduate student (or indeed an undergraduate student) to evaluate this paper, I would expect to see a response along these lines:


Applying the 'Critical Reading of Empirical Studies Tool' to 'The effect of predict-observe-explain strategy on learners' misconceptions about dissolved salts'

I still think P-O-E is a very valuable part of the science teacher's repertoire – but this paper cannot contribute anything to support that view.

Work cited:

Kibirige, I., Osodo, J. & Tlala, K. M. (2014) The effect of predict-observe-explain strategy on learners' misconceptions about dissolved salts. Mediterranean Journal of Social Sciences, 5(4), 300-310.

Taber, K. S. (2019a) Experimental research into teaching innovations: responding to methodological and ethical challenges. Studies in Science Education, 55(1), 69-119.

Note

1 A lot of these invalid experiments get submitted to research journals, scrutinised by editors and journal referees, and then published without any acknowledgement of how they fall short of meeting the conditions for a valid experiment (see the examples discussed in Taber, 2019a). It is as if the mystique of the experiment is so great that even studies with invalid conclusions are considered worth publishing, as long as the authors did an experiment.

My work in the field of catalysis

Another predatory conference?


Keith S. Taber


Dear Programme Manager

Thank you for your message on behalf of the scientific committee offering me the position of invited speaker at the 12th Edition of the Global Conference on Catalysis, Chemical Engineering & Technology.

I appreciate that your scientific committee comprises eminent leaders in the field of catalysis, but when you write that "By going through your work in the field of Catalysis, our scientific committee would like to offer you the position of Speaker" I am at a loss to work out what

  • Stanislaw Dzwigaj
  • Jose C Conesa
  • Anne M Gaffney
  • Nikolaos C Kokkinos
  • Dmitry Nikushchenko
  • M A Martin Luengo
  • Osman Adiguzel
  • Ahmet Haxhiaj
  • Eugenio Meloni
  • Ramesh C Gupta
  • Abdelkrim Abourriche

have found in my work that makes them feel it would be of any particular interest to your delegates.

Perhaps you would be kind enough to ask the scientific committee to specify which of my publications they consider to be in the field of catalysis, so I have some idea what I am being invited to speak about.

I assume that as an invited speaker all relevant fees would be waived?

I am afraid that otherwise I will just have to conclude that this is yet another dishonest approach from a predatory conference where 'invited speaker' invitations are of no worth and are issued indiscriminately as a ploy to elicit money from potential speakers: without any regard at all for their suitability or relevance – as long as they can pay you the conference fees.

As you "would be glad to answer any questions [I] may have and provide necessary clarifications where needed" I look forward to your clarification so I can put my mind to rest and avoid concluding that this invitation is just another scam.

Best wishes

Keith

[Email response to the conference (copied to committee members). Clarifications awaited*]


The scientific committee of a catalysis conference has, allegedly, invited me to speak on the topic.

According to the conference programme manager, this committee of experts invited me to speak after 'going through' my (non-existent) 'work in the field of Catalysis'.
Are they incompetent? (I very much doubt that.)
Did the programme manager mishear 'Benjamin List' or 'David MacMillan' as 'Keith Taber'?
Or
Is this just another lie to publicise a predatory conference?



* Update: A clarification

To be fair to the 12th Edition of the Global Conference on Catalysis, Chemical Engineering & Technology, I have today (20th July) received a response. I have been informed that:

"We went through your books and articles regarding teachings in chemistry and concepts and thought to invite you to our event, as most of the delegates who attend our event are from academia"

That sounds reasonable enough, as long as there is a suitable place in the programme.

"As an invited speaker, there are no registration charges to be paid"

Again, that is reasonable.

It is one thing to pay to be present at a conference you are seeking to attend, but quite another to pay for the privilege of giving a talk when you have been invited to speak.

"But you can present on any of the topics related to scientific sessions"

Okay, so where would a talk to 'mostly academics' about 'teachings in chemistry and concepts' fit?

The conference sessions are on:

  • Catalysis and Porous Materials
  • Catalysis for Energy
  • Chemical Engineering
  • Heterogeneous Catalysis
  • Catalysis in Nanotechnology
  • Environmental Catalysis
  • Catalytic Materials
  • Fluid Mechanics
  • Chemical Synthesis and Catalysts Synthesis
  • Macrocyclic and Supramolecular chemistry
  • Petrochemical Engineering
  • Green and Sustainable Chemistry
  • Catalysis for Renewable Sources
  • Catalysis for Biorefineries
  • Chemical Kinetics and Catalytic Activity
  • Photochemistry, Photobiology and Electrochemistry

So no obvious home for a talk on teaching about chemical concepts.

The topics I was directed to in the email were

  • Catalysis and Porous Materials
  • Catalysis for Energy
  • Photochemistry, Photobiology and Electrochemistry
  • Catalysis for Renewable Sources
  • Chemical Kinetics and Catalytic Activity
  • Catalysis and Applications
  • Homogeneous Catalysis, Molecular Catalysis
  • Catalysis for Biorefineries
  • Chemical Engineering
  • Heterogeneous Catalysis
  • Advances in Catalysis and Chemical Engineering
  • Reaction Chemistry and Engineering
  • Catalysis in Nanotechnology
  • Industrial Catalysis and Process Engineering
  • Environmental Catalysis
  • Advanced synthesis, Catalytic systems and new catalyst designing
  • Biocatalysis and Biotransformation
  • Catalytic Materials
  • Organometallics, Organocatalysis and Bioinorganic Chemistry
  • Surface Chemistry: Colloid and Surface aspects
  • Computational Catalysis
  • Enantioselective catalysis
  • Chemical Synthesis and Catalysts Synthesis
  • Fluid Mechanics
  • Micro-emulsion Catalysis and Catalytic Cracking
  • Macrocyclic and Supramolecular chemistry
  • Integrated Catalysis
  • Plasma Catalysis
  • Enzymes, Coenzymes and Metabolic Pathways
  • Nuclear Chemistry/Radiochemistry
  • Separation Processes in Chemical Technology
  • Petrochemical Engineering
  • Green and Sustainable Chemistry
  • Analytical Methodologies
  • Microbial Technology
  • Mechanisms of Microbial Transcription

So, I have been invited because of my expertise relating to teaching chemical concepts (one of the very few areas where I really might be considered to have some kind of expertise), and can participate for free, as long as I submit a talk on some aspect of the science of chemical catalysis in a session about some sub-field of chemistry relating to catalysis.

This is like writing to Reece James to tell him that, on the basis of his exceptional skills as a footballer, he is invited to talk at a literary festival on any genre of fiction writing; or, on the basis of her song-writing and musical achievements, inviting Kate Bush to give a keynote at a history conference – and allowing her to choose between speaking about Roman Britain, the Agricultural Revolution, Europe between the 'World Wars', or Sino-Japanese tensions over Korea in the nineteenth century.

So, I recognise the attempt to make good on the invitation – but it is hardly a total 'save'.



Addendum: A glut of catalytic conferences?



By coincidence, or otherwise, today I also received an invitation to be a speaker at the

"?3rd Global Congress on Chemistry and Catalysis?, an event hosted by Phronesis LLC and held at Dubai, UAE during November 18-19, 2022 [where] The main theme of the conference is ?Contemporary Advances and Innovations in chemistry and catalysis?"

I would apparently be a 'perfect person' to speak at one of the sessions. These are on:

  • Materials Science and Engineering
  • Advanced Structural Materials
  • Ceramics, Polymers and Composite Materials
  • Advances in Biosensors, Biomaterials, Medical devices and Soft Materials
  • Corrosion, Alloys, Mining and Metallurgy
  • Hybrid Materials and Bioinspired Materials
  • Materials in Nuclear Energy Science and Engineering
  • Energy, Environment and Materials Technology
  • Computational Materials Science
  • 3D Printing Technology
  • Materials Synthesis And Processing
  • Functional materials, Metals, and Metal Casting Technology
  • Emerging Smart Materials, Meta Materials and Smart Coatings
  • Materials Chemistry, Sustainable Chemistry and Materials Physics
  • Polymer Science and Polymeric Materials
  • Nanoscience and Nanotechnology
  • Optics Photonics Electronic and Magnetic Materials
  • Glass Science and Technologies
  • Nanotechnology in Materials Science
  • Nanotechnology for Energy and the Environment
  • Nanomaterials and 2D Materials
  • Carbon Nanomaterials, Nanostructures and Nanocomposites
  • Graphene Technologies and carbon Nanotubes
  • Manufacturing Technology and Instrumentation Technology
  • Materials for Energy and the Environment
  • Nanotechnology in Healthcare and its Applications

Hm. Perhaps I am not quite the 'perfect person', after all?



Diabolical diabetes journal awards non-specialist guest editorship (for a price)

"By the pricking of my thumbs,
Something wicked this way comes"


Keith S. Taber


Diabetes is a life-threatening condition – so one might hope that a research journal called 'Journal of Diabetes Research Reviews & Reports' would have serious academic standards
(Image by Tesa Robbins from Pixabay)

An open access journal that charges USD $1519 for publication (and "will not issue refunds of any kind"), that is available for subscription at "Euro € 3600.00 for Single Volume, € 600.00 for Single Issue (+postage charge €100)", but which wants me to send it "$2519" because I have been awarded membership.

Dear Henderson

Thank you for your email 'Membership for Your Publications' notifying me that the Journal of Diabetes Research Reviews & Reports has awarded me 'membership' based on my research profile. That is rather incredible as my research is in science education. The most relevant publication that comes to mind is "Is 6% kidney function just as good as 8% kidney function? A case of justifying dubious medical ethics by treating epistemology as ontology" which is not peer-reviewed, but a post on my personal blog.

This does rather suggest that either

  • the Journal of Diabetes Research Reviews & Reports has a rather bizarre notion of its scope given the journal title, or
  • it has extremely low standards in terms of what it feels it might be happy to publish.
  • Or, perhaps both?

I am a little confused by your final paragraph, which seems to suggest that although I have been 'awarded' various benefits (well, they might have been benefits had I been a diabetes researcher) you would like me to send you $2519 (in some unspecified currency). I only ever recall being honoured with one academic award before, and that came with a sum of money. That is, when you make an academic award, you give money to the recipient, not the other way around.

So, let's be honest.

You do not know, or indeed care, if I know anything about diabetes research. (Either you have not examined my research profile to find out; or whoever was tasked with this has such limited scholarly background that they have no notion of how to identify publications about diabetes research – such, perhaps, as looking to see if the words 'diabetes' or 'diabetic' appear in any paper titles or keywords: not exactly a challenging higher level task.)

You are not making me an award.

You are trying to sell me some kind of a package of 'benefits' in relation to publishing my work in your dodgy journal. That is, the Journal of Diabetes Research Reviews & Reports is one of the many predatory journals seeking to take money from scholars without being in a position to offer a service consistent with normal standards of academic quality in return. (This has already been demonstrated by the journal identifying someone with no publications in the field as 'a potential author' for the journal based on scrutinising my 'research profile in [sic] online'. If that is the level of competence to be expected of the editorial and production side of the journal, why would any serious scholar let their work be published in it?)

That apparent lack of competence would not, in itself, justify spending my time responding to your invitation.

I write because I find these tactics dishonest. You deliberately set out to deceive by pretending you are offering an award based on the excellence of a scholar's research. I really do not like lying, which is antithetical to the whole academic enterprise. So, I reply to call out the lie.

If you feel that I have misrepresented the situation, and that my research profile justifies an award in the field of diabetes research, then I would be very happy to receive your explanation. Otherwise, perhaps you might wish to consider if you really are comfortable working in an unethical organisation and being complicit in lying to strangers in this way?

Best wishes

Keith


Notification of an 'award'. Benefits (once I have paid a fee) include being appointed a guest editor.

Update (5th August 2022)

I have just received a response from the journal…


"Anticipating for [my] positive response" -despite my reply to the Journal!