The sugger strikes back!

An update on the 'first annual International Survey of Research Centres and Institutes'


Keith S. Taber (masquerading as a learned academic)


if he wanted me to admit I had been wrong, Hussain could direct me to the released survey results and assure me that the data collected for the survey was not being used for other purposes. That is, he should have given me grounds to think the survey was a genuine piece of research and not 'sugging'


Some months ago I published an article in this blog about a message I received from an organisation called Acaudio – which has a website where academics can post audio recordings promoting their research – inviting me to participate in "the first annual International Survey of Research Centres and Institutes". I was suspicious of this invitation for a number of reasons (as I discuss at 'The first annual International Survey of Gullible Research Centres and Institutes').

Several things suggested to me that this was not a genuine piece of academic research, including the commitment that "We will release the results over the next month" which seemed so unrealistic as to have been written either by someone with no experience of collecting and analysing large scale survey data – or someone with no intention of actually following through on the claim.

Sugging?

Having taken a look at the survey questions, I felt pretty sure this was an example of what has been labelled as 'sugging'. Sugging is a widely recognised, and indeed widely adopted, unethical practice of collecting marketing information by framing it as a survey. The Market Research Society explains that,

Sugging is a market research industry term, meaning 'selling under the guise of research'. Sugging occurs when individuals or companies pretend to be market researchers conducting a research, when in reality they are trying to build databases, generate sales leads or directly sell product or services….

The practices of sugging and frugging [fundraising under the guise of market research] bring discredit on the profession of research… and mislead members of the public when they are being asked for their co-operation…

Failing to clearly specify the purpose for which the data is being collected is also a breach of…the first principle of the Data Protection Act 1998.

https://www.mrs.org.uk/standards/suggingfaq

Although I thought the chances of the results of the first annual International Survey of Research Centres and Institutes actually being released within the month, or even within a few months to allow for a modest level of over-promising, were pretty minuscule, I did think I should wait a few months and then do a search to see if such a report had appeared. I did not think I was likely to find such a report released into the public domain, but any scientist has to be open-minded enough to consider they might be wrong – and certainly in my own case I've collected enough empirical evidence over the years to know I am not just, in principle, fallible.

Acaudio doth protest too much, methinks

But (being fallible) I'd rather forgotten about this and had not got round to doing a web search. Until, that is, I was prompted to do so by receiving an email from the company founder, Hussain Ayed, who had had his attention drawn to my blog, and was – understandably perhaps – not happy about my criticisms:



Hussain's letter did not address my specific points from the blog (as he did not want to "get into the nitty gritty of it all"), but assured me his company was genuinely trying to do useful work, and there was no scamming.

Of course, I had not suggested Acaudio, the organisation, was itself a 'scam': in my earlier article I had pointed out that Acaudio was offering a free, open-access, service which was likely to be useful to academics – and even briefly pointed out some positive features of their website.

But Acaudio's 'survey' was a different matter. It did not meet the basic requirements for a serious academic study, and it asked questions that seemed clearly designed to link to potential selling points for a company offering services to increase research impact (so, perhaps, Acaudio).



And it promised a fantastic time-scale. Perhaps a very large organisation, with staff fully dedicated to analysis and reporting, could have released international survey results within a month of collecting data – perhaps? But Acaudio was a company with one company officer that reported employing one person.

Given the scale of the organisation, what Acaudio have achieved with their website in a relatively short time is highly impressive. But…

…where is that survey report?

I replied to Hussain, as below.

Dear Hussain Ayed

Thank you for your message.

I have not written "a comprehensive attack on [your] company" and do not have a sufficient knowledge-base to have done so. I have indeed, however, published a blog article criticising your marketing techniques based on the direct evidence in messages you have sent me. In particular, I claimed that,

(i) (despite being registered as a UK based company) you did not adhere to the UK regulations concerning direct marketing. (I assume you are not seeking to challenge this given the evidence of your own emails)

(ii) that you were also 'sugging': undertaking marketing under the guise of carrying out a survey.

If I understand your complaint, you are suggesting in regard to point (ii) that you really were carrying out a survey for the public good (rather than to collect information for your own commercial purposes) and that any apparent failure of rigour in this regard actually resulted from a lack of relevant expertise within the company. If so, perhaps you will send me, or tell me where I can access, the published outcome of the survey (due to be available by the middle of June 2023 according to your earlier message). I have looked online for this, but a Google search (using the term "International Survey of Research Centres and Institutes") failed to locate the report.

Can you offer me an assurance that information collected for the survey was ONLY used for the analysis that led to the published survey report (assuming there is one you can point me to), and that this information was not retained by your organisation as a basis for contacting individuals with regard to your company's services? If you can offer appropriate assurances then I will be happy to add an inserted edit into the blog to include a statement along the lines that the company assures me that all information collected was only used for the purposes of producing a survey report, and was not retained or used in any other way by the company.

So, to summarise regarding point (ii), if this survey was not a scam, please (a) point me to the outcomes, and (b) give me these assurances about not collecting information under false pretences.

You also have the right to reply directly. If you really think anything in my article amounted to "misleading bits of 'evidence' " then please do correct this. You are free to submit a response in the comments section at the bottom of the page. If you wish to do that, I will be happy to publish your reply (subject to my usual restrictions which I am sure should not be any impediment to you – so, I will not publish anything I think might be libellous of a third party, nor anything with obscenity/profanity etc. Sadly, I do sometimes have to reject comments of these kinds.)

I recognise that comments have less prominence than the blog article they follow, and that indeed some readers may not get that far in their engagement with an article. Therefore, if you do submit a reply I am happy to also add a statement at the HEAD of my article to point out to readers that there is a reply on behalf of the company beneath the article, so my readers see that notice BEFORE proceeding to read my own account.

I am not looking for people/organisations to criticise for the sake of it, but have become concerned about the extent of unethical practice in the name of academic work (such as the marketing of predatory journals and conferences) and do point out some of the examples that come my way. I believe such bad practice is very damaging, and especially so for students who are new to the academic world, and for those working in under-resourced contexts who may be under extreme pressure to achieve 'tenure'. People spend their limited funds on getting published in journals that have no serious peer review (and so are not taken seriously by most academics), or presenting at conferences which 'invite' contributions from anyone prepared to pay the fees. I do not spend time looking for such bad practice: it arrives in my inbox on a fairly frequent basis.

Perhaps your intentions are indeed honourable, and perhaps you are doing good work. Perhaps you are indeed "working to tackle inequality in higher education and academia", which obviously would be valuable, although I am not sure how this is achieved by working with groups at Cambridge such as the Bioelectronic Systems Tech Group – unless you perhaps charge fees to those in wealthy institutions to allow you to offer a free service for those elsewhere? If you do: good on you. Even so, I would strongly suggest you 'clean up your act' as far as your marketing is concerned, and make sure your email campaigns are within the law. By failing to follow the regulations you present your organisation as either being unprofessional (giving the impression no one knows what they are doing) or dodgy (if you* know the regulations, but are choosing not to follow them). *I assume you are responsible for the marketing strategy, but even if someone else is doing this for you, I suspect you (as the only registered company officer) would be considered ultimately responsible for not following the regulations.

If you are genuine about wishing to learn more about undertaking quality surveys, there are many sources of information. My pages on research methods might be a place to get some introductory background, but if this is to be a major part of your company's activity I would really suggest you should employ someone with expertise, or retain a consultant who works in that area.

Thank you for the offer to work with you, but I am retired and have too many existing projects to work on – and in any case you should work with someone you genuinely respect, not someone that you consider only to "masquerade as a learned academic" and who has "shaky morals".

Best wishes

Keith

My key point was that if he wanted me to admit I had been wrong, Hussain could direct me to the released survey results and assure me that the data collected for the survey was not being used for other purposes. That is, he should have given me grounds to think the survey was a genuine piece of research and not 'sugging'.

The findings of the survey are 'reserved'

Later that day, I got the following reply:



So, it seems the research report that was supposed to have been released ("over the next month" – according to Acaudio's email dated 15th May 2023) was not available, and – furthermore – would not be made available to me.

  • A key principle of scientific research is that the outcomes are published – that is, made available to the public – and not "reserved" for certain people the researchers select!
  • A key feature of ethical research is that a commitment is made to make outcomes available (as Acaudio did) and this is followed through (as Acaudio did not).
What is the research data being used for?

Hussain also failed to offer any assurances that the data collected under the claim (pretence, surely) of carrying out survey research was not being used for commercial purposes – as a basis for evaluating the potential merits of approaching different respondents to tender for services. I cannot prove that Acaudio was using the collected information for such purposes, but if my suspicions were misplaced (and if Hussain really wanted to persuade me that the survey was not intended as a scam) it would have been very easy to simply include a sentence in his response to that effect – to have assured me that the research data was being analysed anonymously and handled separately from the company's marketing data with a suitable 'ethical wall' between.1

That is, Hussain could have simply got into enough of the "nitty gritty" to have offered an assurance of following an ethical protocol, instead of choosing to insult me…as I pointed out to him:-


Dear Hussain

Thank you for your message.

So, the 'survey' results (if indeed any such document actually exists) that you indicated to me would be released by mid-June are still not actually available in the public domain. As you say: 'Hmm'.

You are right, that I would have no right to ask you to provide me with anything – except that YOU ASKED ME to believe I misjudged you, and to withdraw my public criticisms; and so I ASKED YOU to provide the evidence to persuade me by (i) proving there was a survey analysis with published results, and (ii) giving an assurance that you did not use, for your company's marketing purposes, data supposedly collected for publishable research. There is of course no reason why you should have provided either the results or the assurances, unless you actually did feel I had judged Acaudio too harshly and you wanted to give me reason to acknowledge this. The only thing that might give me "some sort of power over [you]" in this regard is your suggestion to me that I might wish to "take back the claims that [I] made". Can I remind you: you contacted me. You contacted me, unsolicited, in December 2022, and then again in May 2023. This morning, you contacted me again specifically to suggest my suggestions of wrong-doing were misjudged. But you will not back that up, so you have simply reinforced my earlier inferences.

For some reason that is not clear to me, you think that my mind is on money – that is presumably why I spend some of my valuable time highlighting poor academic practices on a personal website that brings in no income and is financed from my personal funds. Perhaps that is the company director finding it hard to get inside the mind of a retired teacher who worked his entire career in the public sector? (That is not meant as an insult – I probably have the reverse difficulty in understanding the motivations of the commercial mind. Perhaps that is why these are "things that are beyond [my] understanding"?)

I do not have any problem with you setting up a company to make money (good luck to you if you work hard and treat people with due respect), and think it is perfectly possible for an organisation to both make money and produce public goods – I am not against commercial organisations per se. My 'vested interests' relate to commitments to certain values that I think underpin both good science and academic activities more broadly. A key one is honesty (which is one fundamental aspect of treating people with due respect).

We are all entitled (perhaps even have a duty?) to make the strongest arguments for our positions, but when people knowingly misrepresent (e.g., "We will release the results over the next month" but no publication is forthcoming) in order to advance their interests, this undermines the scholarly community. Anyone can be wrong. Anyone can be mistaken. Anyone can fail in a venture. (Such as promising a report, genuinely intending to produce one, but finding the task was more complex than anticipated. Had that been your response, I might have found this feasible. Instead, you promised to release the results, but now you claim you have "every right to ignore [my] request for the outcomes". Yes, that is so – if the commitment you made means nothing.)

As long as we can trust each other to be open and honest the system will eventually self-correct in cases when there are false (but honestly motivated) claims. Yet, these days, academics are flooded with offers and claims that are not mistaken, but deliberately misleading. That is what I find so troublesome that I take time to call out examples. That may seem strange to you, but you have to remember I have worked as a school, college, and university teacher all my working life, so I identify at a very deep level with the basic values underpinning the quest for knowledge and learning.

When I get an email from someone claiming they are doing a survey, but which seems to be an attempt to market services, I do take it personally. I do not like to be lied to. I do not like to be treated as a fool. And I do not like the thought that perhaps less experienced colleagues and graduate students may take such approaches at face value and not appreciate they are being scammed.

Can does not equate to should: you may have "the ability to write and say what [you] want", but that does not mean you have the right to deliberately mislead people. You say you will not be engaging with me any more. Fine. You started this correspondence with your unsolicited approaches. I will be very happy if you remove me from your marketing list (that I did not sign up for) and do not contact me again. That might be in both our interests.

And despite all this, I wish you well. Whatever your mistakes in the past, if you do genuinely wish to make a difference in the way you suggest, then I hope you are successful. But please, if you believe in your company and the contribution it can make, seek to be totally honest with potential clients. If you are in this for the long term, then developing trust and a strong reputation for ethical business practices will surely create a fund of social capital that will pay dividends as you build up the organisation. Whereas producing emails of the kind you have sent me today is likely to be counter-productive and just alienate people: using ad hominem points – I am masquerading as a learned academic, out of touch, arrogant, unfit and entitled; with shaky morals and vested interests; things are beyond my understanding; I write nonsense – simply suggests you have no substantive points to support your position. By doing this you automatically cede the higher ground. And, moreover, is that really the way you want your company represented in its communications?

Best wishes

Keith 


As I wrote above, Acaudio seem to be doing a really good job in setting up a platform where researchers can post accounts of their research – and given the scale of the organisation – I assume much (if not all) of that is down to Hussain. That, he can be proud of.

However, using the appearance of an international survey as a cover for collecting data that can be used to market a company's services is widely recognised as a dishonest and unethical (if not illegal 2) practice. I think he should be less proud of himself in that regard.

If Hussain still wants to maintain that his request for contributions to the first annual International Survey of Research Centres and Institutes was intended as a genuine attempt at academic research, rather than just a marketing scam, then he still has the option of publishing a report of the study so that the academic community can evaluate the extent to which the survey meets the norms of genuine research; and so that, at very least, he will have met one key criterion of academic research (publication).

This would also show that Acaudio are prepared to meet their side of the contract they offered to potential respondents (i.e., please contribute to this survey – in consideration we will release the results over the next month). Any reputable business should be looking to make good on its promises.


Notes

1 The idea of an ethical wall (sometimes referred to as a 'Chinese wall') is important in businesses where there is the potential for conflicts of interest. Consider, for example, firms of lawyers that may have multiple clients, and where information offered in confidence by one client could have commercial value for another. The firm is expected to have protocols in place so that information about one client is not either leaked to another client, or (deliberately or inadvertently) influences advice given to another client. To avoid inadvertent influence, it may be necessary to ensure staff working with one client are not involved in work for another client that may be seen to have conflicting interests.

A company may hire a market research organisation to carry out market research to inform them about future strategies – so the people analysing the data have no bias due to preferred outcomes, and no temptation to misuse the data for direct marketing purposes. The commissioned report will not identify particular respondents. Then there is an ethical wall between the market researchers who report on the overall state of the market, and the client company's marketing and sales section.

My reference to the small size of Acaudio is not intended as an inherent criticism. My original point was that such a small company was unlikely to have the capacity to carry out a meaningful international survey (which does not imply the intention to do so was necessarily inauthentic – Acaudio might have simply overstretched itself).

However, a very small company might well have inherent difficulties in carrying out genuine research which did not leak information about specific respondents to those involved in sales.

Many surveys invite people to offer their email if they wish for feedback or to make themselves available for follow-up interviews – but offer an assurance the email address will not be used for other purposes, and need not be given to participate. Acaudio's survey required identifying information.2 This is a strong indicator that the primary purpose was not scholarly research.



2 The Data Protection Act 2018 concerns personal information:

"Everyone responsible for using personal data has to follow strict rules called 'data protection principles'. They must make sure the information is:

  • used fairly, lawfully and transparently
  • used for specified, explicit purposes
  • used in a way that is adequate, relevant and limited to only what is necessary
  • accurate and, where necessary, kept up to date
  • kept for no longer than is necessary
  • handled in a way that ensures appropriate security, including protection against unlawful or unauthorised processing, access, loss, destruction or damage"
GOV.UK

Acaudio's survey is nominally about research institutes not individual people.

However, it asks questions such as

  • "How satisfied are you with…"
  • "How much time do you spend…"
  • "Do you feel like…"
  • "What are the biggest challenges you face…"
  • "Who do you feel is…"
  • "How effective do you think…"
  • "Do you agree…"
  • "What would you consider..."
  • "How much would you consider…"
  • "Would you be interested in…"
  • "How do you decide…"
  • "What do you hope…"

This is information about a person, moreover a person of known email address:

" 'personal data' means any information relating to an identified or identifiable natural person ('data subject'); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier…"

Information Commissioner's Office

So, if information collected by this survey was used for purposes other than the survey itself –

  • say perhaps for identifying sales leads {e.g., "How satisfied are you with the level of awareness people have of your centre / institute?" "How effective do you think your current promotion methods are?"; "How important is building an audience for the work of the research centre / institute?"};
  • and/or profiling potential clients
    • in terms of level of resource that might be available to buy services {e.g., "How much would you consider to be a reasonable amount to spend on promotional activities?"},
    • or priorities for research impact strategies {e.g., "What mediums [sic] would you consider using to promote your research centre / institute?"; "Do you agree it is important to have a dedicated person to take care of promotional activities?"}

– would that not be a breach of UK data protection law?


The first annual International Survey of Gullible Research Centres and Institutes

When is a 'survey' not really a survey? Perhaps, when it is a marketing tool.


Keith S. Taber


A research survey seeks information about a population by collecting data from a sample.
Acaudio's 'survey' seems to seek information about whether particular respondents might be persuaded to buy their services.

Today I received an invitation to contribute to something entitled "the first annual International Survey of Research Centres and Institutes". Despite this impressive title, I decided not to do so.

This was not because I had some doubts about whether it really was 'the first…' (has there never previously been an annual International Survey of Research Centres and Institutes?). Nor was it because I had been invited to represent 'The Science and Technology Education Research Group', which I used to lead – but not since retiring from my Faculty duties.

My main reason for not participating was that I suspected this was a scam. I imagined this might be marketing apparently masquerading as academic research. I include the provisos 'suspected' and 'apparently' as I was not quite sure whether this was actually a poor attempt to mislead participants or just a misjudged attempt at witty marketing. That is, I was not entirely sure if recipients of the invitation were supposed to think this was a serious academic survey.



There is a carpet company that claims that no one knows more about floors than … insert here any of a number of their individual employees. Their claims – taken together – are almost logically impossible, and certainly incredible. I am sure most people let this wash over them – but I actually find it disconcerting that I am not sure if the company is (i) having a logical joke I am supposed to enjoy ('obviously you are not meant to believe claims in adverts, so how about this…'), or (ii) simply lying to me, assuming that I will be too stupid to spot the logical incoherence.

Read 'Floored or flawed knowledge?: A domain with a low ceiling'

Why is this not serious academic research?

My first clue that this 'survey' was not a serious attempt at research was that the invitation was from an email address of 'playlist.manager@acaudio.com', rather than from an academic institute or a learned society. Of course, commercial organisations can do serious academic research, though usually when they are hired to do so on behalf of a named academically-focussed organisation. The invitation made no mention of any reputable academic sponsor.

I clicked on the link to the survey to check for the indicators one finds in quality research. Academic research is subject to ethical norms, such as seeking voluntary informed consent, and any invitation to engage in bona fide academic research will provide information to participants up front (either on the front page of the survey or via a link that can be accessed before starting to respond to any questions). One would expect to be informed, at a minimum:

  • who is carrying out the research (and who for, if it is commissioned) and for what purpose;
  • how data will be used – for example, usually it is expected that any information provided will be treated as confidential, securely stored, and only used in ways that protect the anonymity of participants.

This was missing. Commercial organisations sometimes see information you provide differently, as being a resource that they can potentially sell on. (Thus the recent legislation regulating what can or cannot be done with personal information that is collected by organisations.)

Hopefully, potential participants will be informed about the population being sampled and something of the methodology being applied. In an ideal world an International Survey of Research Centres and Institutes would identify and seek data from all Research Centres and Institutes, internationally. That would be an immense undertaking – and is clearly not viable. Consider:

  • How many 'research centres' are initiated, and how many close down or fade away, internationally, each year?
  • Do they all even have websites? (If not, how are they to be identified?)
  • If so, spread over how many languages?

Even attempting a meaningful annual survey of all such organisations would require a substantive, well-resourced, research team working full-time on the task. Rather, a viable survey would collect data from a sample of all research centres and research institutes, internationally. So, some indication of how a sample has been formed, or how potential participants identified, might be expected.

Read about sampling a population of interest

One of the major limitations of many surveys of large populations is that even if a decent sample size is achieved, such surveys are unlikely to reach a representative sample, or even provide any useful indicators of whether the sample might be representative. For example, information provided by 'a sample of 80 science teachers' tells us next to nothing about 'science teachers' in general if we have no idea how representative that sample is.

It can be a different matter when surveys are undertaken of small, well-defined, populations. A researcher looking to survey the students in one school, for example (perhaps for a consultation about a mooted change in school dress policy), is likely to be in a position to make sure all in the population have the opportunity to respond, and perhaps encourage a decent response rate. They may even be able to see if, for example, respondents reflect the wider population in some important ways (for example, if one got responses from 400/1000 students, one would usually be reasonably pleased, but less so if hardly any of the responses were in, say, the two youngest year groups).
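To make that kind of check concrete, here is a minimal sketch in Python, with entirely invented numbers: it compares the year-group profile of respondents against the whole school roll. The roll sizes and response counts are hypothetical and only illustrate the comparison described above.

```python
# A minimal sketch (invented numbers) of checking whether respondents
# reflect the wider school population by year group.

roll = {"Year 7": 200, "Year 8": 200, "Year 9": 200, "Year 10": 200, "Year 11": 200}
respondents = {"Year 7": 15, "Year 8": 20, "Year 9": 95, "Year 10": 130, "Year 11": 140}

total_roll = sum(roll.values())          # 1000 students in the population
total_resp = sum(respondents.values())   # 400 responses received

print(f"Overall response rate: {total_resp / total_roll:.0%}")
for year in roll:
    share_of_roll = roll[year] / total_roll
    share_of_resp = respondents[year] / total_resp
    print(f"{year}: {share_of_roll:.0%} of roll, {share_of_resp:.0%} of respondents")
```

In this invented case the overall response rate looks respectable, but the breakdown immediately shows the two youngest year groups are barely represented – exactly the kind of mismatch a researcher would want to notice.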

In such a situation there is likely to be a definitive list of members of the population, and a viable mechanism to reach them all. In more general surveys, this is seldom the case. Elections (which can be considered as akin to surveys) might be seen as a particular type of exception. The electoral register potentially lists all those enfranchised to vote, and includes a postal address where each voter can be informed of a forthcoming poll. In this situation, there is a considerable administrative cost of maintaining the register – considered worth paying to support the democratic process – and a legal requirement to register: yet, even here, no one imagines the roll is ever complete and entirely up-to-date.

  • How many of the extant Research Centres and Research Institutes, internationally, had been invited to participate in this survey?
  • And did these invitations reflect the diversity of Research Centres and Institutes, internationally?
    • By geographical location?
    • By discipline?

No such information was provided.

The time-scale for an International Survey of Research Centres and Institutes

To be fair, the invitation email did suggest the 'researchers' would share outcomes with the participants:

"We will release the results over the next month".

But that time-scale actually seemed to undermine the possibility that this initiative was meant as a serious survey. Anyone who has ever undertaken any serious research knows: it takes time.

When planning the stages of a research project, you should keep in mind that everything will likely take longer than you expect…

even when you allow for that.

Not entirely frivolous advice given to research students

Often with surveys, the initial response is weak (filling in other people's questionnaires is seldom anyone's top priority), and it becomes necessary to undertake additional rounds of eliciting participation. It is good practice to promise to provide feedback; but to offer to do this within a month seems, well, foolhardy.

Except, of course, Acaudio are not a research organisation, and the purpose of the 'survey' was, I suggest, not academic research. As becomes clear from the questions asked, this is marketing 'research': a questionnaire to support Acaudio's own marketing.

What does this company do?

Acaudio offer a platform for allowing researchers to upload short audio summaries of their research. Researchers can do this for free. The platform is open-access, allowing anyone to listen. The library is collated with play-lists and search functions. The company provides researchers data on access to their recordings.

This sounds useful, and indeed 'too good to be true' as there are no charges for the service. Clearly, of itself, that would be a lousy business model.

The website explains:

"We also collaborate with publishers and companies. While our services are licensed to these organizations, generating revenue, this approach is slightly different from our collaboration with you as researchers. However, it enables us to maintain the platform as fully open access for our valued users."

https://acaudio.com/faq

So, having established the website, and built up a library of recordings hosted for free (the 'loss leader' as they say 1), the company is now generating income by entering into commercial arrangements with organisations. Another page on their website claims the company has 'signed' 1000 journals and 2000 research centers [sic]. So, alongside the free service, the company is preparing content on behalf of clients to publicise, in effect advertise, their research for them. Nothing terrible there, although one would hope that the research that has the most impact gets that impact on merit, not because some journals and research centres can pay to bring more attention to their work. This business seems similar to those magazines that offer to feature your research in a special glossy article – for a price.

Read 'Research features…but only if you can afford it'

One would like to think that publicly funded researchers, at least, spend the public's money on the actual research, not on playing the impact indicators game by commissioning glossy articles in magazines which would not be any serious scholar's preferred source of information on research. Sadly, since the advent of the Research Assessment Exercise (and its evolution into the 'Research Excellence Framework') vast amounts of useful resource have been spent on both rating research and in playing the games needed to get the best ratings (and so the consequent research income). As is usually the case with anything of this kind (one could even include formal school examinations!), even if the original notion is well-intentioned,

  • the measurement process comes to distort what it is measuring;
  • those seen as competing spend increasing resources in trying to outdo each other in terms of the specifics of the assessment indicators/criteria.

So, as research impact is now considered measurable, and as it is (supposedly) measured, and contributes to university income, there is a temptation to spend money on things that might increase impact. It becomes less important whether a study has the potential to increase human health and happiness; and more important to get it the kind of public/'end user' attention that might ultimately lead to evidence of 'impact' – as this will increase income, and allow the research to continue (and, who knows, perhaps eventually even increase human health and happiness).

What do Acaudio want to know?

Given that background, the content of the survey questionnaire makes perfect sense. After collecting some information on your research centre, there are various questions such as

  • How satisfied are you with the level of awareness people have of your centre / institute?
  • How important is it that the general public are aware of the work your centre / institute does?

I suspect most heads of research centres think it is important people know of their work, and are not entirely satisfied that enough people do. (I suspect academic researchers generally tend to think that their own research is actually (i) more important than most other people realise and (ii) deserves more attention than it gets. That's human nature, surely? Any self-effacing and modest scholars are going to have to learn to sell themselves better, or, if not, they are perhaps unlikely to be made centre/institute heads.)

There are questions about how much time is spent promoting the research centre, and whether this is enough (clearly, one would always want to do more, surely?), and the challenges of doing this, and who is responsible (I suspect most heads of centres feel some such responsibility, without considering it is how they most want to spend their limited time for research and scholarship).

Perhaps the core questions are:

  • Do you agree it is important to have a dedicated person to take care of promotional activities?
  • How much would you consider to be a reasonable amount to spend on promotional activities?

These questions will presumably help Acaudio decide whether you can easily be persuaded to sign up for their help, and what kind of budget you might have for this. (The responses for the latter include an option for spending more than $5000 each year on promotional activities!)

I am guessing that at even $5000+ p.a., they would not actually provide a person dedicated to 'take care of promotional activities' for you, rather than a person dedicated to adding your promotional activities to their existing portfolio of assigned clients!

So, this is a marketing questionnaire.

Is this dishonest?

It seems misleading to call a marketing questionnaire 'the first annual International Survey of Research Centres and Institutes' unless Acaudio are making a serious attempt to undertake a representative survey of Research Centres and Institutes, internationally, and they do intend to publish a full analysis of the findings. "We will release the results over the next month" sounds like a promise to publish, so I will look out with interest for an announcement that the results have indeed been made available.

Lies, delusional lies, and ill-judged attempts at humour

Of course, lying is not simply telling untruths. A person who claims to be Napoleon or Joan of Arc is not lying if that person actually believes that is who they are. Someone who claims they are the best person to run your country is not necessarily lying simply because the claim is false. If the Acaudio people genuinely think they are really doing an International Survey of Research Centres and Institutes then their invitation is not dishonest even if it might betray any claim to know much about academic research.


"I'm [an actor playing] Spartacus";"I'm [an actor playing another character who is not Spartacus, but is pretending to be] Spartacus"; "I'm [another actor playing another character who is also not Spartacus, but is also pretending to be] Spartacus"… [Still from Universal Pictures Home Entertainment movie 'Spartacus']


Nor is it lying, when there is no intent to deceive. Something said sarcastically or as a joke, or in the context of a theatrical performance, is not a lie as long as it is expected that the audience share the conceit and do not confuse it for an authentic knowledge claim. Kirk Douglas, Tony Curtis, and their fellow actors playing rebellious Roman slaves, all knew they were not Spartacus, and that anyone in a cinema watching their claims to be the said Spartacus would recognise these were actors playing parts in a film – and that indeed in the particular context of a whole group of people all claiming to be Spartacus, the aim even in the fiction was actually NOT to identify Spartacus, but to confuse the whole issue (even if being crucified as someone who was only possibly Spartacus might be seen as a Pyrrhic victory 2).

So, given that the claim to be undertaking the first annual International Survey of Research Centres and Institutes was surely, and fairly obviously, an attempt to identify research centres that (a) might be persuaded to purchase Acaudio's services and (b) had budget to pay for those services, I am not really sure this was an attempt to deceive. Perhaps it was a kind of joke, intended to pull in participants, rather than a serious attempt to fool them.

That said, any organisation hoping for credibility among the academic community surely needs to be careful about its reputation. Sending out scam emails that claim to be seeking participants for a research survey that is really a marketing questionnaire seems pretty dubious practice, even if there was no serious attempt to follow through by disguising the questionnaire as a serious piece of research. You might initially approach the questionnaire thinking it was genuine research, but as you worked through it, it SHOULD have dawned that this information was being collected because (i) it is of commercial value to Acaudio, and not (ii) to answer any theoretically motivated research questions.

  • So, is this dishonest? Well, it is not what it claims to be.
  • Does this intend to deceive? If it did, then it was not well designed to hide its true purpose.
  • Is it malpractice? Well, there are rules in the U.K. about marketing emails:

"You're only allowed to send marketing emails to individual customers if they've given you permission.

Emails or text messages must clearly indicate:

  • who you are
  • that you're selling something

Every marketing email you send must give the person the ability to opt out of (or 'unsubscribe from') further emails."

https://www.gov.uk/marketing-advertising-law/direct-marketing

The email from Hussain Ayed, Founder, Acaudio, told me who he, and his organisation, are, but

  • did not clearly suggest he was selling something: he was inviting me to contribute to a research survey (illegal?)
  • Nor was there any option to opt out of further messages (illegal?)
  • And I am not aware of having invited approaches from this company – which might be why it was masquerading as a request to contribute to research (illegal?)

I checked my email system to see if I'd had any previous communication with this company, and found in my junk folder a previous approach, "invit[ing Keith, again] to talk about some of the research being done at The Science and Technology Education Research Group on Acaudio…". It seems my email software can recognise cold calling – as long as it does not claim to be an invitation to respond to a research study.



The earlier email claimed it was advertising the free service…but then invited me to arrange a time to talk to them for 'roughly' 20 minutes. That seems odd, both because the website seems to provide all the information needed; and then why would they commit 20 minutes of their representative's time to talk about a free service? Presumably, they wanted to sell me their premium service. The email footer also gave a business address in E9, London – so the company should know about the UK laws about direct marketing that Acaudio seems to be flouting.

Perhaps not enough people responded to give them 20 minutes of their time, so the new approach skips all that and asks instead for people to "give us 2-3 minutes of your time to fill in the survey [sic 3]".


Original image by Mohamed Hassan from Pixabay


Would you buy a second hand account of research from this man?

In summary, if someone is looking to buy in this kind of support in publicising their work, and has the budget(!), and feels it is acceptable to spend research funds on such services, then perhaps they might fill in the questionnaire and await the response. But I am not sure I would want to get involved with companies which use marketing scams in this way. After all, if they cannot even start a conversation by staying within the law, and being honest about their intentions, then that does not bode well for being able to trust them going forward into a commercial arrangement.


Update (15th October, 2023): Were the outcomes of the first annual International Survey of Research Centres and Institutes published? See 'The sugger strikes back! An update on the 'first annual International Survey of Research Centres and Institutes'


Notes

1 When a shop offers a product at a much discounted price, below the price needed to 'break even', so as to entice people into the shop where they will hopefully buy other goods (at a decent mark-up for the seller), the goods sold at a loss are the 'loss leaders'.

Goods may also be sold at a loss when they are selling very slowly, to make space on the shop floor and in the storeroom for new stock that it is hoped will generate profit. Date-sensitive goods may be sold at a loss because they will soon not be saleable at all (such as perishables) or only at even greater discounts (such as models about to be replaced by updated versions by manufacturers – e.g., iPhones). But loss leader goods are priced low to get people to view other products (so they might be displayed prominently in the window, but only found deep in the shop).


2 In their wars against the armies of King Pyrrhus of Epirus, the Romans lost battles, but in doing so inflicted such heavy and unsustainable losses on the nominally victorious invading army that Pyrrhus was forced to abandon his campaign.

At the end of the slave revolt (a historical event on which the film 'Spartacus' is based) the Romans are supposed to have decided to execute the rebel leader, the escaped gladiator Spartacus, and return the other rebels to slavery. Supposedly, when the Roman official tried to identify Spartacus, each of the recaptured slaves in turn claimed he was Spartacus, thus thwarting identification. So, the ever pragmatic Romans crucified them all.


3 The set of questions is actually a questionnaire which is used to collect data for the survey. Survey (a type of methodology) does not necessarily imply using a questionnaire (a data collection technique) as a survey could be carried out using an observation schedule (i.e., a different data collection technique), for example.

Read about surveys

Read about questionnaires


Reflecting the population

Sampling an "exceedingly large number of students"


Keith S. Taber


the key to sampling a population is identifying a representative sample

Obtaining a representative sample of a population can be challenging
(Image by Gerd Altmann from Pixabay)


Many studies in education are 'about' an identified population (students taking A level Physics examinations; chemistry teachers in German secondary schools; children transferring from primary to secondary school in Scotland; undergraduates majoring in STEM subjects in Australia…).

Read about populations of interest in research

But, in practice, most studies only collect data from a sample of the population of interest.

Sampling the population

One of the key challenges in social research is sampling. Obtaining a sample is usually not that difficult. However, often the logic of research is something along these lines:

  • 1. Aim – to find out about a population.
  • 2. As it is impractical to collect data from the whole population, collect data from a sample.
  • 3. Analyse data collected from the sample.
  • 4. Draw inferences about the population from the analysis of data collected from the sample.

For example, if one wished to do research into the views of school teachers in England and there are, say, 600 000 of them, it is unlikely anyone could undertake research that collected and analysed data from all of them and produce results in a short enough period for the findings to still be valid (unless they were prepared to employ a research team of thousands!) But perhaps one could collect data from a sample that would be informative about the population.

This can be a reasonable approach (and, indeed, is a very common approach in research in areas like education) but relies on the assumption that what is true of the sample, can be generalised to the population.

That clearly depends on the sample being representative of the larger population (at least in those ways which are pertinent to the research).


When a study (as here in the figure an experiment) collects data from a sample drawn at random from a wider population, then the findings of the experiment can be assumed to apply (on average) to the population. (Figure from Taber, 2019.)

In practice, unless a population of interest is quite modest in size (e.g., teachers in one school; post-graduate students in one university department; registered members of a society) it is usually simply not feasible to obtain a random sample.

For example, if we were interested in secondary school students in England, and we had a sample of secondary students from England that (a) reflected the age profile of the population; (b) reflected the gender profile of the population; but (c) were all drawn from one secondary school, this is unlikely to be a representative sample.

  • If we do have a representative sample, then the likely error in generalising from sample to population can be calculated (and can be reduced by having a larger sample);
  • If we do not have a representative sample, then there is no way of knowing how well the findings from the sample reflect the wider population and increasing sample size does not really help; and, for that matter,
  • If we do not know whether we have a representative sample, then, again, there is no way of knowing how well the findings from the sample reflect the wider population and increasing sample size does not really help.

So, the key to sampling a population is identifying a representative sample.

Read about sampling a population

If we know that only a small number of factors are relevant to the research then we may (if we are able to characterise members of the population on these criteria) be able to design a sample which is representative based on those features which are important.

If the relevant factors for a study were teaching subject; years of teaching experience; teacher gender, then we would want to build a sample that fitted the population profile accordingly, so, maybe, 3% female maths teachers with 10+ years of teaching experience, et cetera. We would need suitable demographic information about the population to inform the building of the sample.

We can then randomly select from those members of the population with the right characteristics within the different 'cells'.
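As a rough illustration of building such a sample, the sketch below (Python) groups a hypothetical sampling frame into 'cells' defined by the relevant factors, and then selects at random within each cell in proportion to its share of the population. The factors, categories, and numbers are all invented for the example.

```python
import random

random.seed(1)

# Hypothetical sampling frame: each teacher characterised on the factors
# assumed relevant for an imaginary study.
population = [
    {"id": i,
     "subject": random.choice(["maths", "science", "English"]),
     "experience": random.choice(["<10 years", "10+ years"]),
     "gender": random.choice(["female", "male"])}
    for i in range(600)
]

sample_size = 60

# Group the frame into 'cells' defined by the relevant factors.
cells = {}
for teacher in population:
    key = (teacher["subject"], teacher["experience"], teacher["gender"])
    cells.setdefault(key, []).append(teacher)

# Draw at random within each cell, in proportion to the cell's share of the population.
sample = []
for members in cells.values():
    quota = round(sample_size * len(members) / len(population))
    sample.extend(random.sample(members, min(quota, len(members))))

print(f"Sample of {len(sample)} teachers drawn across {len(cells)} cells")
```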

However, if we do not know exactly what specific features might be relevant to characterise a population in a particular research project, the best we might be able to do is to employ a randomly chosen sample which at least allows the measurement error to be estimated.
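For the random-sample case, the measurement error mentioned here can be estimated with a standard formula. The sketch below uses illustrative figures only: it shows the approximate 95% margin of error for an estimated proportion, and how it shrinks as the sample grows.

```python
import math

# Illustrative only: approximate 95% margin of error for a proportion
# estimated from a simple random sample of size n.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

p_hat = 0.4  # e.g., 40% of a sampled group giving a particular response
for n in (100, 400, 1600):
    print(f"n = {n}: estimate {p_hat:.0%} ± {margin_of_error(p_hat, n):.1%}")
```

Quadrupling the sample size roughly halves the margin of error – but, as noted above, this only holds if the sample really was drawn at random from the population of interest.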

Labs for exceedingly large numbers of students

Leopold and Smith (2020) were interested in the use of collaborative group work in a "general chemistry, problem-based lab course" at a United States university, where students worked in fixed groups of three or four throughout the course. As well as using group work for more principled reasons, "group work is also utilized as a way to manage exceedingly large numbers of students and efficiently allocate limited time, space, and equipment" (p.1). They tell readers that

"the case we examine here is a general chemistry, problem-based lab course that enrols approximately 3500 students each academic year"

Leopold & Smith, 2020, p.5

Although they recognised a wide range of potential benefits of collaborative work, these depend upon students being able to work effectively in groups, which requires skills that cannot be taken for granted. Leopold and Smith report how structured support was put in place to help students diagnose impediments to the effective work of their groups – and they investigated this in their study.

The data collected was of two types. There was a course evaluation at the end of the year taken by all the students in the cohort, "795 students enrolled [in] the general chemistry I lab course during the spring 2019 semester" (p.7). However, they also collected data from a sample of student groups during the course, in terms of responses to group tasks designed to help them think about and develop their group work.

Population and sample

As the focus of their research was a specific course, the population of interest was the cohort of undergraduates taking the course. Given the large number of students involved, they collected qualitative data from a sample of the groups.

Units of analysis

The course evaluation questions sought individual learners' views so for that data the unit of analysis was the individual student. However, the groups were tasked with working as a group to improve their effectiveness in collaborative learning. So, in Leopold and Smith's sample of groups, the unit of analysis was the group. Some data were received from individual group members, and other data were submitted as group responses: but the analysis was on the basis of responses from within the specific groups in the sample.

A stratified sample

Leopold and Smith explained that

"We applied a stratified random sampling scheme in order to account for variations across lab sections such as implementation fidelity and instructor approach so as to gain as representative a sample as possible. We stratified by individual instructors teaching the course which included undergraduate teaching assistants (TAs), graduate TAs, and teaching specialists. One student group from each instructor's lab sections was randomly selected. During spring 2019, we had 19 unique instructors teaching the course therefore we selected 19 groups, for a total of 76 students."

Leopold & Smith, 2020, p.7

The paper does not report how the random assignment was made – how it was decided which group would be selected for each instructor. As any competent scientist ought to be able to make a random selection quite easily in this situation, this is perhaps not a serious omission. I mention this because sadly not all authors who report having used randomisation can support this when asked how (Taber, 2013).
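Since the paper does not say how the selection was made, the following is only a guess at one straightforward way it could have been done: draw one group at random from each instructor's lab sections. The instructor and group identifiers below are invented; only the overall shape (19 strata, one group selected from each) follows the paper.

```python
import random

random.seed(2019)

# Invented identifiers: 19 instructors, each with (say) eight lab-section groups.
groups_by_instructor = {
    f"instructor_{i:02d}": [f"instructor_{i:02d}_group_{g}" for g in range(1, 9)]
    for i in range(1, 20)
}

# One group chosen at random per instructor (i.e., per stratum).
selected = {instructor: random.choice(groups)
            for instructor, groups in groups_by_instructor.items()}

print(f"{len(selected)} groups selected, one per instructor")
```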

Was the sample representative?

Leopold and Smith found that, based on their sample, student groups could diagnose impediments to effective group working, and could often put in place effective strategies to increase their effectiveness.

We might wonder if the sample was representative of the wider population. If the groups were randomly selected in the way claimed then one would expect this would probably be the case – only 'probably', as that is the best randomisation and statistics can do – we can never know for certain that a random sample is representative, only that it is unlikely to be especially unrepresentative!
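A small simulation can illustrate that 'only probably': repeatedly drawing random samples from an invented population shows that most samples land close to the population value, while a few stray some way from it. Everything in this sketch (the population, the sample size of 76, the 'noticeably off' threshold) is chosen purely for illustration.

```python
import random
import statistics

random.seed(0)

# Invented population of 10 000 'scores' with a mean of about 50.
population = [random.gauss(50, 10) for _ in range(10_000)]
pop_mean = statistics.mean(population)

draws = 1_000
unlucky = 0
for _ in range(draws):
    sample = random.sample(population, 76)            # same size as the 76 students
    if abs(statistics.mean(sample) - pop_mean) > 3:   # arbitrary 'noticeably off' margin
        unlucky += 1

print(f"{unlucky} of {draws} random samples missed the population mean by more than 3 points")
```

Typically only a handful of the thousand random samples are that far off – unlikely, but never impossible, which is exactly the point.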

The only way to know for sure that a sample is genuinely representative of the population of interest in relation to the specific focus of a study, would be to collect data from the whole population and check the sample data matches the population data.* But, of course, if it was feasible to collect data from everyone in the population, there would be no need to sample in the first place.

However, because the end of course evaluation was taken by all students in the cohort (the study population) Leopold and Smith were able to see if those students in the sample responded in ways that were generally in line with the population as a whole. The two figures reproduced here seem to suggest they did!


Figure 1 from Leopold & Smith, 2020, p.10, which is published with a Creative Commons Attribution (CC BY) license allowing reproduction.

Figure 2 from Leopold & Smith, 2020, p.10, which is published with a Creative Commons Attribution (CC BY) license allowing reproduction.

There is clearly a pretty good match here. However, it is important to not over-interpret this data. The questions in the evaluation related to the overall experience of group working, whereas the qualitative data analysed from the sample related to the more specific issues of diagnosing and addressing issues in the working of groups. These are related matters but not identical, and we cannot assume that the very strong similarity between sample and population outcomes in the survey demonstrates (or proves!) that the analysis of data from the sample is also so closely representative of what would have been obtained if all the groups had been included in the data collection.


Experiences of learning through group-work:

  • Population: all invited to provide feedback
  • Sample: patterns in data closely reflected population responses

Learning to work more effectively in groups:

  • Sample: data only collected from a sample of groups
  • Population: [it seems reasonable to assume results from sample are likely to apply to the cohort as a whole]

The similarity of the feedback given by students in the sample of groups to the overall cohort responses suggests that the sample was broadly representative of the overall population in terms of developing group-work skills and practices.

It might well have been, but we cannot know for sure. (* The only way to know for sure that a sample is genuinely representative of the population of interest in relation to the specific focus of a study, would be …)

However, the way the sample so strongly reflected the population in relation to the evaluation data shows that, in that (related if not identical) respect at least, the sample was strongly representative, and that is very likely to give readers confidence in the sampling procedure used. If this had been my study I would have been pretty pleased with this circumstantial, but at least strongly suggestive, evidence of the representativeness of the sampling of the student groups.


Work cited:

Counting both the bright and the very dim

What is 1% of a very large, unknown, number?


Keith S. Taber


1, skip 99; 2, skip 99; 3, skip 99; 4,… skip 99, 1 000 000 000!
(Image by FelixMittermeier from Pixabay)

How can we count the number of stars in the galaxy?

On the BBC radio programme 'More or Less' it was mooted that there might be one hundred billion (100 000 000 000) stars in our own Milky Way Galaxy (and that this might be a considerable underestimate).

The estimate was suggested by Prof. Catherine Heymans, who is the Astronomer Royal for Scotland and Professor of Astrophysics at the University of Edinburgh.

Programme presenter Tim Harford was tackling a question sent in by a young listener (who is very nearly four years of age): are there more bees in the world than stars in the galaxy? (Spoiler alert: Prof. Catherine Heymans confessed to knowing less about bees than stars.)


An episode of 'More or Less' asks: Are there more bees in the world or stars in the galaxy?

Harford asked how the 100 billion stars figure was arrived at:

"have we counted them, or got a computer to count them, or is it more a case of, well, you take a photograph of a section of sky and you sort of say well the rest is probably a bit like that?"

The last suggestion here is of course the basis for many surveys. As long as there is good reason to think a sample is representative of the wider population it is drawn from we can collect data from the sample and make inferences about the population at large.

Read about sampling a population

So, if we counted all the detectable stars in a typical 1% of the sky and then multiplied the count by 100 we would get an approximation to the total number of detectable stars in the whole sky. That would be a reasonable method to find approximately how many stars there are in the galaxy, as long as we thought all the detected stars were in our galaxy and that all the stars in our galaxy were detectable.
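As a back-of-the-envelope calculation (the count for the sampled patch is purely an assumed figure, included to show the arithmetic):

```python
# Illustrative arithmetic only: extrapolating a count from 1% of the sky to the whole sky.
# This only works if the sampled patch is typical and all stars are detectable.
sampled_fraction = 0.01                      # fraction of the sky actually counted
stars_counted_in_patch = 1_000_000_000       # assumed count for that patch (hypothetical)

estimated_total = stars_counted_in_patch / sampled_fraction
print(f"Estimated detectable stars: {estimated_total:,.0f}")   # 100,000,000,000
```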

Prof. Heymans replied

"So, we have the European Space Agency Gaia mission up at the moment, it was launched in 2013, and that's currently mapping out 1% of all the stars in our Milky Way galaxy, creating a three dimensional map. So, that's looking at 1 billion of the stars, and then to get an idea of how many others are there we look at how bright all the stars are, and we use our sort of models of how different types of stars live [sic] in our Milky Way galaxy to give us that estimate of how many stars are there."

Prof. Catherine Heymans interviewed on 'More or Less'

A tautology?

This seemed to beg a question: how can we know we are mapping 1% of stars, before we know how many stars there are?

This has the appearance of a tautology – a circular argument.

Read about tautology

To count the number of stars in the galaxy,
  • (i) count 1% of them, and then
  • (ii) multiply by 100.

So,

  • If we assume there are one hundred billion, then we need to
  • count one billion, and then
  • multiply by 100 to give…
  • one hundred billion.

Clearly that did not seem right. I am fairly sure that was not what Prof. Heymans meant. As this was a radio programme, the interview was presumably edited to fit within the limited time allocated for this item, so a listener can never be sure that a question and (apparently immediate and direct) response that makes the edit fully reflects the original conversation.

Counting the bright ones

According to the website of the Gaia mission, "Gaia will achieve its goals by repeatedly measuring the positions of all objects down to magnitude 20 (about 400 000 times fainter than can be seen with the naked eye)." Harford's suggestion that "you take a photograph of a section of sky and you sort of say well the rest is probably a bit like that?" seems very reasonable, until you realise that even with a powerful telescope sent outside of the earth's atmosphere, many of the stars in the galaxy may simply not be detectable. So, what we see cannot be considered to be fully representative of what is out there.

It is not, then, that the scientists have deliberately sampled 1%, but rather that they are investigating EVERY star with an apparent brightness above a certain critical cut-off. Whether a star makes the cut depends on such factors as how bright it is (in absolute terms – which we might imagine we would measure from a standard distance 1) and how close it is, as well as whether the line of sight involves the starlight passing through interstellar dust that absorbs some (or all) of the radiation.
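To make the idea of a brightness cut-off concrete: a star's apparent magnitude m follows from its absolute magnitude M (its brightness as it would appear from a standard distance of 10 parsecs), its distance d, and any extinction A due to intervening dust, via the distance modulus relation m = M + 5 log₁₀(d / 10 pc) + A (remembering that larger magnitudes mean fainter objects). A short sketch, with invented example values:

```python
import math

def apparent_magnitude(absolute_mag: float, distance_pc: float, extinction: float = 0.0) -> float:
    """Distance modulus: m = M + 5*log10(d / 10 pc) + A, with extinction A in magnitudes."""
    return absolute_mag + 5 * math.log10(distance_pc / 10) + extinction

SURVEY_LIMIT = 20.0   # objects detected down to about magnitude 20 (larger = fainter)

# A Sun-like star (absolute magnitude about 4.8) at 8000 parsecs, seen through some dust:
m = apparent_magnitude(4.8, 8000, extinction=1.0)
print(f"apparent magnitude = {m:.1f}, detectable by a magnitude-20 survey: {m <= SURVEY_LIMIT}")
```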

Of course, these are all, strictly speaking, largely unknowns. Astrophysics relies a good deal on bootstrapping, where our best, but still developing, understanding of one feature is used to build models of other features. In such circumstances, observational tests of predictions from theory are often as much testing the underlying foundations upon which a model used to generate a prediction is built as the specific focal model itself. Knowledge moves on incrementally as adjustments are made to different aspects of interacting models.

Observations are theory-dependent

So, this is, in a sense, a circular process, but it is a virtuous circle rather than just a tautology as there are opportunities for correcting and improving the theoretical framework.

In a sense, what I have described here is true of science more generally, and so when an experiment fails to produce a result predicted by a new theory, it is generally possible to seek to 'save' the theory by suggesting the problem was (if not a human error) not in the actual theory being tested, but in some other part of the more extended theoretical network – such as the theory underpinning the apparatus used to collect data or the theory behind the analysis used to treat data.

In most mature fields, however, these more foundational features are generally considered to be sound and unlikely to need modifying – so, a scientist who explains that their experiment did not produce the expected answer because electron microscopes or mass spectrometers or Fourier transform analyses do not work the way everyone has for decades thought they did would need to offer a very persuasive case.

However, compared to many other fields, astrophysics has much less direct access to the phenomena it studies (which are often vast in terms of absolute size, distance and duration), and largely relies on observing without being able to manipulate the phenomena, so understandably faces special challenges.

Why we need a theoretical model to finish the count

Researchers can use our best current theories to build a picture of how what we see relates to what is 'out there' given our best interpretations of existing observations. This is why the modelling that Prof. Heymans refers to is so important. Our current best theories tell us that the absolute brightness of stars (which is a key factor in deciding whether they will be detected in a sky survey) depends on their mass, and the stage of their 'evolution'.2

So, completing the count needs a model which allows data for detectable stars to be extrapolated, bearing in mind our best current understanding about the variations in frequencies of different kinds (age, size) of star, how stellar 'densities' vary in different regions of a spiral galaxy like ours, the distribution of dust clouds, and so forth.


…keep in mind we are off-centre, and then allow for the thinning out near the edges, remember there might be a supermassive black hole blocking our view through the centre, take into account dust, acknowledge dwarf stars tend to be missed, take into account that the most massive stars will have long ceased shining, then take away the number you first thought of, and add a bit for luck… (Image by WikiImages from Pixabay)

I have taken the liberty of offering an edited version of the exchange:

Harford: "have we counted [the hundred billion stars], or got a computer to count them, or is it more a case of, well, you take a photograph of a section of sky and you sort of say well the rest is probably a bit like that?"

Heymans: "So, we have the European Space Agency Gaia mission up at the moment, it was launched in 2013, and that's currently mapping out…all the stars in our Milky Way galaxy [that are at least magnitude 20 in brightness], creating a three dimensional map. So, that's looking at 1 billion of the [brightest] stars [as seen from our solar system], and then to get an idea of how many others are there we look at how bright all the stars are, and we use our models of how different types of stars [change over time 2] in our Milky Way galaxy to give us that estimate of how many stars are there."

No more tautology. But some very clever and challenging science.

(And are there more bees in the world or stars in the galaxy? The programme is available at https://www.bbc.co.uk/sounds/play/m00187wq.)


Note:

1 This issue of what we mean by the brightness of a star also arose in a recent post: Baking fresh electrons for the science doughnut


2 Stars are not alive, but it is common to talk about their 'life-cycles' and 'births' and 'deaths' as stars can change considerably (in brightness, colour, size) as the nuclear reactions at their core change over time once the hydrogen has all been reacted in fusion reactions.

Study reports that non-representative sample of students has average knowledge of earthquakes

When is a cross-sectional study not a cross-sectional study?


Keith S. Taber


A biomedical paper?

I only came to this paper because I was criticising the Biomedical Journal of Scientific & Technical Research's claimed Impact Factor, which seems to be a fabrication. I saw this particular paper being featured in a recent tweet from the journal. The paper is on an important topic – what young people know about how to respond to an earthquake – but I was not sure why it fitted in a biomedical journal.

Respectable journals normally have a clear scope (i.e., the range of topics within which they consider submissions for publication) – whereas predatory journals are often primarily interested in publishing as many papers as possible (and so attracting publication fees from as many authors as possible) and so may have no qualms about publishing material that would seem to be out of scope.

This paper reports a questionnaire about secondary age students' knowledge of earthquakes. It would seem to be an education study, possibly even a science education study, rather than a 'biomedical' study. (The journal invites papers from a wide range of fields 1, some of which – geology, chemical engineering – are not obviously 'biomedical' in nature; but not education.)

The paper reports research (so, I assume, is classed as 'research' in terms of the journal's scale of charges) and comes from Bangladesh (which I assume the journal publishers consider a low income country), and so it would seem that the authors would have been charged $799 to be published in this journal. Part of what authors are supposed to get for that fee is for editors to arrange peer review to provide evaluation of, feedback on, and recommendations for improving, their work.

Peer review

Respectable journals employ rigorous peer review to ensure that only work of quality is published.

Read about peer review

According to the Biomedical Journal of Scientific & Technical Research website:

Peer review process is the system used to assess the quality of a manuscript before it is published online. Independent professionals/experts/researchers in the relevant research area are subjected to assess the submitted manuscripts for originality, validity and significance to help editors determine whether a manuscript should be published in their journal. 

This Peer review process helps in validating the research works, establish a method by which it can be evaluated and increase networking possibilities within research communities. Despite criticisms, peer review is still the only widely accepted method for research validation

Only the articles that meet good scientific standards, explanations, records and proofs of their work presented with Bibliographic reasoning (e.g., acknowledge and build upon other work in the field, rely on logical reasoning and well-designed studies, back up claims with evidence etc.) are accepted for publication in the Journal.

https://biomedres.us/peer-review-process.php

Which seems reassuring. It seems 'Preventive Practice on Earthquake Preparedness Among Higher Level Students of Dhaka City' should then only have been published after evaluation in rigorous peer review. Presumably any weaknesses in the submission would have been highlighted in the review process, helping the authors to improve their work before publication. Presumably, the (unnamed) editor did not approve publication until peer reviewers were satisfied the paper made a valid new contribution to knowledge and, accordingly, recommended publication. 2


The paper was, apparently, submitted; screened by editors; sent to selected expert peer reviewers; evaluated by reviewers, so reports could be returned to the editor who collated them, and passed them to the authors with her/his decision; revised as indicated; checked by editors and reviewers, leading to a decision to publish; copy edited, allowing proofs to be sent to authors for checking; and published, all in less than three weeks.

Although supposedly published in July 2021, the paper seems to be assigned to an issue published a year before it was submitted

One might wonder, though, whether a journal which seems to advertise with an inflated Impact Factor can be trusted to follow the procedures it claims. So, I had a quick look at the paper.

The abstract begins:

The present study was descriptive Cross-sectional study conducted in Higher Secondary Level Students of Dhaka, Bangladesh, during 2017. The knowledge of respondent seems to be average regarding earthquake. There is a found to have a gap between knowledge and practice of the respondents.

Gurung & Khanum, 2021, p.29274

Sampling a population (or not)

So, this seems to be a survey, and the population sampled was Higher Secondary Level Students of Dhaka, Bangladesh. Dhaka has a population of about 22.5 million people. I could not readily find out how many of these might be considered 'Higher Secondary Level', but clearly it will be many, many thousands – I would imagine about half a million as a 'ball-park' figure.


Dhaka has a large population of 'higher secondary level students'
(Image by Mohammad Rahmatullah from Pixabay)

For a survey of a population to be valid it needs to be based on a sample which is large enough to minimise errors in extrapolating to the full population, and (even more importantly) the sample needs to be representative of the population.

Read about sampling

Here:

"Due to time constrain the sample of 115."

Gurung & Khanum, 2021, p.29276

So, the sample size was limited to 115 because of time constraints. This would likely lead to large errors in inferring population statistics from the sample, but could at least give some indication of the population as long as the 115 were known to be reasonably representative of the wider population being surveyed.
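To give a feel for the purely statistical uncertainty (quite apart from the question of representativeness), here is a rough sketch of the 95% margin of error for a proportion estimated from a simple random sample of 115 – the observed proportion used is just an illustrative value, and the study's sample was not, of course, a simple random sample:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g. if about 70% of a simple random sample of 115 students gave a particular response:
p_hat, n = 0.70, 115
print(f"{p_hat:.0%} ± {margin_of_error(p_hat, n):.1%}")   # prints roughly '70% ± 8.4%'
```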

The reader is told

"the study sample was from Mirpur Cantonment Public School and College , (11 and 12 class)."

Gurung & Khanum, 2021, p.29275

It seems very unlikely that a sample taken from any one school among hundreds could be considered representative of the age cohort across such a large city.

Is the school 'typical' of Dhaka?

The school website has the following evaluation by the school's 'sponsor':

"…one the finest academic institutions of Bangladesh in terms of aesthetic beauty, uncompromised quality of education and, most importantly, the sheer appeal among its learners to enrich themselves in humanity and realism."

Major General Md Zahirul Islam

The school Principal notes:

"Our visionary and inspiring teachers are committed to provide learners with all-rounded educational experiences by means of modern teaching techniques and incorporation of diverse state-of-the-art technological aids so that our students can prepare themselves to face the future challenges."

Lieutenant Colonel G M Asaduzzaman

While both of these officers would be expected to be advocates for the school, this does not give a strong impression that the researchers have sought a school that is typical of Dhaka schools.

It also seems unlikely that this sample of 115 reflects all of the students in these grades. According to the school website, there are 7 classes in each of these two grades, so the 115 students were drawn from 14 classes. Interestingly, in each year 5 of the 7 classes are following a science programme 3 – alongside one business studies and one humanities class. The paper does not report which programme(s) were being followed by the students in the sample. Indeed, no information is given regarding how the 115 were selected. (Did the researchers just administer the research instrument to the first students they came across in the school? Were all the students in these grades asked to contribute, and only 115 returned responses?)

Yet, if the paper was seen and evaluated by "independent professionals/experts/researchers in the relevant research area", they seem not to have questioned whether such a small and unrepresentative sample invalidated the study as a survey of the population specified.

Cross-sectional studies

A cross-sectional study examines and compares different slices of a population – so here, different grades. Yet only two grades were sampled, and these were adjacent grades – 11 and 12 – which is not usually ideal to make comparisons across ages.

There could be a good reason to select two grades that are adjacent in this way. However, the authors do not present separate data for year 11 and year 12, but rather pool it. So they make no comparisons between these two year groups. This "Cross-sectional study" was then NOT actually a cross-sectional study.

If the paper did get sent to "independent professionals/experts/researchers in the relevant research area" for review, it seems these experts missed that error.

Theory and practice?

The abstract of the paper claims

"There is a found to have a gap between knowledge and practice of the respondents. The association of the knowledge and the practice of the students were done in which after the cross-tabulation P value was 0.810 i.e., there is not any [statistically significant?] association between knowledge and the practice in this study."

Gurung & Khanum, 2021, p.29274

This seems to suggest that student knowledge (what they knew about earthquakes) was compared in some way with practice (how they acted during an earthquake or earthquake warning). But the authors seem to have only collected data with (what they label) a questionnaire. They do not have any data on practice. The distinction they seem to really be making is between

  • knowledge about earthquakes, and
  • knowledge about what to do in the event of an earthquake.

That might be a useful thing to examine, but any "independent professionals/experts/researchers in the relevant research area" asked to look at the submission do not seem to have noted that the authors do not investigate practice, and so needed to change the descriptions they use and the claims they make.

Average levels of knowledge

Another point that any expert reviewer 'worth their salt' would have queried is the use of descriptors like 'average' in evaluating students' responses. The study concluded that

"The knowledge of earthquake and its preparedness among Higher Secondary Student were average."

Gurung & Khanum, 2021, p.29280

But how do the authors know what counts as 'average'?

This might mean that there is some agreed standard here described in extant literature – but, if so, this is not revealed. It might mean that the same instrument had previously been used to survey nationally or internationally to offer a baseline – but this is not reported. Some studies on similar themes carried out elsewhere are referred to, but it is not clear they used the same instrumentation or analytical scheme. Indeed, the reader is explicitly told very little about the instrument used:

"Semi-structured both open ended and close ended questionnaire was used for this study."

Gurung & Khanum, 2021, p.29276

The authors seem to have forgotten to discuss the development, validation and contents of the questionnaire – and any experts asked to evaluate the submission seem to have forgotten to look for this. I would actually suggest that the authors did not really use a questionnaire, but rather an assessment instrument.

Read about questionnaires

A questionnaire is used to survey opinions, views and so forth – and there are no right or wrong answers. (What type of music do you like? Oh jazz, sorry that's not the right answer.) As the authors evaluated and scored the student responses this was really an assessment.

The authors suggest:

"In this study the poor knowledge score was 15 (13%), average 80 (69.6%) and good knowledge score 20 (17.4%) among the 115 respondents. Out of the 115 respondents most of the respondent has average knowledge and very few 20 (17.4%) has good knowledge about earthquake and the preparedness of it."

Gurung & Khanum, 2021, p.29280

Perhaps this means that the authors had used some principled (but not revealed) technique to decide what counted as poor, average and good.

Score | Description
15 | poor knowledge
80 | average knowledge
20 | good knowledge

Descriptors applied to student scores on the 'questionnaire'

Alternatively, perhaps "poor knowledge score was 15 (13%), average 80 (69.6%) and good knowledge score 20 (17.4%)" is reporting what was found in terms of the distribution in this sample – that is, they empirically found these outcomes in this distribution.

Well, not actually these outcomes, of course, as that would suggest that a score of 20 is better than a score of 80, but presumably that is just a typographic error that was somehow missed by the authors when they made their submission, then missed by the editor who screened the paper for suitability (if there is actually an editor involved in the 'editorial' process for this journal), then missed by expert reviewers asked to scrutinise the manuscript (if there really were any), then missed by production staff when preparing proofs (i.e., one would expect this to have been raised as an 'author query' on proofs 4), and then missed again by authors when checking the proofs for publication.

If so, the authors found that most respondents got fairly typical scores, and fewer scored at the tails of the distribution – as one would expect. On any particular assessment, the average performance is (as the authors report here)…average.
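If the figures are indeed counts rather than scores, the percentages quoted in the abstract follow directly from them; a trivial check (the band labels and counts are taken from the quoted abstract, the arithmetic is mine):

```python
# Counts of respondents in each knowledge band, as quoted in the paper's abstract
counts = {"poor knowledge": 15, "average knowledge": 80, "good knowledge": 20}
total = sum(counts.values())   # 115 respondents

for band, n in counts.items():
    print(f"{band}: {n} ({n / total:.1%})")
# poor knowledge: 15 (13.0%); average knowledge: 80 (69.6%); good knowledge: 20 (17.4%)
```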


Work cited:
  • Gurung, N. and Khanum, H. (2021) Preventive Practice on Earthquake Preparedness Among Higher Level Students of Dhaka City. Biomedical Journal of Scientific & Technical Research, July, 2020, Volume 37, 2, pp 29274-29281

Note:

1 The Biomedical Journal of Scientific & Technical Research defines its scope as including:

  • Agri and Aquaculture 
  • Biochemistry
  • Bioinformatics & Systems Biology 
  • Biomedical Sciences
  • Clinical Sciences
  • Chemical Engineering
  • Chemistry
  • Computer Science 
  • Economics & Accounting 
  • Engineering
  • Environmental Sciences
  • Food & Nutrition
  • General Science
  • Genetics & Molecular Biology
  • Geology & Earth Science
  • Immunology & Microbiology
  • Informatics
  • Materials Science
  • Orthopaedics
  • Mathematics
  • Medical Sciences
  • Nanotechnology
  • Neuroscience & Psychology
  • Nursing & Health Care
  • Pharmaceutical Sciences
  • Physics
  • Plant Sciences
  • Social & Political Sciences 
  • Veterinary Sciences 
  • Clinical & Medical 
  • Anesthesiology
  • Cardiology
  • Clinical Research 
  • Dentistry
  • Dermatology
  • Diabetes & Endocrinology
  • Gastroenterology
  • Genetics
  • Haematology
  • Healthcare
  • Immunology
  • Infectious Diseases
  • Medicine
  • Microbiology
  • Molecular Biology
  • Nephrology
  • Neurology
  • Nursing
  • Nutrition
  • Oncology
  • Ophthalmology
  • Pathology
  • Pediatrics
  • Physicaltherapy & Rehabilitation 
  • Psychiatry
  • Pulmonology
  • Radiology
  • Reproductive Medicine
  • Surgery
  • Toxicology

Such broad scope is a common characteristic of predatory journals.


2 The editor(s) of a research journal is normally a highly regarded academic in the field of the journal. I could not find the name of the editor of this journal although it has seven associate editors and dozens of people named as being on an 'editorial committee'. Whether any of these people actually carry out the functions of an academic editor or whether this work is delegated to non-academic office staff is a moot point.


3 The classes are given names. So, nursery classes include Lotus and Tulip and so forth. In the senior grades, the science classes are called:

  • Flora
  • Neon
  • Meson
  • Sigma
  • Platinam [sic]
  • Argon
  • Electron
  • Neutron
  • Proton
  • Redon [sic]

4 Production staff are not expected to be experts in the topic of the paper, but they do note any obvious omissions (such as missing references) or likely errors and list these as 'author queries' for authors to respond to when checking 'proofs', i.e., the article set in the journal format as it will be published.

What COVID really likes

Researching viral preferences

Keith S. Taber

When I was listening to the radio news I heard a clip of the Rt. Hon. Sajid Javid MP, the U.K. Secretary of State for Health and Social Care, talking about the ongoing response to the COVID pandemic:

Health Secretary Sajid Javid talking on 12th September

"Now that we are entering Autumn and Winter, something that COVID and other viruses, you know, usually like, the prime minister this week will be getting out our plans to manage COVID over the coming few months."

Sajid Javid

So, COVID and other viruses usually like Autumn and Winter (by implication, presumably, in comparison with Spring and Summer).

This got me wondering how we (or Sajid, at least) could know what the COVID virus (i.e., SARS-CoV-2 – severe acute respiratory syndrome coronavirus 2) prefers – what the virus 'likes'. I noticed that Mr Javid offered a modal qualification to his claim: usually. It seemed 'COVID and other viruses' did not always like Autumn and Winter, but usually did.

Yet there was a potential ambiguity here depending how one parsed the claim. Was he suggesting that

[COVID and other viruses] usually like Autumn and Winter

or

COVID [and other viruses usually] like Autumn and Winter

This might have been clearer in a written text as either

COVID and other viruses usually like Autumn and Winter

or

COVID, and other viruses usually, like Autumn and Winter

The second option may seem a little awkward in its phrasing, 1 but then not all viral diseases are more common in the Winter months, and some are considered to be due to 'Summer viruses':

"Adenovirus, human bocavirus (HBoV), parainfluenza virus (PIV), human metapneumovirus (hMPV), and rhinovirus can be detected throughout the year (all-year viruses). Seasonal patterns of PIV are type specific. Epidemics of PIV type 1 (PIV1) and PIV type 3 (PIV3) peak in the fall [Autumn] and spring-summer, respectively. The prevalence of some non-rhinovirus enteroviruses increases in summer (summer viruses)"


Moriyama, Hugentobler & Iwasaki, 2020: 86

Just a couple of days later Mr Javid was being interviewed on the radio, and he made a more limited claim:

Health Secretary Sajid Javid talking on BBC Radio 4's 'Today' programme, 15th September

"…because we know Autumn and Winter, your COVID is going to like that time of year"

Sajid Javid

So, this claim was just about the COVID virus, not viruses more generally, and that we know that COVID is going to like Autumn and Winter. No ambiguity there. But how do we know?

Coming to knowledge

Historically there have been various ways of obtaining knowledge.

  • Divine revelation: where God reveals the knowledge to someone, perhaps through appearing to the chosen one in a dream.
  • Consulting an oracle, or a prophet or some other kind of seer.
  • Intuiting the truth by reflecting on the nature of things using the rational power of the human intellect.
  • Empirical investigation of natural phenomena.

My focus in this blog is related to science, and given that we are talking about public health policy in modern Britain, I would like to think Mr Javid was basing his claim on the last of these options. Of course, even empirical methods depend upon some metaphysical assumptions. For example, if one assumes the cosmos has inbuilt connections one might look for evidence in terms of sympathies or correspondences. Perhaps, if the COVID virus was observed closely and looked like a snowflake, that could (in this mindset) be taken as a sign that it liked Winter.

A snowflake – or is it a virus particle?
(Image by Gerd Altmann from Pixabay)

Sympathetic magic

This kind of correspondence, a connection indicated by appearance, was once widely accepted, so that a plant which was thought to resemble some part of the anatomy might be assumed to be an appropriate medicine for diseases or disorders associated with that part of the body.

This is a kind of magic, and might seem a 'primitive' belief to many people today, but such an idea was sensible enough in the context of a common set of underlying beliefs about the nature and purposes of the world, and the place and role of people in that world. One might expect that specific beliefs would soon die out if, for example, the plant shaped like an ear turned out to do nothing for ear ache. Yet, at a time when medical practitioners could offer little effective treatment, and being sent to a hospital was likely to reduce life expectancy, herbal remedies at least often (if not always) did no harm.

Moreover, many herbs do have medicinal properties, and something with a general systemic effect might work as topical medicine (i.e., when applied to a specific site of disease). Add to that, the human susceptibility to confirmation bias (taking more notice of, and giving more weight to, instances that meet our expectations than those which do not) and the placebo effect (where believing we are taking effective medication can sometimes in itself have beneficial effects) and the psychological support offered by spending time with an attentive practitioner with a good 'bedside' manner – and we can easily see how beliefs about treatments may survive limited definitive evidence of effectiveness.

The gold standard of experimental method

Of course, today, we have the means to test such medicines by taking a large representative sample of a population (of ear ache sufferers, or whatever), randomly dividing them into two groups, and, using a double-blind (or should that be double-deaf?) approach, treating them with either the possible medicine or a placebo, without either the patient or the practitioner knowing who was getting which treatment. (The researchers have a way to know, of course – or it would be difficult to deduce anything from the results.) That is, the randomised control trial (RCT).

Now, I have been very critical of the notion that these kinds of randomised experimental designs should automatically be seen as the preferred way of testing educational innovations (Taber, 2019) – but in situations where control of variables and 'blinding' is possible, and where randomisation can be applied to samples of well-defined populations, this does deserve to be considered the gold standard. (It is when the assumptions behind a research methodology do not apply that we should have reservations about using it as a strategy for enquiry.)

So can the RCT approach be used to find out if COVID has a preference for certain times of year? I guess this depends on our conceptual framework for the research (e.g., how do we understand what a 'like' actually is) and the theoretical perspective we adopt.

So, for example, behaviourists would suggest that it is not useful to investigate what is going on in someone's mind (perhaps some behaviourists do not even think the mind concept corresponds to anything real), so we should observe behaviours that allow us to make inferences. This has to be done with care. Someone who buys and eats lots of chocolate presumably likes chocolate, and someone who buys and listens to a lot of reggae probably likes reggae, but a person who cries regularly, or someone that stumbles around and has frequent falls, does not necessarily like crying, or falling over, respectively.

A viral choice chamber

So, we might think that woodlice prefer damp conditions because we have put a large number of woodlice in choice chambers with different conditions (dry and light, dry and dark, damp and light, damp and dark) and found that there was a statistically significant excess of woodlice settling down in the damp sections of the chamber.
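For instance, whether the settled positions depart from what would be expected if the woodlice had no preference can be checked with a goodness-of-fit test; a minimal sketch using SciPy (the counts are invented for illustration):

```python
from scipy.stats import chisquare

# Hypothetical counts of woodlice settling in each quarter of a choice chamber
observed = [34, 31, 18, 17]   # damp/dark, damp/light, dry/dark, dry/light

# Null hypothesis: no preference, i.e. equal expected counts in every section
result = chisquare(observed)  # expected frequencies default to a uniform distribution

# A small p-value (conventionally < 0.05) would suggest the woodlice are not
# distributing themselves at random across the sections
print(f"chi-square = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```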

Of course, to infer preferences from behaviour – or even to use the term 'behaviour' – for some kinds of entity is questionable. (To think that woodlice make a choice based on what they 'like' might seem to assume a level of awareness that they perhaps lack?) In a cathode ray tube electrons subject to a magnetic field may be observed (indirectly!) to move to one side of the tube, just as woodlice might congregate in one chamber, but I am not sure I would describe this as electrons liking that part of the tube. I think it can be better explained with concepts such as electrical charge, fields, forces, and momentum.

It is difficult to see how we can do double blind trials to see which season a virus might like, as if the COVID virus really does like Winter, it must surely have a way of knowing when it is Winter (making blinding impossible). In any case, a choice chamber with different sections at different times of the year would require some kind of time portal installed between its sections.

Like electrons, but unlike woodlice, COVID viral particles do not have an active form of transport available to them. Rather, they tend to be sneezed and coughed around and then subject to the breeze, or deposited by contact with surfaces. So I am not sure that observing virus 'behaviour' helps here.

So perhaps a different methodology might be more sensible.

A viral opinion poll

A common approach to find out what people like would be a survey. Surveys can sometimes attract responses from large numbers of respondents, which may seem to give us confidence that they offer authentic accounts of widespread views. However, sample size is perhaps less important than sample representativeness. Imagine carrying out a survey of people's favourite football teams at a game at Stamford Bridge; or undertaking a survey of people's favourite bands as people queued to enter a King Crimson concert! The responses may [sic, almost certainly would] not fully reflect the wider population due to the likely bias in such samples. Would these surveys give reliable results which could be replicated if repeated at the Santiago Bernabeu or at a Marillion concert?

How do we know what 'COVID 'really likes?
(Original Images by OpenClipart-Vectors and Gordon Johnson from Pixabay)

A representative sample of variants?

This might cause problems with the COVID-19 virus (SARS-CoV-2). What counts as a member of the population – perhaps a viable virus particle? Can we even know how big the population actually is at the time of our survey? The virus is infecting new cells, leading to new virus particles being produced all the time, just as shed particles become non-viable all the time. So we have no reliable knowledge of population numbers.

Moreover, a survey needs a representative sample: do the numbers of people in a sample of a human population reflect the wider population in relevant terms (be that age, gender, level of educational qualifications, earnings, etc.)? There are viral variants leading to COVID-19 infection – and quite a few of them. That is, SARS-CoV-2 is a class with various subgroups. The variants replicate to different extents under particular conditions, and new variants appear from time to time.

So, the population profile is changing rapidly. In recent months in the UK nearly all infections where the variant has been determined are due to the variant VOC-21APR-02 (or B.1.617.2 or Delta), but many people will be infected asymptomatically or with mild symptoms and not be tested, and so this likely does not mean that VOC-21APR-02 dominates the SARS-CoV-2 population as a whole to the extent it currently dominates in investigated cases. Assuming otherwise would be like gauging public opinion from the views of those particular people who make themselves salient by attending a protest, e.g.:

"Shock finding – 98% of the population would like to abolish the nuclear arsenal,

according to a [hypothetical] survey taken at the recent Campaign for Nuclear Disarmament march"

In any case, surveys are often fairly blunt instruments as they need to present objectively the same questions to all respondents, and elicit responses in a format that can be readily classified into a discrete number of categories. This is why many questionnaires use Likert type items:

Would you say you like Autumn and Winter:

1 | 2 | 3 | 4 | 5
Always | Nearly always | Usually | Sometimes | Never

Such 'objective' measures are often considered to avoid the subjective nature of some other types of research. It may seem that responses do not need to be interpreted – but of course this assumes that the researchers and all the respondents understand language the same way (what exactly counts as Autumn and Winter? What does 'like' mean? How is 'usually' understood – 60-80% of the time, or 51-90% of the time or…). We can usually (sic) safely assume that those with strong language competence will have somewhat similar understandings of terms, but we cannot know precisely what survey participants meant by their responses or to what extent they share a meaning for 'usually'.

There are so-called 'qualitative surveys' which eschew this kind of objectivity to get more in-depth engagement with participants. They will usually use interviews where the researcher can establish rapport with respondents and ask them about their thoughts and feelings, observe non-verbal signals such as facial expressions and gestures, and use follow-up questions… However, the greater insight into individuals comes at a cost of smaller samples as these kinds of methods are more resource-intensive.

But perhaps Mr Javid does not actually mean that COVID likes Autumn and Winter?

So, how did the Department of Health & Social Care, or the Health Secretary's scientific advisors, find out that COVID (or the COVID virus) likes Autumn and Winter? The virus does not think, or feel, and it does not have preferences in the way we do. It does not perceive hot or cold, and it does not have a sense of time passing, or of the seasons.2 COVID does not like or dislike anything.

Mr Javid needs to make himself clear to a broad public audience, so he has to avoid too much technical jargon. It is not easy to pitch a presentation for such an audience and be pithy, accurate, and engaging, but it is easy for someone (such as me) to be critical when not having to face this challenge. Cabinet ministers, unlike science teachers, cannot be expected to have skills in communicating complex and abstract scientific ideas in simplified and accessible forms that remain authentic to the science.

It is easy and perhaps convenient to use anthropomorphic language to talk about the virus, and this will likely make the topic seem accessible to listeners, but it is less clear what is actually meant by a virus liking a certain time of year. In teaching, the use of anthropomorphic language can be engaging, but it can also come to stand in place of scientific understanding when anthropomorphic statements are simply accepted uncritically at face value. For example, if the science teacher suggests "the atom wants a full shell of electrons" then we should not be surprised that students may think this is a scientific explanation, and that atoms do want to fill their shells. (They do not, of course. 3)

Image by Gordon Johnson from Pixabay

Of course Mr Javid's statements cannot be taken as a literal claim about what the virus likes – my point in this posting is to provoke the question of what they might be intended to mean. This is surely intended metaphorically (at least if Mr Javid had thought about his claim critically): perhaps that there is a higher incidence of infection or serious illness caused by the COVID virus in the Winter. But by that logic, I guess turkeys really would vote for Christmas (or Thanksgiving) after all.

Typically, some viruses cause more infection in the Winter when people are more likely to mix indoors and when buildings and transport are not well ventilated (both factors being addressed in public health measures and advice in regard to COVID-19). Perhaps 'likes' here simply means that the conditions associated with a higher frequency/population of virus particles occur in Autumn and Winter?

A snowflake.
The conditions suitable for a higher frequency of snowflakes are more common in Winter.
So do snowflakes also 'like' Winter?
(Image by Gerd Altmann from Pixabay)

However, this is some way from assigning 'likes' to the virus. After all, in evolutionary terms, a virus might 'prefer', so to speak, to only be transmitted asymptomatically, as it cannot be in the virus's 'interests', so to speak, to encourage a public health response that will lead to vaccines or measures to limit the mixing of people.

If COVID could like anything (and of course it cannot), I would suggest it would like to go 'under the radar' (another metaphor) and be endemic in a population that was not concerned about it (perhaps doing so little harm it is not even noticed, such that people do not change their behaviours). It would then only 'prefer' a Season to the extent that that time of year brings conditions which allow it to go about its life cycle without attracting attention – from Mr Javid or anyone else.

Keith S. Taber, September 2021

Addendum: 1st December 2021

Déjà vu?

The health secretary was interviewed on 1st December

"…we have always known that when it gets darker, it gets colder, the virus likes that, the flu virus likes that and we should not forget that's still lurking around as well…"

Rt. Hon. Sajid Javid MP, the U.K. Secretary of State for Health and Social Care, interviewed on BBC Radio 4 Today programme, 1st December, 2021
Works cited:
Footnotes:

1. It would also seem to be a generalisation based on the only two Winters that the COVID-19 virus had 'experienced'

2. Strictly I cannot know what it is like to be a virus particle. But a lot of well-established and strongly evidenced scientific principles would be challenged if a virus particle is sentient.

3. Yet this is a VERY common alternative conception among school children studying chemistry: The full outer shells explanatory principle

Related reading:

So who's not a clever little virus then?

COVID is like a fire because…

Anthropomorphism in public science discourse