Study reports that non-representative sample of students has average knowledge of earthquakes

When is a cross-sectional study not a cross-sectional study?


Keith S. Taber


A biomedical paper?

I only came to this paper because I was criticising the Biomedical Journal of Scientific & Technical Research's claimed Impact Factor, which seems to be a fabrication. I saw this particular paper being featured in a recent tweet from the journal and wondered how it fitted in a biomedical journal. The paper is on an important topic – what young people know about how to respond to an earthquake – but I was not sure why it belonged in this particular journal.

Respectable journals normally have a clear scope (i.e., the range of topics within which they consider submissions for publication) – whereas predatory journals are often primarily interested in publishing as many papers as possible (and so attracting publication fees from as many authors as possible) and so may have no qualms about publishing material that would seem to be out of scope.

This paper reports a questionnaire about secondary age students' knowledge of earthquakes. It would seem to be an education study, possibly even a science education study, rather than a 'biomedical' study. (The journal invites papers from a wide range of fields 1, some of which – geology, chemical engineering – are not obviously 'biomedical' in nature; but not education.)

The paper reports research (so, I assume, is classed as 'research' in terms of the scale of charges) and comes from Bangladesh (which I assume the journal publishers consider a low income country), and so it would seem that the authors would have been charged $799 to be published in this journal. Part of what authors are supposed to get for that fee is for editors to arrange peer review to provide evaluation of, feedback on, and recommendations for improving, their work.

Peer review

Respectable journals employ rigorous peer review to ensure that only work of quality is published.

Read about peer review

According to the Biomedical Journal of Scientific & Technical Research website:

Peer review process is the system used to assess the quality of a manuscript before it is published online. Independent professionals/experts/researchers in the relevant research area are subjected to assess the submitted manuscripts for originality, validity and significance to help editors determine whether a manuscript should be published in their journal. 

This Peer review process helps in validating the research works, establish a method by which it can be evaluated and increase networking possibilities within research communities. Despite criticisms, peer review is still the only widely accepted method for research validation

Only the articles that meet good scientific standards, explanations, records and proofs of their work presented with Bibliographic reasoning (e.g., acknowledge and build upon other work in the field, rely on logical reasoning and well-designed studies, back up claims with evidence etc.) are accepted for publication in the Journal.

https://biomedres.us/peer-review-process.php

Which seems reassuring. It seems 'Preventive Practice on Earthquake Preparedness Among Higher Level Students of Dhaka City' should then only have been published after evaluation in rigorous peer review. Presumably any weaknesses in the submission would have been highlighted in the review process, helping the authors to improve their work before publication. Presumably, the (unnamed) editor did not approve publication until peer reviewers were satisfied the paper made a valid new contribution to knowledge and, accordingly, recommended publication. 2


The paper was, apparently, submitted; screened by editors; sent to selected expert peer reviewers; evaluated by reviewers, so reports could be returned to the editor who collated them, and passed them to the authors with her/his decision; revised as indicated; checked by editors and reviewers, leading to a decision to publish; copy edited, allowing proofs to be sent to authors for checking; and published, all in less than three weeks.

Although supposedly published in July 2021, the paper seems to be assigned to an issue published a year before it was submitted

One might wonder, though, whether a journal which seems to advertise with an inflated Impact Factor can be trusted to follow the procedures it claims. So, I had a quick look at the paper.

The abstract begins:

The present study was descriptive Cross-sectional study conducted in Higher Secondary Level Students of Dhaka, Bangladesh, during 2017. The knowledge of respondent seems to be average regarding earthquake. There is a found to have a gap between knowledge and practice of the respondents.

Gurung & Khanum, 2021, p.29274

Sampling a population (or not)

So, this seems to be a survey, and the population sampled was Higher Secondary Level Students of Dhaka, Bangladesh. Dhaka has a population of about 22.5 million people. I could not readily find out how many of these might be considered 'Higher Secondary Level', but clearly it will be many, many thousands – I would imagine about half a million as a 'ball-park' figure.


Dhaka has a large population of 'higher secondary level students'
(Image by Mohammad Rahmatullah from Pixabay)

For a survey of a population to be valid, it needs to be based on a sample which is large enough to minimise errors in extrapolating to the full population, and (even more importantly) the sample needs to be representative of the population.

Read about sampling

Here:

"Due to time constrain the sample of 115."

Gurung & Khanum, 2021, p.29276

So, the sample size was limited to 115 because of time constraints. This would likely lead to large errors in inferring population statistics from the sample, but could at least give some indication of the population as long as the 115 were known to be reasonably representative of the wider population being surveyed.
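To get a sense of what 'large errors' means here: as a minimal sketch, assuming simple random sampling (which, to be clear, the paper does not claim to have used), a sample of 115 carries a worst-case margin of error of roughly ±9 percentage points at the 95% confidence level:

```python
import math

# Worst-case margin of error for a simple random sample of n = 115
# at the 95% confidence level. Purely illustrative -- it assumes
# random sampling, which the study does not report using.
n = 115          # sample size reported in the paper
z = 1.96         # critical value for 95% confidence
p = 0.5          # worst-case assumption for a population proportion
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/-{moe:.1%}")  # about +/-9.1 percentage points
```

So even under ideal sampling conditions, any percentage reported from a sample of this size could plausibly be out by around nine percentage points either way.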

The reader is told:

"the study sample was from Mirpur Cantonment Public School and College , (11 and 12 class)."

Gurung & Khanum, 2021, p.29275

It seems very unlikely that a sample taken from any one school among hundreds could be considered representative of the age cohort across such a large city.

Is the school 'typical' of Dhaka?

The school website has the following evaluation by the school's 'sponsor':

"…one the finest academic institutions of Bangladesh in terms of aesthetic beauty, uncompromised quality of education and, most importantly, the sheer appeal among its learners to enrich themselves in humanity and realism."

Major General Md Zahirul Islam

The school Principal notes:

"Our visionary and inspiring teachers are committed to provide learners with all-rounded educational experiences by means of modern teaching techniques and incorporation of diverse state-of-the-art technological aids so that our students can prepare themselves to face the future challenges."

Lieutenant Colonel G M Asaduzzaman

While both of these officers would be expected to be advocates for the school, this does not give a strong impression that the researchers have sought a school that is typical of Dhaka schools.

It also seems unlikely that this sample of 115 reflects all of the students in these grades. According to the school website, there are 7 classes in each of these two grades, so the 115 students were drawn from 14 classes. Interestingly, in each year 5 of the 7 classes are following a science programme 3 – alongside one business studies class and one humanities class. The paper does not report which programme(s) were being followed by the students in the sample. Indeed, no information is given regarding how the 115 were selected. (Did the researchers just administer the research instrument to the first students they came across in the school? Were all the students in these grades asked to contribute, and only 115 returned responses?)

Yet, if the paper was seen and evaluated by "independent professionals/experts/researchers in the relevant research area", they seem not to have questioned whether such a small and unrepresentative sample invalidated the study as a survey of the specified population.

Cross-sectional studies

A cross-sectional study examines and compares different slices of a population – so here, different grades. Yet only two grades were sampled, and these were adjacent grades – 11 and 12 – which is not usually ideal for making comparisons across ages.

There could be a good reason to select two grades that are adjacent in this way. However, the authors do not present separate data for year 11 and year 12, but rather pool it. So they make no comparisons between these two year groups. This "Cross-sectional study" was then NOT actually a cross-sectional study.

If the paper did get sent to "independent professionals/experts/researchers in the relevant research area" for review, it seems these experts missed that error.

Theory and practice?

The abstract of the paper claims

"There is a found to have a gap between knowledge and practice of the respondents. The association of the knowledge and the practice of the students were done in which after the cross-tabulation P value was 0.810 i.e., there is not any [statistically significant?] association between knowledge and the practice in this study."

Gurung & Khanum, 2021, p.29274

This seems to suggest that student knowledge (what they knew about earthquakes) was compared in some way with practice (how they acted during an earthquake or earthquake warning). But the authors seem to have only collected data with (what they label) a questionnaire. They do not have any data on practice. The distinction they seem to really be making is between

  • knowledge about earthquakes, and
  • knowledge about what to do in the event of an earthquake.

That might be a useful thing to examine, but any "independent professionals/experts/researchers in the relevant research area" asked to look at the submission do not seem to have noted that the authors do not investigate practice, and so needed to change the descriptions they use and the claims they make.
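(For context: the 'cross-tabulation P value' the authors cite is presumably the result of a chi-squared test of association on a contingency table, along the lines of the sketch below. The counts here are invented purely for illustration – the paper does not reproduce the table behind its reported p value.)

```python
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation of knowledge category against
# 'practice' category. These counts are invented for illustration;
# they are not taken from the paper.
observed = [
    [ 5, 10],   # poor knowledge:    low / high 'practice'
    [38, 42],   # average knowledge: low / high 'practice'
    [ 9, 11],   # good knowledge:    low / high 'practice'
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")
# A large p value (the paper reports 0.810) offers no grounds for
# claiming an association between the two sets of categories.
```

Of course, whatever such a test shows, here it could only test the association between two knowledge measures – not between knowledge and actual practice.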

Average levels of knowledge

Another point that any expert reviewer 'worth their salt' would have queried is the use of descriptors like 'average' in evaluating students' responses. The study concluded that

"The knowledge of earthquake and its preparedness among Higher Secondary Student were average."

Gurung & Khanum, 2021, p.29280

But how do the authors know what counts as 'average'?

This might mean that there is some agreed standard here described in extant literature – but, if so, this is not revealed. It might mean that the same instrument had previously been used to survey nationally or internationally to offer a baseline – but this is not reported. Some studies on similar themes carried out elsewhere are referred to, but it is not clear they used the same instrumentation or analytical scheme. Indeed, the reader is explicitly told very little about the instrument used:

"Semi-structured both open ended and close ended questionnaire was used for this study."

Gurung & Khanum, 2021, p.29276

The authors seem to have forgotten to discuss the development, validation and contents of the questionnaire – and any experts asked to evaluate the submission seem to have forgotten to look for this. I would actually suggest that the authors did not really use a questionnaire, but rather an assessment instrument.

Read about questionnaires

A questionnaire is used to survey opinions, views and so forth – and there are no right or wrong answers. (What type of music do you like? Oh jazz, sorry that's not the right answer.) As the authors evaluated and scored the student responses this was really an assessment.

The authors suggest:

"In this study the poor knowledge score was 15 (13%), average 80 (69.6%) and good knowledge score 20 (17.4%) among the 115 respondents. Out of the 115 respondents most of the respondent has average knowledge and very few 20 (17.4%) has good knowledge about earthquake and the preparedness of it."

Gurung & Khanum, 2021, p.29280

Perhaps this means that the authors had used some principled (but not revealed) technique to decide what counted as poor, average and good.

Score   Description
15      poor knowledge
80      average knowledge
20      good knowledge

Descriptors applied to student scores on the 'questionnaire'

Alternatively, perhaps "poor knowledge score was 15 (13%), average 80 (69.6%) and good knowledge score 20 (17.4%)" is reporting what was found in terms of the distribution in this sample – that is, they empirically found these outcomes in this distribution.

Well, not actually these outcomes, of course, as that would suggest that a score of 20 is better than a score of 80, but presumably that is just a typographic error that was somehow missed by the authors when they made their submission, then missed by the editor who screened the paper for suitability (if there is actually an editor involved in the 'editorial' process for this journal), then missed by expert reviewers asked to scrutinise the manuscript (if there really were any), then missed by production staff when preparing proofs (i.e., one would expect this to have been raised as an 'author query' on proofs 4), and then missed again by authors when checking the proofs for publication.
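A quick arithmetic check supports this reading of the figures as counts rather than scores – they sum to the sample size, and each matches its quoted percentage of 115:

```python
# Check that the reported figures behave like counts of respondents
# falling into each band, rather than like test scores.
counts = {"poor": 15, "average": 80, "good": 20}
print(sum(counts.values()))               # 115 -- the whole sample
for label, count in counts.items():
    print(f"{label}: {count / 115:.1%}")  # 13.0%, 69.6%, 17.4%
```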

If so, the authors found that most respondents got fairly typical scores, and fewer scored at the tails of the distribution – as one would expect. On any particular assessment, the average performance is (as the authors report here)…average.


Work cited:
  • Gurung, N. and Khanum, H. (2021) Preventive Practice on Earthquake Preparedness Among Higher Level Students of Dhaka City. Biomedical Journal of Scientific & Technical Research, 37(2), July 2020, pp. 29274-29281

Notes:

1 The Biomedical Journal of Scientific & Technical Research defines its scope as including:

  • Agri and Aquaculture 
  • Biochemistry
  • Bioinformatics & Systems Biology 
  • Biomedical Sciences
  • Clinical Sciences
  • Chemical Engineering
  • Chemistry
  • Computer Science 
  • Economics & Accounting 
  • Engineering
  • Environmental Sciences
  • Food & Nutrition
  • General Science
  • Genetics & Molecular Biology
  • Geology & Earth Science
  • Immunology & Microbiology
  • Informatics
  • Materials Science
  • Orthopaedics
  • Mathematics
  • Medical Sciences
  • Nanotechnology
  • Neuroscience & Psychology
  • Nursing & Health Care
  • Pharmaceutical Sciences
  • Physics
  • Plant Sciences
  • Social & Political Sciences 
  • Veterinary Sciences 
  • Clinical & Medical 
  • Anesthesiology
  • Cardiology
  • Clinical Research 
  • Dentistry
  • Dermatology
  • Diabetes & Endocrinology
  • Gastroenterology
  • Genetics
  • Haematology
  • Healthcare
  • Immunology
  • Infectious Diseases
  • Medicine
  • Microbiology
  • Molecular Biology
  • Nephrology
  • Neurology
  • Nursing
  • Nutrition
  • Oncology
  • Ophthalmology
  • Pathology
  • Pediatrics
  • Physicaltherapy & Rehabilitation 
  • Psychiatry
  • Pulmonology
  • Radiology
  • Reproductive Medicine
  • Surgery
  • Toxicology

Such broad scope is a common characteristic of predatory journals.


2 The editor of a research journal is normally a highly regarded academic in the field of the journal. I could not find the name of the editor of this journal, although it has seven associate editors and dozens of people named as being on an 'editorial committee'. Whether any of these people actually carry out the functions of an academic editor, or whether this work is delegated to non-academic office staff, is a moot point.


3 The classes are given names. So, nursery classes include Lotus and Tulip and so forth. In the senior grades, the science classes are called:

  • Flora
  • Neon
  • Meson
  • Sigma
  • Platinam [sic]
  • Argon
  • Electron
  • Neutron
  • Proton
  • Redon [sic]

4 Production staff are not expected to be experts in the topic of the paper, but they do note any obvious omissions (such as missing references) or likely errors and list these as 'author queries' for authors to respond to when checking 'proofs', i.e., the article set in the journal format as it will be published.