Protect the integrity of scholarly writing: an open letter to academic publishers

Keith S. Taber

How do you know the scholar's own words have not been changed by a publisher like OUP? [Image by adege from Pixabay]
I am writing this open letter to ask responsible academic publishers to respect the right of scholars to protect the integrity of their work.

I have recently been invited to sign a contract offered by a major academic publisher – very well established and of high reputation, OUP, a department of the University of Oxford – which asked me to waive one of the moral rights authors get in law: the right to protect the integrity of their work.

Scholars' reputations are in a large part determined by their writing, and therefore it is important to scholars, and those who read and cite them, that the works purporting to be authored by particular academics do actually represent their scholarship. The right to protect the integrity of one's authored works prevents an author's work being substantially changed and yet still presented as their work. Authors who agree to waive this right are allowing publishers to change their work, potentially without their knowledge or approval, yet still present it as the work of that author.

A respectable academic publisher is unlikely ever to deliberately make changes that substantially alter an author's work in ways that misrepresent that author's thinking (and this would not be in their interest). However, in asking authors to waive their right to protect the integrity of their work, such misrepresentation becomes almost inevitable, if inadvertent, once publishers habitually take it upon themselves to modify and update scholarly writing without the input of the named author (as this waiver allows).

When an academic publisher commissions an academic to write a specialised article, they do so because (a) they recognise that the author is an expert who can offer a work that brings the authority of their expertise; and (b) they believe that the wider community will also recognise that the work has been authored by an expert, and so it is in the publisher's interest to be able to publish work under the name of that author. If publishers wish to claim those goods then they need to respect the integrity of the expert's own words. If they do not feel that there is sufficient value in employing named experts, the publisher can instead contract on the basis of a work-made-for-hire, retain the authorship rights (and so the right to modify the work), but not recognise the writers as authors. I suspect most academics would have less interest in contributing on that basis.

Of course, if the intention is to produce authoritative reference works (as was the case where I was recently asked to accept the waiver) there is a good reason to want the readership to think that the article they are reading was written (in the form they are reading it) by an expert. If that is so, then the cost of having named expert authors should be that their work should not be modified without their consent or knowledge.

I appreciate that on-line works offer a potential for updating that is in the interest of all concerned; however, it would be possible to develop an approach

  • (i) which never changes work appearing under a scholar's name without first seeking their input, or at least their approval of the changes, and
  • (ii) which, where this proves impossible, at least indicates to readers where a work has been updated by a party other than the named author.

Such an approach would be more honest with readers of your publications, as well as respecting the rights of your authors and their status as experts.

I am hoping that other academics will appreciate the logic of this argument, and so appreciate the risks to their reputation of selling their name to publishers to use, to give authority to works that could be changed in whatever ways the publisher later feels appropriate. If so, experts will preferentially agree to write for those publishers who find an alternative approach that does not ask authors to waive the protections they are given in law.

Yours faithfully …

Sign a petition

Update – a petition on this issue has been started at https://chn.ge/2wy8Nmd: if you agree that publishers should respect authors' moral rights, then you may wish to sign this petition.

Read more about this topic at Defend the moral right to the integrity of your scholarly work.

First published 11th April 2018 at http://people.ds.cam.ac.uk/kst24/

The application of Cronbach's alpha in Mechatronical Engineering

Keith S. Taber

Image by Tumisu from Pixabay 

Invitation to Join Editorial Board

"Dear …

Thank you for your message, 'Invitation to Join Editorial Board'.

I am pleased that you find my work "The Use of Cronbach's Alpha When Developing and Reporting Research Instruments in Science Education" very interesting. Thank you for your invitation to publish in 'Frontiers of Mechatronical Engineering', to be published by EnPress Publisher.

Thank you also for the invitation to join the Editorial Board of 'Frontiers of Mechatronical Engineering'. I see from your website that the scope of the journal "includes mechanical engineering, electromechanical system[s], industrial engineering, production engineering, robotics, system design, modeling, automative, nuclear engineering, nanotechnology, computer intelligence, and aerospace engineering". This is a diverse range as might be expected from a journal tackling an interdisciplinary field, yet I cannot immediately see that my own expertise would be a good fit for the journal.

Best wishes

Keith"

Read more about 'Journals and poor academic practice'

First published 1st January 2018 at http://people.ds.cam.ac.uk/kst24/

An unpublished Theory of Everything

Keith S. Taber

A TOE? (Image by congerdesign from Pixabay)

Dear Dr. Prof. Tambara Federico


Thank you for sending me your manuscript reporting your "revolutionary" paper

offering your

"own comprehensive, mass-related physical-mathematical Research Study, proposing new scientific data and formulas [sic] with a view to making it possible to unify the four universal interaction fields…, which as a matter of fact cover all possible physical as well as scientific-mathematical aspects and domains of reality itself…"

and incorporating your "FOUR REMARKABLE CONCLUSIVE THEORETICAL CONSIDERATIONS".

You ask that I (and the others among the "500 SCIENTIFIC ADDRESSEES" to whom you sent the paper) "will kindly agree to publish" your "Research Study in Your worldwide famous scientific Reviews and / or Journals as soon as possible". I assume you have contacted me, inter alia, seeking publication in Chemistry Education Research and Practice?

I must decline your request, on several grounds.

Your paper does not seem to be within the scope of the journal. That may seem odd when you propose a TOE (Theory of Everything). I am certainly open to the argument that in principle all academic fields could be reduced to fundamental physics, but not that this is always sensible. So, for example, in chemistry we have concepts such as acidity, resonance, hyperconjugation, oxidation, and so forth. These are probably, in principle, capable of being redescribed in terms of fundamental physics – but any such description is likely to be too cumbersome to be of practical value in chemistry. We have these specifically chemical concepts because the complexity of the phenomena leads to emergent properties that are most usefully considered at the level of chemistry, not physics.

How much more so the concepts related to teaching and learning chemistry! Perhaps pedagogy could (again, in principle) be reduced to physics – but that would be little more than an impressive technical achievement of no practical value. Sadly, a theory of everything tells us very little of value about most things.

Secondly, the journal has peer review processes that need to be followed, and editorial fiat is not used to publish a paper without following these processes. You may well have made major breakthroughs in this fundamental area of science, but science is communal, and your work has no status in the field until other experts have critiqued and evaluated it.

So, thirdly, any submission needs to be made through the journal's on-line review system, allowing proper editorial screening and then – should it be considered suitable (which it would not be in this case, see above) – allowing it to be sent for review.

However, submitting a manuscript for formal review requires you to make a number of declarations. One of these is that the manuscript you wish to be considered has not been published in, is not under review or consideration by, and has not been submitted to, any other journal. As you have adopted a 'scatter gun' approach to submitting your work, you would need to wait until you have received formal notification that the other 499 scholarly outlets approached are declining your manuscript before you could make a formal submission.

As you are concerned that unless your work is published it may be plagiarised, I suggest you deposit your paper in one of the many repositories now available for posting unpublished documents. This will make your work available and will demonstrate your priority in anything that may later be judged (in peer review) by experts in the field to be novel in your work.


First published 12th March 2017 at http://people.ds.cam.ac.uk/kst24/

Publish at speed, recant at leisure

Keith S. Taber

Image by InstagramFOTOGRAFIN from Pixabay

In scholarly circles it is sometimes said "Publish, or be damned" (a variation on the 'Publish and be damned' retort to blackmail), and there is no doubt that, generally, success in academia – when judged in such mundane terms as getting academic positions, keeping them, and getting promoted – in large part requires academics to build up their publication list.

The value of peer review

Quality should obviously be more important than quantity, but that requires evaluations of the former. The peer review process that is used by most journals is far from perfect, but is said (like democracy) to be, despite its flaws, the best system we have. Quality journals depend upon rigorous peer review to ensure that published articles will be recognised within a field as robust and significant. Peer review takes time. Some publishers pressure editors and reviewers to work to a short time scale – but there is always a fine balance: the review needs to be done carefully, and by experts, rather than either hastily, or by those who are not really qualified but will have a go because they want the reviewing on their c.v. (résumé).

Authors are under pressure to publish, and publish quickly. (Indeed I am aware that recently university employees in one country were put under pressure to get published quickly even if that meant compromising where the work was published).

Peer review not only leads to rejection of substandard work, but also provides feedback on submitted work that could be published, suggesting where improvements are indicated. Such suggestions are inevitably somewhat subjective – but I think most editors would agree that generally the peer review process improves the quality of the final published article – even if it often delays the process by some months and requires authors to go back and do further work when they might have hoped to move on to the next article they want to write.

Ultimately peer review (when done carefully by qualified reviewers) means not only that there is quality control that rejects poor work, but that several people scrutinise published work, point out any mistakes missed by authors, and often suggest changes that will lead to work more likely to impress the readership and have lasting impact. As authors we should welcome rigorous peer review of our work – even if sometimes we do not agree with the criticisms of our precious writing.

Instant gratification – immediate publication

Since the advent of on-line publishing, it has become very easy to start a journal, and the number of journals out there has proliferated. (Read 'Challenges to academic publishing from the demand for instant open access to research'.) This makes it hard sometimes, especially for new scholars, to know which journals are of high quality. Many journal publishers are looking to get a competitive edge by speeding up processes. I know from my own role as an editor that it is now possible, sometimes, to get a paper from submission to publication in around a month – but this is still the exception and a quality journal will never look to speed up the process by deliberately selecting referees who are not thorough or avoiding author revisions that are indicated.

So I was rather surprised to receive, on 15th April, an invitation entitled 'Publish your paper in May issue' from a journal I was not familiar with, the International Educational Scientific Research Journal. The idea that I could submit something and have it published two weeks later seemed unlikely if there was any kind of robust peer review process in place. However, the email suggested that the

"Last date for manuscript submission is 30th April, 2016 for 1st May, 2016 Issue".

Really?

Dubious impact factors

Of course it is quite possible that the 1st May issue may not appear till September (that would not be a first in journal publishing), but otherwise this seemed to shout "we publish anything, quality not an issue". This is a journal which charges fees to authors – and the homepage suggests that the cost depends on the length of the manuscript and (oddly) the number of authors. However, the email also claimed an impact factor of 3.606.

If I was a new scholar I would likely be very impressed by an impact factor of over 3, as I know many quality specialist journals in my field with much lower impact factors. However, visiting the webpage revealed that the impact factor had not been awarded by Thomson Reuters, the organisation used by most quality journals, but rather by 'SJIF'.

The impact factors used by top journals reflect how many times (on average) each of their published papers is cited in articles in highly ranked journals over a given period. I found the webpage of the SJIF, and found that it evaluates journals on a wide range of criteria, such as the number of papers published, the language of papers, the quality of graphics, and many other things. Some of the criteria used are certainly relevant to journal quality – but this type of evaluation is not comparable to the impact factors that are recognised and used by the top academic publishers.

The academic publishing landscape is very diverse today. The possibility of open publishing, and the easy access to tools to publish internet journals, is to be welcomed – but makes it more difficult for scholars to know which journals are genuinely of quality. There is certainly no intrinsic value in a journal having slow processes, and all authors welcome a speedy review and publication process. Ultimately, however, 'submit today, publish tomorrow' is likely to mean ignored thereafter.

Read more about 'Journals and poor academic practice'

(First published 24th April 2016 at http://people.ds.cam.ac.uk/kst24/)

Unintentionally padding the publications list…with a Schrödinger article

Keith S. Taber

One aspect of academic life that has never sat easily with me is having to be a publicity officer and marketing manager for oneself as a scholar. Not only does this involve keeping good records of everything one has contributed that might conceivably count in seeking a post or promotion and so forth, but also making a case for just how important one's work is – its supposedly seminal status, and its inconceivably incredible impact – by seeking out and reporting various indicators. Perhaps it is something about being British, but it is one aspect of the role that I am pleased to be leaving behind.

Presentation is important. Yet, of course, one must retain one's integrity. One might need to display one's contributions in the best possible light, but lying is clearly not acceptable. Unlike on TV's 'The Apprentice' – where hopeful future entrepreneurs making up achievements, or exaggerating beyond any possible justification, in their applications is presented as entertainment (a little naughty, but it shows they are committed and enthusiastic) – this would, I assume, clearly be unacceptable in the real world of work. If that is idealistic, it is certainly not okay in the Academy.

A confession

Yet I must confess that for some years it seems I have been padding my publications list with an article that, I now find, may never have been published. Inadvertently, obviously.

But how can a scholar seriously claim they thought they had been published, when, actually, they never were? What possible defence can there be?

But ('your honour') it is true: I found out some days ago that one of my 2012 publications [sic] may not have actually been a publication at all – as it may never have been published. So my publications list may have misrepresented the status of this possibly previously* unpublished work. (*I've just 'published' it myself on the website – but of course that would not count for much in the hiring-and-tenure-and-promotions game.)

So how did this situation develop?

The invitation

On the 8th September 2009 I was invited by an esteemed colleague:

As Editor-in-Chief of the Encyclopledia of the Sciences of Learning (to be published in 2011 by Springer Publ.) and in accordance with strong recommendations of the editorial advisory board I am wondering if you would be interested in joining the Encyclopedia's list of contributors…If you have the time and inclination, the Editorial Advisory Board and I would very much appreciate your support. If you agree to serve as an author on the topic(s) on molecular conceptions of research into learning

Now I was somewhat surprised to be asked to write on that topic, but assumed that someone must have spotted a lecture I had given some years earlier as the Royal Society of Chemistry's Chemical Education Research Group's annual CERG lecture, on the theme: Molar and molecular conceptions of research into learning chemistry: towards a synthesis.

The files were lost in production

The deadline was 21st January 2010, and I got the submission completed just before Christmas 2009. I was sent reviewer comments, and completed a revision in June 2010. I assumed all was well.

The article is shown as coded green: Manuscript accepted

However, when I saw in May 2012 that the book was published, my article did not seem to be listed. So I contacted the publisher:

Would you be able to give me an update on the current status of this project? On the project website the contribution I was invited to provide is shown as green (manuscript accepted) [see image above], and according to the Springer site the Encyclopedia is now published. However, I cannot see how to access the material (perhaps my institution does not have access (yet?), or perhaps the on-line version is not yet available?), and I cannot see my contribution (Molecular conceptions of research into learning Regular Entry 00394 894/894) in the downloadable contents. Please could you advise?

The publisher responded: "I can´t find the entry either. This is very strange as it should be there. I will check with the production team and the company that runs the website for us."

The next day came the bad news. Well, given the kinds of things that happen in the world it was only slightly bad, but as publications are seen as so important for academics, and given that I'd done the work, it was pretty disappointing!

I´m terribly sorry to tell you that your article was the only one that has not been received by the production vendor team although we sent it to them. That means that we didn't even had a proof of your article. We didn´t notice that before for which we apologize deeply. For the current printed book Encyclopedia of the Sciences of Learning it is much too late to get your article in. Please accept my apologies.

A second invitation

But I was offered a partial corrective:

We have developed a new publication platform for our Reference Works that launched in August last year. As the Encyclopedia is part of the Springer Reference Works series it is also published as Online Version …The Encyclopedia on that platform can be seen as a living book as authors and editors can add articles or update old ones at any time….Would you be interested to write/update your article on SpringerReference as part of the electronic/online version of the Encyclopedia of the Sciences of Learning?

Yes, I would. The next day I was formally invited to write my article, once more:

Thank you very much for taking part in Encyclopedia of the Sciences of Learning…With this e-mail we invite you to SpringerReference.com as author of the following topic(s): Molecular Conceptions of Research into Learning…

At SpringerReference.com, we offer an extremely innovative way of publishing that allows authors to keep articles constantly updated and make their writing efforts immediately visible online. … When the article is complete, simply click 'Submit' to submit your content. By doing so, your contribution will become immediately visible on the site for other users to read and cite–you do not need to wait until the deadline is reached should you have been given one.

The article was (now) due on July 17th, 2012. There were some technical issues using the on-line submission system, but an assistant at the publisher helped sort the problems relating to an image file and cross-links with other contributions. Finally, by the 9th July, the files were uploaded and some formatting issues were sorted, and I was told by the publisher's assistant: "You can confirm it and click on the ACCEPT button".

I did not have access to an accept button, but was able to reply "Thank you – I have clicked the submit button."

Looking back now, I am wondering if perhaps I did need to click on the accept button – except I did not have such a button on my screen. Perhaps the publisher did have an accept button at their end, and the production assistant did not realise that I did not, or had intended to accept the article later and for some reason…

However, as I could see the article on line (see below), I assumed all was well. After all, I had been told that "When the article is complete, simply click 'Submit' to submit your content. By doing so, your contribution will become immediately visible on the site for other users to read and cite…"

The article showed as being part of the on-line Reference collection, with its own DOI

As I am retiring from teaching, I have been building up this website as a central point for various things I've worked on, and have been reviewing the publications I've written over the years (some of which I have no files for, as the manuscripts were typed and submitted in hard copy in those days). When I came to check the current location of this reference article, I could not find it on line, nor indeed any reference to it in a search. Well, that is not absolutely true: I found it was referred to in a list of my publications, but nowhere else. So I reached out to the publisher:

The files were lost (again) in a migration

The publisher could not find my article, nor indeed any record of it.

I'm sorry to say that after scouring our repositories and email we've failed to find your entry "Molecular conceptions of research into learning".  Encyclopedia of the Sciences of Learning was developed on older platforms we decommissioned years ago, and email archives go back only so many years, I am sorry to say. (16th June 2020)

It seems the publisher had decided the original system, a wiki-based platform, did not fully meet their needs, and that something more sophisticated was needed: a "file- and book-based publishing workflow" that worked better for academic publishing. That is perfectly understandable.

I can also understand that if the publisher had never accepted the submission at their end, then it might be quite possible that the files would be missed completely when the published materials were transferred to a new platform. It seems the files were lost in that process.

A Schrödinger publication?

On the other hand, if it was true that "by [clicking submit], your contribution will become immediately visible on the site for other users to read and cite" then it is not obvious to me why the files were not transferred – unless the transfer was based on a list of the files at the time of the original publication. If that is the case then the "living book [where] authors and editors can add articles or update old ones at any time" died, and reverted to the state of the initial 2012 publication, when the transfer took place.

I am not sure what the moral of this story is. After writing, submitting, and revising an invited article, having the publisher lose it; resubmitting it, with a lot of back and forth sorting format issues…it is disappointing to know all that effort may have come to nought.

In retrospect, perhaps when I emailed to say "I have clicked the submit button" I should have written "I have clicked the submit button as I do not have an ACCEPT button, so if an ACCEPT button needs to be clicked please advise, or otherwise confirm no further action is needed at my end". Usually I tend to be pedantic and explicit – and so, I suspect, annoying. Annoyingly, on one of the few occasions I let my guard down, being my usual annoyingly pedantic self might have stopped the article being lost a second time.

On the other hand, perhaps my article WAS published on-line in 2012 (when I could access it), but then lost some years later when the resource was transferred to a new platform. That would be a bit like having a chapter in an edited book that has gone out of print (which would not stop an academic including it on a publications list) – except that the only copies of my article which would have survived would have been any downloaded onto individual computers whilst the original platform was live.

Perhaps I should label this a Schrödinger publication – and consider it as an entanglement of two states – ⎮published but no longer available⎮ / ⎮accepted for publication but never published⎮ – as there does not seem to be any observation I can make now which would collapse the wave function.

So I am not entirely sure whether my entry in the encyclopaedia actually never was a real publication (given the publisher has no record of it), or is just not a currently available publication (given that it was submitted according to the instructions that were supposed to make it live immediately). That makes keeping an accurate publications list quite challenging.

Why write about Cronbach's alpha?

Keith S. Taber

What is Cronbach's alpha?

It is a statistic that is commonly quoted by researchers when reporting the use of scales and questionnaires.

Why carry out a study of the use of this statistic?

I am primarily a qualitative researcher, so do not usually use statistics in my own work. However, I regularly came across references to alpha in manuscripts I was asked to review for journals, and in manuscripts submitted to the journal I was editing myself (i.e., Chemistry Education Research and Practice).

I did not really understand what alpha was, or what it was supposed to demonstrate, or what value was desirable – which made it difficult to evaluate that aspect of a manuscript citing the statistic. So, I thought I had better find out more about it.

So, what is Cronbach's alpha?

It is a statistic that tests for internal consistency in scales. It should only be applied to a scale intended to measure a unidimensional factor – something it is assumed can be treated as a single underlying variable (perhaps 'confidence in physics learning', 'enjoyment of school science practicals', or 'attitude to genetic medicine').

If someone developed a set of questionnaire items intended to find out, say, how skeptical a person was regarding scientific claims in the news, and administered the items to a sample of people, then alpha would offer a measure of the similarity of the set of items in terms of the patterns of responses from that sample. As the items are meant to be measuring a single underlying factor, they should all elicit similar responses from any individual respondent. If they do, then alpha would approach 1 (its maximum value).
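For readers who like to see the arithmetic, here is a minimal sketch (my illustration, not something from the original paper) of how alpha is calculated from a respondents-by-items score matrix; the response data are invented for the example.

```python
# Cronbach's alpha from a respondents-by-items matrix of scores:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# (Illustrative sketch with invented data, not from the original study.)
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = respondents, columns = items in one unidimensional scale."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert-style responses (5 respondents x 4 items) that
# pattern together, as items measuring a single factor should:
responses = np.array([[4, 5, 4, 5],
                      [2, 1, 2, 2],
                      [3, 3, 4, 3],
                      [5, 4, 5, 4],
                      [1, 2, 1, 2]])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # approaches 1 for consistent items
```

If the responses to the items did not pattern together, the variance of the summed scores would shrink towards the sum of the item variances, and alpha would fall towards zero.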

Does alpha not measure reliability?

Often, studies state that alpha is measuring reliability – as internal consistency is sometimes considered a kind of reliability. However, more often in research what we mean by reliability is that repeating the measurements later will give us (much) the same result – and alpha does not tell us about that kind of reliability.

I think there is a kind of metaphorical use of 'reliability' here. The technique derives from an approach used to test equivalence based on dividing the items in a scale into two subsets*, and seeing whether analysis of the two subsets gives comparable results – so one could see if the result from the 'second' measure reliably reproduced that from the 'first' (but of course the ordering of the two calculations is arbitrary, and the two subsets of items were actually administered at the same time as part of a single scale).

* In calculating alpha, all possible splits are taken into account.
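To illustrate that footnote (again my sketch, run on simulated data, not anything from the paper): for an even number of items, Cronbach's 1951 result is that alpha equals the mean of the Flanagan-Rulon split-half coefficients averaged over every possible division of the items into two equal halves.

```python
# Alpha as the mean of all equal-size split-half coefficients (Cronbach, 1951).
# Simulated data for illustration: 200 respondents, 4 items sharing one factor.
import itertools
import numpy as np

rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
scores = factor + rng.normal(size=(200, 4))       # item = common factor + noise

k = scores.shape[1]
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - scores.var(axis=0, ddof=1).sum() / total_var)

# Flanagan-Rulon coefficient for one split: 2 * (1 - (var_a + var_b) / var_total)
splits = [list(h) for h in itertools.combinations(range(k), k // 2) if 0 in h]
coeffs = []
for half in splits:                               # each distinct split counted once
    other = [i for i in range(k) if i not in half]
    var_a = scores[:, half].sum(axis=1).var(ddof=1)
    var_b = scores[:, other].sum(axis=1).var(ddof=1)
    coeffs.append(2 * (1 - (var_a + var_b) / total_var))

print(f"alpha = {alpha:.4f}")                     # the two agree: it is an
print(f"mean split-half = {np.mean(coeffs):.4f}") # algebraic identity
```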

Okay, so that's what alpha is – but, still, why carry out a study of the use of this statistic?

Once I understood what alpha was, I was able to see that many of the manuscripts I was reviewing did not seem to be using it appropriately. I got the impression that alpha was not well understood among researchers even though it was commonly used. I felt it would be useful to write a paper that both highlighted the issues and offered guidance on good practice in applying and reporting alpha.

In particular, studies would often cite alpha for broad constructs like 'understanding of chemistry', where it seems obvious that we would not expect understanding of pH, understanding of resonance in benzene, understanding of oxidation numbers, and understanding of the mass spectrometer to be the 'same' thing (or, if they are, we could save a lot of time and effort by reducing exams to a single question!)

It was also common for studies using instruments with several different scales to not only quote alpha for each scale (which is appropriate), but to also give an overall alpha for the whole instrument even though it was intended to be multidimensional. So imagine a questionnaire which had a section on enjoyment of physics, another on self-confidence in genetics, and another on attitudes to science-fiction elements in popular television programmes: why would a researcher want to claim there was a high level of internal consistency across what are meant to be such distinct scales?

There was also incredible diversity in how different authors describe different values of alpha they might calculate – so the same value of alpha might be 'acceptable' in one study, 'fairly high' in another, and 'excellent' in a third (see figure 1).


Fig. 1 Qualitative descriptors used for values/ranges of values of Cronbach's alpha reported in papers in leading science education journals (The Use of Cronbach's Alpha When Developing and Reporting Research Instruments in Science Education)

Some authors also suggested that a high value of alpha for an instrument implied it was unidimensional – that all the items were measuring the same thing – which is not the case.

But isn't it the number that matters: we want alpha to be as high as possible, and at least 0.7?

Yes, and no. And no, and no.

But the number matters?

Yes of course, but it needs to be interpreted for a reader: not just 'alpha was 0.73'.

But the critical value is 0.7, is that right?

No.

It seems extremely common for authors to assume that they need alpha to reach, or exceed, 0.7 for their scale to be acceptable. But that value seems to be completely arbitrary (and was not what Cronbach was suggesting).

Well, it's a convention, just as p<0.05 is commonly taken as a critical value.

But it is not just like that. Alpha is very sensitive to how many items are included in a scale. If there are only a few items, then a value of, say, 0.6 might well be sensibly judged acceptable. In any case, it is nearly always possible to increase alpha by adding more items until you reach 0.7, as the sketch below illustrates.
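One quick way to see this length-dependence (my illustration, not taken from the paper) is the 'standardized' form of alpha, which can be written from the mean inter-item correlation: holding item quality fixed, alpha rises towards 1 simply as items are added.

```python
# Standardized alpha from the mean inter-item correlation r:
#   alpha = k*r / (1 + (k-1)*r)
# (My illustration of the length-dependence, not from the original paper.)
def standardized_alpha(k: int, mean_r: float) -> float:
    return k * mean_r / (1 + (k - 1) * mean_r)

for k in (3, 5, 8, 12, 20):
    print(f"{k:>2} items, mean inter-item r = 0.25 -> "
          f"alpha = {standardized_alpha(k, 0.25):.2f}")
# 3 items: 0.50; 5: 0.62; 8: 0.73; 12: 0.80; 20: 0.87 -
# the same modest item quality clears the conventional 0.7 simply by adding items.
```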

But only if the added items genuinely fit the scale?

Sadly, no.

Adding a few items that are similar to each other, but not really fitting the scale, would usually increase alpha. So adding 'I like Manchester United', 'Manchester United are the best soccer team', and 'Manchester United are great' as items to be responded to in a scale about self-efficacy in science learning would likely increase alpha.

Are you sure: have you tried it?

Well, no. But, as I pointed out above, instruments often contain unrelated scales, and authors would sometimes calculate an overall alpha that they found to be greater than that of each of the component scales – at least, that would be the implication if it were assumed that a larger alpha means a higher internal consistency, without factoring in how alpha tends to be larger the more items are included in the calculation.

But still, it is clear that the bigger alpha the better?

Up to a point.

But consider a scale with five items where everybody responds to each item in exactly the same way (that is, not that different people respond in the same way as each other, but that whatever response a person gives to one item – e.g., 2 on a scale of 1-7 – they also give to the other items). So alpha should be 1, as high as it can get. But Cronbach would suggest you are wasting researcher and participant effort by having many items if they all elicit the same response. The point of scales having several items is that we assume no one item directly catches perfectly what we are trying to measure. Whether they do or not, there is no point in multiple items that are effectively equivalent.
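As a toy check of that degenerate case (my addition, not part of the original post):

```python
# Degenerate case: each person gives one and the same response to all five
# items (while respondents differ from one another), so four items are redundant.
# (Toy illustration of the point above, not from the original study.)
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]
    return (k / (k - 1)) * (1 - scores.var(axis=0, ddof=1).sum()
                            / scores.sum(axis=1).var(ddof=1))

people = np.array([2, 5, 3, 7, 4, 6])        # one response level per person (1-7 scale)
scores = np.tile(people[:, None], (1, 5))    # repeated identically across all 5 items
print(cronbach_alpha(scores))                # 1.0 - maximal alpha, minimal information
```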

Was it necessary to survey science education journals to make the point?

I did not originally think so.

My draft manuscript made the argument by drawing on some carefully selected examples of published papers in relation to the different issues I felt needed to be highlighted and discussed. I think the draft manuscript effectively made the point that there were papers getting published in good journals that quoted alpha but seemed to simply assume it demonstrated something (unexplained) to readers, and/or used alpha when their instrument was clearly not meant to be unidimensional, and/or took 0.7 as a definitive cut-off regardless of the number of items concerned, and/or quoted alpha values for overall instruments as well as for the distinct scales as if that added some evidence of instrument quality, or claimed that a high value of alpha for an instrument demonstrated it was unidimensional.

So why did you then spend time reviewing examples across four journals over a whole year of publication?

Although I did not think this was necessary, when the paper was reviewed for publication a journal reviewer felt the paper was too anecdotal: that just because a few papers included weak practice did not mean the problem was especially significant. I think there was also a sense that a paper critiquing a research technique did not fit the usual categories of study published in the journal, whereas a study with more empirical content (even if the data were published papers) better fitted the journal.

At that point I could have decided to try and get the paper published elsewhere, but Research in Science Education is a good journal and I wanted the paper in a good science education journal. This took extra work, but satisfied the journal.

I still think the paper would have made a contribution without the survey, BUT the extra work did strengthen the paper. In retrospect, I am happy that I responded to the review comments in that way – as it did actually show just how frequently alpha is used in science education, and the wide variety of practice in reporting the statistic. Peer review is meant to help authors improve their work, and I think it did here.

Has the work had impact?

I think so, but…

The study has been getting a lot of citations, and it is always good to think someone notices a study, given the work it involves. Perhaps a lot of people have genuinely thought about their use of alpha as a result of reading the paper, and perhaps there are papers out there which do a better job of using and reporting alpha as a result of authors reading my study. (I would like to think so.)

However, I have also noticed that a lot of papers citing this study as an authority for using alpha in the reported research are still doing the very things I was criticising, and sometimes directly justifying poor practice by citing my study! These authors either had not actually read the study (but were just looking for something about alpha to cite) or perhaps did not fully appreciate the points made.

Oh well, I think it was Oscar Wilde who said there is only one thing in academic life worse than being miscited…

'Correcting' for plagiarism

Keith S. Taber

Dear Stephen

Thank you for your email message from 'Competent Proofreaders' about the services provided by SPRINGEREDIT (viewpublishers@gmail.com / scrutinyeditors@gmail.com).

I would be interested in learning a little more about exactly what your service entails. I am pretty clear what is involved in 'Proofreading' and 'translating'. But I see you also offer 'correcting for plagiarism'. I wonder if you could tell me a little more about what your service involves here – what I would get for my 40 USD/1000 words?

My understanding is that plagiarism is when an author uses the ideas of another scholar as if their own – without acknowledging the original source. This can be considered not only poor scholarship, but academic malpractice, so I certainly understand why I should be careful to avoid plagiarism in any work I submit for publication. I can therefore see why a service that could correct for plagiarism might be worth investing in, as this could protect scholarly reputation (or in extreme cases, academic employment!)

But I cannot immediately see how you could help me with this. If I asked you to proofread a draft paper, then I know what to expect and I can see that it is very likely that a thorough proofread could technically improve my text. What would you actually do, however, if I submitted a draft paper for 'correcting for plagiarism' – how would you identify any plagiarism (that I might myself not be aware of) and correct it? What exactly would I be paying for?

Best wishes

Keith

(Read more about plagiarism)

Rich scientific content: improving the quality of manuscripts on behalf of authors

Keith S. Taber

Dear Ksenia …

Thank you for your email message headed "vacancy for an editor or a reviewer of the journal for [sic]" and beginning "Vacancy for an editor or a reviewer of the journal". Thank you for offering to answer all my questions. The introduction to your message initially gave me the impression that you have a vacancy for an editor or a reviewer for some journal – albeit one you were too modest (or embarrassed!) to specify. However, as I read on, it seems that is not the situation?

I learn that you "assist Professors from the United Arab Emirates, China, Viet Nam [sic], Russia to publish their scientific papers in the journals indexed in Scopus database or Web of Science database", and that you work with a "skillful [sic] team" and collectively "control and improve the quality of the papers that [you] receive from the authors" so as to ensure that "only manuscripts with good English language, rich scientific content and appropriate formatting style are sent to the Editors / Publishers by [your] team". So, if I understand correctly, I think you have set yourself up as a kind of intermediary between the authors of scientific papers, and the journals they wish to submit their work to?

You tell me that you "have [sic] already cooperate with good Editors and some well-known publishers". I wonder which publishers these are – would there be some I know and have worked with?

I see that you are looking for new partners and invite me to "publish some good manuscripts from my [i.e., your] side". I thought initially that you wanted me to send you my papers so you could improve them for me. But I think rather you may be asking me to help you publish papers you have already improved for other authors, so they are now good manuscripts. Is that correct?

I am a little unsure about the precise service that you and your team offer. I do certainly recognise that for authors working in English as an additional language, it can be very valuable for someone who is a native English speaker to offer some help with grammar and syntax, although this needs to be done without introducing semantic changes to manuscripts. I think there are already many organisations offering editing and proof-reading services of this kind – although they can only do this accurately when the intended meaning of the text in the original manuscript is entirely clear.

I am intrigued, however, about how you are able to "control and improve the quality of the papers that [you] receive" in terms of "rich scientific content". I would like to learn more about this. Does this mean that your team act as if they were independent peer-reviewers, advising authors on which manuscripts are likely to be judged suitable for publication by reputable journals? I can see that would be a feasible service, but wonder what one of your client authors would think of your service if your team 'rejected' their work as lacking sufficient quality to be placed in any of the journals that you send the improved manuscripts on to?

Or, alternatively, are your team actually able to help authors by ensuring that manuscripts which lack sufficient "rich scientific content" when submitted to your service are sufficiently improved in quality so they will later be judged as having "rich scientific content" when they are subsequently "sent to the Editors / Publishers by [your] team"? That would be a much more challenging task, and I would be very interested to know how the team can improve the quality of authors' works in that sense without having been intimately involved in the studies being reported.

I look forward to learning more about your team and services.

Best wishes

Keith

Dear 

Vacancy for an editor or a reviewer of the journal.

Hope this mail finds you well and in a good heath.
In order to save your time I will try to be concise and brief.

My name is Ksenia.
I assist Professors from the United Arab Emirates, China, Viet Nam, Russia to publish their scientific papers in the journals indexed in Scopus database or Web of Science database.
Together with my skillful team we control and improve the quality of the papers that we receive from the authors.
Only manuscripts with good English language, rich scientific content and appropriate formatting style are sent to the Editors / Publishers by my team (officially to the website of the journal or directly to the editor's mailbox). 

We have already cooperate with good Editors and some well-known publishers, but we want to find some more new partners for long fruitful cooperation.
I will be glad if you publish some good manuscripts from my side. 

If you are interested in this, please, let me know. I will forward all required information to you and answer all your questions.

Will be happy to hear from you soon.
Have a nice day.

P.S. Sorry for bothering you if you find this letter useless and not interesting for you.

Regards, ...