How much damage can eight neutrons do?

Scientific literacy and desk accessories in science fiction

Keith S. Taber


Is the principle of conservation of mass that is taught in school science falsified all the time?


I am not really a serious sci-fi buff, but I liked Star Trek (perhaps in part because it was the first television programme I got to see in colour 1) and I did enjoy Blakes7 when it was broadcast by the BBC (from 1978-1981).



Blakes7 was made with the same kind of low budget production values of Dr Who of the time. Given that space scenes in early episodes involved what seemed to be a flat image of a spacecraft moving across a star field with no sense of depth or perspective (for later series someone had built a model), and in one early episode the crew were clearly given angle-poise lamps to control the craft, it was certainly not a case of 'no expense spared'. So, it was never quite clear if the BBC budget had also fallen short of a possessive apostrophe in the show title credits or Blakes7 was to be read in some other way.

After all, it was not made explicit who was part of Blake's 7 if that was what the title meant, and no one referred to "Blake's 7" in the script (perhaps reflecting how the doctor in Dr Who was not actually called Dr Who?).


The Blakes7 team on the flight desk of the Liberator – which was the most advanced spaceship in the galaxy (and was, for plot purposes, conveniently found drifting in space without a crew) – at least until they forgot to clean the hull once too often and it corroded away while they were on an away mission.

Blake's group was formed from a kind of prison break and so Blake was something of a 'rough-hero' – but not as much as his sometime unofficial lieutenant, sometime friend, sometime apparent rival, Avon, who seemed to be ruled by self-interest (at least until the script regularly required some act of selfless heroism from him). 'Rough-heroes' are fictional characters presented in the hero role but who have some traits that the audience are likely to find morally questionable if not repugnant.

As well as Blake (a rebel condemned as a traitor, having 'recovered' from brainwashing-supported rehabilitation to rebel again) and Avon (a hacker convicted of a massive computer fraud intended to make himself extremely rich), the rest of the original team were a smuggler, a murderer and a petty thief, to which was added a terrorist (or freedom fighter if you prefer) picked up on an early mission. That aside, they seemed an entirely reasonable and decent bunch, and they set out to rid the galaxy of the Federation's tyrannical oppression. At least, that was Blake's aspiration, even if most of his companions seemed to see this as a stop-gap activity till they had decided on something with more of a long-term future.

At the end of one season, where the fight with the Federation was temporarily put aside to deal with an intergalactic incursion, Blake went AWOL (well, intergalactic wars can be very disruptive) and was assumed dead/injured/lost/captured/?… for much of the remaining run without affecting the nature of the stories too much.

Among its positive aspects for its time were strong (if not exactly model) roles for women. The main villain, Servalan, was a woman – Supreme Commander of the Federation security forces (and later Federation president).


As the ruthless Supreme Commander of the Federation security forces, Servalan got to wear whatever she liked (a Kid Creole, or Mel and Kim, look comes to mind here) and could insist her staff wore hats that would not upstage hers

In Blake's original team (i.e., 7?), his pilot is a woman. (Reflecting other SciFi series, the spacecraft used by Blakes7 require n crew members to operate effectively, where n is an integer that varies between 0 and 6 depending on the specific plot requirements of an episode.) In a later series, after Avon has taken over the role of 'ipso facto leader-among-equals', the group recruits a female advanced weapons designer/technologist and a female sharpshooter.


The Blakes7 team later in the run. (Presumably they are checking the monitor and having a quick recount.) Was Soolin (played by Glynis Barber, far right) styled as a subtle reference to the 'Seven Samurai'?

When I saw Blakes7 was getting a rerun recently I re-watched the series I had not seen since it was first aired. Despite very silly special effects, dodgy story-lines, and morally questionable choices (the series would make a great focus for a philosophy class) the interactions between the main characters made it an enjoyable watch.

But, it is not science

Of course, the problem with science fiction is that it is fiction, not science. Star Trek may have prided itself on seeking to at least make the science sound feasible, but that is something of an outlier in the genre.

Egrorian and his young assistant Pinder (unfortunately prematurely aged somewhat by a laboratory mishap) show Avon and Vila around their lab.

This is clear, for example, in an episode called 'Orbit' where Avon discusses the tachyon funnel, an 'ultimate weapon', with Egrorian, a renegade scientist. Tachyons are hypothetical particles that travel faster than the speed of light. The theory of special relativity suggests the speed of light is the theoretical maximum speed anything can have, but some other theories suggest tachyons may exist in some circumstances. As always in science, theories that are widely accepted as our current best understanding of some aspect of nature (e.g., relativity) remain open to modification or replacement if new evidence suggests this is needed.

In the Blakes7 universe, there seemed to be a surprisingly high frequency of genius scientists/engineers who had successfully absconded from the tyrannical and paranoid Federation with sufficient resources to build private research facilities on various obscure deserted planets. Although these bases are secret and hidden away, and the scientists concerned have normally been missing for years or even decades, it usually transpires that the Blakes7 crew and the Federation manage to locate any particular renegade scientist during the same episode.

This is part of the exchange between this particular flawed genius scientist and our flawed and reluctant 'rough hero', Kerr Avon:

Egrorian: You've heard of Hoffal's radiation?

Avon: No.

Egrorian: Ah… Hoffal had a unique mind. Over a century ago he predicted most of the properties that would be found in neutron material.

Avon: Neutron material?

Egrorian: Material from a neutron star. That is a… a giant sun which has collapsed and become so tightly compressed that its electrons and protons combine, making neutrons.

Avon: I don't need a lecture in astrophysics. [But presumably the scriptwriter felt the audience would need to be told this.]

Egrorian: When neutrons are subjected to intense magnetic force, they form Hoffal's radiation. Poor Pinder [Egrorian's lab assistant] was subjected for less than a millionth of a second. He aged 50 years in as many seconds. …

Avon: So neutrons are part of the tachyon funnel.

Egrorian: Um, eight of them … form the core of the accelerator.

From the script of 'Orbit' (c) 1981 by the British Broadcasting Corporation – made available 'for research purposes'

Now, for anyone with any kind of science background such dialogue stretches credibility. Chadwick discovered the neutron in everyday matter in 1932, so the neutron's properties could be explored without having to obtain samples from a neutron star – which would certainly be challenging. When bound in nuclei, neutrons (which are electrically neutral, thus the name, and so not usually affected by magnetic fields) are stable.

Thinking at the scale of a neutron

However, any suspension of disbelief (which fiction demands, of course) was stretched past breaking point at the end of this exchange. Not only were the generally inert neutrons the basis of a weapon that could destroy whole worlds – but the core of the accelerator was formed of, not a neutron star, nor a tonne of 'neutron matter', but eight neutrons (i.e., one for each member of Blake's 7, with one left over?).

That is, the intensely destructive beam of radiation that could destroy a planet from a distant solar system was generated by subjecting to a magnetic field: a core equivalent to (the arguably less interesting) half of a single oxygen atomic nucleus.


Warning – keep this away from strong magnetic fields if you value your planet! (Image by Gerd Altmann from Pixabay )

Now free neutrons (that is, outside of an atomic nucleus – or neutron star) are unstable, and decay on a timescale of around a quarter of an hour (that is, the half-life is of this order – following the exponential decay familiar from other kinds of radioactivity), to give a proton, an electron and a neutrino. The energy 'released' in this process is significant on the scale of a subatomic particle: 782 343 eV, or nearly eight hundred thousand eV.

Eight hundred thousand seems a very large number, but the unit here is the electron volt, a unit used for processes at this submicroscopic scale. (An eV is the amount of work done when a single electron is moved through a potential difference of 1 V – this is about 1.6 x 10^-19 J.) In the more familiar units of joules, this is about 1.25 x 10^-13 J. That is,

0.000 000 000 000 125 J

To boil enough water at room temperature to make a single cup of tea would require about 67 200 J. 2 So, if the energy from decaying neutrons were used to boil the water, it would require the decay of about

538 000 000 000 000 000 neutrons.3

That is just to make one cup of tea, so imagine how many more neutrons would have to decay to provide the means to destroy a planet. Certainly, one would imagine,

more than 8.
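As a quick check on this arithmetic, here is a short Python sketch using the figures quoted above (the variable names are my own; the eV-to-joule conversion is the standard rounded value):

```python
# How many free-neutron decays would it take to boil the water for one cup of tea?

EV_TO_J = 1.602e-19         # one electron volt in joules
DECAY_ENERGY_EV = 782_343   # energy released per neutron decay, in eV

# Energy per decay in joules (~1.25 x 10^-13 J, as in the text)
energy_per_decay_j = DECAY_ENERGY_EV * EV_TO_J

# Energy to heat 200 g of water through 80 K (note 2): 4.2 J/(g K) x 80 K x 200 g
tea_energy_j = 4.2 * 80 * 200   # 67 200 J

neutrons_needed = tea_energy_j / energy_per_decay_j
print(f"Energy per decay: {energy_per_decay_j:.3e} J")
print(f"Decays needed for one cup of tea: {neutrons_needed:.3e}")  # ~5.4e17
```

Small differences from the figure quoted in the text come only from rounding the energy per decay.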

E = mc^2

Now since Einstein (special relativity, again), mass and energy have been considered to have an equivalence. It is commonly thought that mass can be converted to energy, and that the equation E = mc^2 tells you how much of one is converted to the other: how many J per kg, or kg per J. (Spoiler alert – this is not quite right.)

In that way of thinking, the energy released by a free neutron when it decays is due to a tiny part of the neutron's mass being converted to energy.

The neutron's mass defect

The mass (or so-called 'rest mass') of a neutron is about 1.67 x 10^-27 kg. In the usual mode of decay the neutron gives rise to a proton (which is nearly, but not quite, as heavy as a neutron), an electron (which is much lighter), and a neutrino (strictly, an antineutrino – considered here to have zero rest mass).


Before decay    Rest mass / 10^-31 kg        After decay    Rest mass / 10^-31 kg
neutron         16 749.3                     proton         16 726.2
                                             electron       9.1
                                             neutrino       (zero rest mass)
total           16 749.3                     total          16 735.3

[Rest] mass defect in neutron decay

So, it seems some mass has disappeared. (And this is the mass sometimes said to have been converted into the released energy.) This might lead us to ask whether Hoffal's discovery was a way to completely annihilate neutrons, so that instead of a tiny proportion of their mass being converted to energy, as in neutron decay, all of it was.

Mass as latent energy?

However, when considered from the perspective of special relativity, it is not that mass is being converted to energy in processes such as neutron decay, but rather that mass and energy are considered as different aspects of something more unified – 'mass-energy', if you like. Energy in a sense carries mass, and mass in a sense is a manifestation of energy. The table above may mislead because it only refers to 'rest mass', and that does not tell us all we need to know.

When the neutron decays, the products move apart, so have kinetic energy. According to the principle of mass-energy equivalence there is always a mass equivalence of any energy. So, in relativity, a moving object has more mass than when it is at rest. That is, the 'mass defect' table shows what the mass would be if we compared a motionless neutron with motionless products, not the actual products.

The theory of special relativity boldly asserts that mass and energy are not the independent quantities they were once thought to be. Rather, they are two measures of a single quantity. Since that single quantity does not have its own name, it is called mass-energy, and the relationship between its two measures is known as mass-energy equivalence. We may regard c^2 as a conversion factor that enables us to calculate one measurement from the other. Every mass has an energy-equivalent and every energy has a mass-equivalent. If a body emits energy to its surroundings it also emits a quantity of mass equivalent to that energy. The surroundings acquire both the energy and mass in the process.

Treptow, 2005, p.1636

So, rather than thinking mass has been converted to energy, it may be more appropriate to think that the mass of a neutron has a certain (latent) energy associated with it, and that, after decay, most of this energy is divided between products (according to their rest masses), but a small proportion has been converted to kinetic energy (which can be considered to have a mass equivalence).

So, whenever any process involves some kind of energy change, there is an associated change in the equivalent masses. Every time you boil the kettle, or go up in an elevator, there is a tiny increase of mass involved – the hot water is heavier than when it was cold; you are heavier than when you were at a lower level. When you lie down or burn some natural gas, there is a tiny reduction in mass (you weigh less lying down; the products of the chemical reaction weigh less than the reactants).

How much heavier is hot water?

Only in nuclear processes does the energy change involved become large enough for any change in mass to be considered significant. In other processes, the changes are so small that they are insignificant. The water we boiled earlier to make a cup of tea required 67 200 J of energy, and at the end of the process the water would not just be hotter, but also heavier, by about

0.000 000 000 000 747 kg

Or about 0.000 000 000 75 g. That is easy to calculate 4, but not so easy to notice.
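That calculation (note 4) amounts to a single line of arithmetic; here it is as a minimal Python sketch, using the rounded values from the text:

```python
# Mass-equivalent of the heating energy: m = E / c^2
C = 3.00e8             # speed of light, in m/s
tea_energy_j = 67_200  # energy used to heat the water for one cup of tea

mass_gain_kg = tea_energy_j / C**2
print(f"The hot water is heavier by about {mass_gain_kg:.3g} kg")  # ~7.47e-13 kg
```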

Is mass conserved in chemical reactions?

On this basis, we might suggest that the principle of conservation of mass that is taught in school science is falsified all the time – or at least needs to be understood differently from how it is usually presented.


Type of reaction    Mass change
endothermic         mass of products > mass of reactants
exothermic          mass of products < mass of reactants

If we just consider the masses of the substances, then mass is not conserved in chemical change

Yet the discrepancies really are tiny – so tiny that in school examinations candidates are expected to pretend there is no difference. But, strictly, when (as an example) copper carbonate is heated in a crucible and decomposes to give copper oxide and carbon dioxide, there is a mass decrease even if you could capture all the CO2. But it would not be measurable with our usual laboratory equipment – so, as far as chemistry is concerned, mass is conserved. 'To all intents and purposes' (even if not absolutely true), mass is always conserved in chemical reactions.

Mass is conserved overall

But actually, according to current scientific thinking, mass is always conserved (not just very nearly conserved), as long as we make sure we consider all relevant factors. The energy that allowed us to boil the kettle or be lifted in an elevator must have been provided from some source (which has lost mass by the same extent). In an exothermic chemical reaction there is an extremely slight difference of mass between the reactants and products, but the surroundings have been warmed and so have got (ever so slightly) heavier.


Type of reaction    Energy (and mass) exchange
endothermic         energy (and equivalent mass) taken in from the surroundings
exothermic          energy (and equivalent mass) given out to the surroundings

Once the surroundings are taken into account, mass is conserved in chemical change


As Einstein himself expressed it,

"The inertial mass of a system of bodies can even be regarded as a measure of its energy. The law of the conservation of the mass of a system becomes identical with the law of the conservation of energy, and is only valid provided that the system neither takes up nor sends out energy."

Einstein, 1917/2015, p.59

Annihilate the neutrons!

So, if we read about how, in particle accelerators, particles are accelerated to immense speeds, collided, and so 'converted to pure energy', we should be suspicious. The particles may well have been destroyed – but something else has now acquired the mass (and not just the rest mass of the annihilated particles, but also the mass associated with their high kinetic energy).

So, we cannot convert all of the mass of a neutron into energy – only reconfigure and redistribute its mass-energy. But we can still ask: what if all the mass of the neutron were converted into some kind of radiation that carried it away as high-energy rays (perhaps Hoffal's radiation?)

Perhaps the genius scientist Hoffal, with his "unique mind", had found a way to do this (hm, with a magnetic field?). Even if that does not seem very feasible, it does give us a theoretical limit to the energy that could be produced by a process that converted a neutron into radiation.6 Each neutron has a rest mass of about

1.67 x 10^-27 kg

now the conversion factor is c^2 (where c is the speed of light, which is near enough 3 x 10^8 m s^-1, so c^2 = (3 x 10^8 m s^-1)^2, i.e., about 10^17 m^2 s^-2), so that mass is equivalent to about 1.50 x 10^-10 J 5 or,

0.000 000 000 150 J

Now that is a lot more energy than the 1.25 x 10^-13 J released in the decay of a neutron,

0.000 000 000 150 000 J

>

0.000 000 000 000 125 J

and now we could in theory boil the water to make our cup of tea with many fewer neutrons. Indeed, we could do this by annihilating 'only' about 7

448 000 000 000 000 neutrons

This is far fewer neutrons than before, i.e.,

448 000 000 000 000 neutrons

< 538 000 000 000 000 000 neutrons

but it seems fair to say that it remains the case that the number of neutrons needed (now 'only' about 448 million million) is still a good deal more than 8.

448 000 000 000 000 neutrons

> 8 neutrons

So, if over 400 million million neutrons would need to be completely annihilated to make a single cup of tea, how much damage can 8 neutrons do to a distant planet?
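The chain of comparisons above can be checked with a few lines of Python (all values are the rounded figures used in the text):

```python
# Total annihilation of neutrons: E = m * c^2 per neutron, then how many
# neutrons would be needed to boil the water for one cup of tea.
C = 3.00e8                  # speed of light, in m/s
NEUTRON_MASS_KG = 1.67e-27  # neutron rest mass
tea_energy_j = 67_200       # energy for one cup of tea (note 2)

energy_per_neutron_j = NEUTRON_MASS_KG * C**2   # ~1.50e-10 J
neutrons_for_tea = tea_energy_j / energy_per_neutron_j

print(f"Energy per annihilated neutron: {energy_per_neutron_j:.3g} J")
print(f"Neutrons needed for one cup of tea: {neutrons_for_tea:.3g}")  # ~4.5e14
print(f"Still more than 8? {neutrons_for_tea > 8}")
```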

A common learning difficulty

In any reasonable scenario we might imagine, 8 neutrons would not be significant. This is worth emphasising as it relates to a common learning difficulty. Quanticles such as atoms, atomic nuclei, neutrons and the like are tiny. Not tiny like specks of dust or grains of salt, but tiny on a scale where specks of dust and grains of salt themselves seem gigantic. The scales involved in considering electronic charge (i.e., 10^-19 C) or neutron mass (10^-27 kg) can reasonably be said to be unimaginably small – no one can readily visualise the shift in scale going from the familiar scale of objects we normally think of as 'small' to the scale of individual molecules or subatomic particles.

Students therefore commonly form alternative conceptions of these types of entities (atoms, electrons, etc.) as being too small to see, yet not so very far beyond the everyday scale. And it is not just learners who struggle here. I have even heard someone put forward as an 'expert' on a national news programme make a very similar suggestion to Egrorian's – in this case, that a "couple of molecules" could be a serious threat to public health after the use of a chemical nerve agent. This is a preposterous suggestion to a chemist, but was, I am sure, made in good faith by the international chemical weapons expert.

It is this type of conceptual difficulty which allows scriptwriters to refer to 8 neutrons as being of some significance without expecting the audience to simply laugh at the suggestion (even if some of us do).

It also explains how science fiction writers get away with such plot devices, given that many in their audiences will readily accept that a few especially malicious molecules or naughty neutrons pose a genuine threat to life.8 But that still does not justify using angle-poise lamps as futuristic spacecraft joysticks.


Jenna pilots the most advanced spacecraft in the galaxy

Works cited:
  • Einstein, A. (1917/2015). Relativity: The special and the general theory (100th Anniversary ed.). Princeton: Princeton University Press.
  • Treptow, R. S. (2005). E = mc2 for the Chemist: When Is Mass Conserved? Journal of Chemical Education, 82(11), 1636. doi:10.1021/ed082p1636

Notes:

1 To explain, for younger readers: television was first broadcast in monochrome (black and white – in effect, shades of grey). My family first got a television after I started primary school – the justification for this luxury was that the teachers sometimes suggested programmes we might watch.

Colour television did not arrive in the UK till 1967, and initially it was only used for selected broadcasts. The first colour sets were too expensive for many families, so most people initially stayed with monochrome. This led to the infamous 'helpful' statement offered by the commentator of the weekly half-hour snooker coverage: "And for those of you who are watching in black and white, the pink [ball] is next to the green". (While this is well known as a famous example of misspeaking, a commentator's blooper, those of a more suspicious mind might note that the BBC chose snooker for broadcast in part because it might encourage more people to watch in colour.)

Snooker – not ideal viewing on 'black and white' television (Image by MasterTux from Pixabay )

My father had a part-time weekend job supervising washing machine rental collections (I kid you not, many people only rented such appliances in those days), to supplement the income from his full-time job, and this meant that on Monday evenings, after his day job, he had to visit his part-time boss to report, and they would go through the paperwork to ensure things tallied. I would go with him, and was allowed to watch television whilst they did this – it coincided with Star Trek, and the boss had a colour set!


2 Assuming the water had to be heated from 20 °C to 100 °C, and the cup took 200 ml (200 cm^3) of tea, then the calculation is 4.2 x 80 x 200 = 67 200 J.

4.2 J g^-1 K^-1 is the approximate specific heat capacity of water.

Changing these parameters (perhaps you have a small tea cup and I use a mug?) will change the precise value.


3 That is the energy needed divided by the energy released by each neutron: 67 200 J ÷ 1.25 x 10^-13 J/neutron = 537 600 000 000 000 000 neutrons


4 E = mc^2

so m = E/c^2 = 67 200 J ÷ (3.00 x 10^8 m s^-1)^2 = 7.47 x 10^-13 kg


5 E = mc^2 = 1.67 x 10^-27 kg x (3.00 x 10^8 m s^-1)^2 = 1.50 x 10^-10 J


6 Well, we could imagine that somehow Hoffal had devised a process where the neutrons somehow redirect energy provided to initially generate the magnetic field, and perhaps the weapon was actually an enormous field generator producing a massive magnetic field that the funnel somehow converted into a beam (of tachyons?) that could pass across vast amounts of space without being absorbed by space dust, remaining highly collimated, and intense enough to destroy a world.

So, perhaps the neutrons are analogous to the core of a laser.

I somehow think it would still need more than 8 of them.


7 That is the energy needed divided by the energy released by each neutron: 67 200 J ÷ 1.50 x 10^-10 J/neutron = 4.48 x 10^14 neutrons


8 Of course molecules are not actually malicious and neutrons cannot be naughty as they are inanimate entities. I am not anthropomorphising, just alliterating.


How much damage can a couple of molecules do?

Just how dangerous is Novichok?

Keith S. Taber


"We are only talking about molecules here…

There might be a couple of molecules left in the Salisbury area. . ."

Expert interviewed on national news

The subject of chemical weapons is not to be taken lightly, and is currently in the news in relation to the Russian invasion of Ukraine, and the concern that the limited progress made by the Russian invaders may lead to the use of chemical or biological weapons to supplement the deadly enough effects of projectiles and explosives.

Organophosphorus nerve agents (OPNA) were used in Syria in 2013 (Pita & Domingo, 2014), and the Russians have used such nerve agents in illicit activities – as in the case of the poisoning of Sergey Skripal and his daughter Yulia in Salisbury. Skripal had been a Russian military intelligence officer who had acted for the British (i.e., as a double agent), and was convicted of treason – but later came to the UK in a prisoner swap and settled in Salisbury (renowned among Russian secret agents for its cathedral). 1

Salisbury, England – a town that featured in the news when it was the site of a Russian 'hit' on a former spy (Image by falco from Pixabay )

These substances are very nasty,

OPNAs are odorless and colorless [and] act by blocking the binding site of acetylcholinesterase, inhibiting the breakdown of acetylcholine… The resulting buildup of acetylcholine leads to the inhibition of neural communication to muscles and glands and can lead to increased saliva and tear production, diarrhea, vomiting, muscle tremors, confusion, paralysis and even death

Kammer, et al., 2019, p.119

So, a substance that occurs normally in cells, but is kept in check by an enzyme that breaks it down, starts to accumulate because the enzyme is inactivated when molecules of the toxin bind with the enzyme molecules, stopping them binding with acetylcholine molecules. Enzymes are protein-based molecules which rely for their activity on complex shapes (as discussed in 'How is a well-planned curriculum like a protein?'.)


Acetylcholine is a neurotransmitter. It allows signals to pass across synapses. It is important then that acetylcholine concentrations are controlled for nerves to function (Image source: Wikipedia).


Acetylcholinesterase is a protein-based enzyme that has an active site (red) that can bind and break up acetylcholine molecules (which takes about 80 microseconds per molecule). The neurotransmitter molecule is broken down into two precursors that are then available to be synthesised back into acetylcholine when appropriate. 2

Toxins (e.g., green, blue) that bind to the enzyme's active site block it from breaking down acetylcholine.

(Image source: RCSB Protein Data Bank)


A need to clear up after the release of chemical agents

The effects of these agents can be horrific – but then, so can the effects of 'conventional' weapons on those subjected to aggression. One reason that chemical and biological weapons are banned from use in war is their uncontrollable nature – once an agent is released into an environment it may remain active for some time, and so hurt or kill civilians, or even personnel from the side using those weapons if they move into the attacked areas. The gases used in the 1914-1918 'world' war were sometimes blown back towards those using them when the wind changed direction.


Image by Eugen Visan from Pixabay 

This is why, when small amounts of nerve agents were used in the U.K. by covert Russian agents to attack their targets, there was so much care put into tracing and decontaminating any residues in the environment. This is a specialised task, and it is right that the public are warned to keep clear of areas of suspected contamination. Very small quantities of some agents can be very harmful – depending upon what we mean by such relative terms as 'small'. Indeed, two police officers sent to the scene of the crime became ill. But what does 'very small quantities' mean in terms of molecules?

A recent posting discussed the plot of an episode of the Blakes7 television series where a weapon capable of destroying whole planets incorporated eight neutrons as a core component. This seemed ridiculous: how much damage can eight neutrons do?

But, I also pointed out that, sadly, not all those who watched this programme would find such a claim as comical as I did. Presumably, the train of thought suggested by the plot was that a weapon based on eight neutrons is a lot more scary than a single neutron design, and neutrons are found in super-dense neutron stars (which would instantly crush anyone getting too near), so they are clearly very dangerous entities!

A common enough misconception

This type of thinking reflects a common learning difficulty. Quanticles such as atoms, atomic nuclei, neutrons and the like are tiny. Not tiny like specks of dust or grains of salt, but tiny on a scale where specks of dust and grains of salt themselves seem gigantic. The scales involved in considering electronic charge (i.e., 10^-19 C) or neutron mass (10^-27 kg) can reasonably be said to be unimaginably small – no one can readily visualise the shift in scale going from the familiar scale of objects we normally experience as small (e.g., salt grains), to the scale of individual molecules or subatomic particles.

People therefore commonly form alternative conceptions of these types of entities (atoms, electrons, etc.) as being too small to see, yet not so very far beyond the everyday scale. It perhaps does not help that it is sometimes said that atoms can now be 'seen' with the most powerful microscopes. The instruments concerned are microscopes only by analogy with familiar optical microscopes, and while they produce images, these are more like computer simulations than magnified images seen through a light microscope. 3

It is this type of difficulty which allows scriptwriters to refer to eight neutrons as being of some significance without expecting the audience to simply laugh at the suggestion (even if some of us do).

An expert opinion

Although television viewers might have trouble grasping the insignificance of a handful of neutrons (or atoms or molecules), one would expect experts to be very clear about the vast difference in scale between us (people for example) and them (nanoscopic entities of the molecular realm). Yet experts may sometimes be stretched beyond their expertise without themselves apparently being aware of this – as when a highly qualified and experienced medical expert agreed with an attorney that the brain sends out signals to the body faster than the speed of light. If a scientific expert in a high profile murder trial can confidently make statements that are scientifically ridiculous then this underlines just how challenging some key scientific ideas are.

For any of us, knowing what we do not know – recognising when we are moving outside of areas where we have a good understanding – is challenging. Part of the reason that student alternative conceptions are so relevant to science learning is that a person's misunderstanding can seem, subjectively, just as well supported, sensible, coherent and reasonable as a correct understanding. Where a teacher themself has an alternative conception (which sometimes happens, of course) they can teach this with as much enthusiasm and confidence as anything they understand canonically. Expertise always has limitations.

A chemical weapons expert

I therefore should not have been as surprised as I was when, on a news broadcast, I heard an expert on chemical weapons refer to the potential danger of "a couple of molecules". This was in relation to the poisoning by Russian agents of the Salisbury residents,

"During an interview on a BBC Radio 4 news programme (July 5th, 2018), Hamish de Bretton-Gordon, who brands himself as one of the world's leading chemical weapons experts, warned listeners that there may be risks to the public due to residue from the original incident in the area. Whilst that may have been the case, his suggestion that "we are only talking about molecules here. . .There might be a couple of molecules left in the Salisbury area. . ." seemed to suggest that even someone presented to the public as a chemistry expert might completely fail to appreciate the submicroscopic scale of individual molecules in relation to the macroscopic scale of a human being."

Taber, 2019, p.130
Chemical weapons expert ≠ chemistry expert

Now Colonel de Bretton-Gordon is a visiting fellow at Magdalene College Cambridge, and the College website describes him as "a world-leading expert in Chemical and Biological weapons". I am sure he is, and I would not seek to underplay the importance of decontamination after the use of such agents; but if someone who has such expertise would assume that a couple of molecules of any substance posed a realistic threat to a human being with its something like 30 000 000 000 000 cells, each containing something like 40 000 000 molecules of protein (to just refer to one class of cellular components), then it just underlines how difficult it is to appreciate the gulf in scale between molecules and men.

Regarding samples of nerve agents, they may be deadly even in small quantities, but that still means a lot of molecules.

Novichok cocktails

The attacks in Salisbury (from which the intended victims recovered, but another person died in nearby Amesbury apparently having come into contact with material assumed to have been discarded by the criminals) were reported to have used 'Novichok', a label given to a group of compounds.

"Based on analyses carried out by the British "Defence Science and Technology Laboratory" in Porton Down it was concluded that the Skripals were poisoned by a nerve agent of the so-called Novichok group. Novichok … is the name of a group of nerve agents developed and produced by Russia in the last stage of the Cold War."

Carlsen, 2019, p.1

Testing of toxins is often based on the LD50 – which means finding the dose that has an even chance of being lethal. This is not an actual amount, as clearly the amount of material needed to kill a large adult will be more than that needed to kill a small child, but the amount of the toxin needed per unit mass of victim. Although no doubt these chemicals have been directly tested on some poor test specimens of non-consenting small mammals, such information is not in the public domain.

Indeed, being based on state secrets, there is limited public data on Novichok and related agents. Carlsen (2019) estimates the LD50 for oral administration of 9 compounds in the Novichok group and some closely related agents to vary between 0.1 to 96.16 mg/kg.

Carlsen suggests the most toxic of these compounds is one known as VX. VX was actually first developed by British scientists, although almost equivalent nerve agents were later developed elsewhere, including in Russia.


'Chemical structures of V-agents.'
(Figure 2 from Nepovimova & Kuca, 2018 – subject to http://creativecommons.org/licenses/BY-NC-ND/4.0/)
n.b. This figure shows more than a couple of molecules of nerve agent – so might this be a lethal dose?


Carlsen then argues that the actual compounds in Novichok are probably less toxic than VX, which might explain…

"…why did the Skripals not die following expose to such high potent agents; just compare to the killing of Kim Jong-nam on February 13, 2017 in Kuala Lumpur International Airport, where he was attacked by the highly toxic VX, and died shortly after."

Carlsen, 2019, p.1

So, for the most toxic agent, known as VX (LD50 c. 0.1 mg/kg), a person of 50 kg mass would, it is estimated, have a 50% chance of being killed by an oral dose of 0.1 × 50 mg. That is 5 mg, or 0.005 g, by mouth. A single drop of water is said to have a volume of about 0.05 ml, and so a mass of about 0.05 g. So, a tenth of a drop of this toxin can kill. If as little as 0.005 g of a nerve agent will potentially kill you, then that is clearly a very toxic substance.
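This dose arithmetic can be sketched in a few lines of Python. The LD50 figure is Carlsen's estimate quoted above; the 50 kg body mass and the 0.05 g drop are just the illustrative values used in the text:

```python
# A sketch of the dose arithmetic above (figures as quoted in the text;
# the body mass is an illustrative value, not a claim about any victim).
ld50_mg_per_kg = 0.1    # Carlsen's (2019) estimated LD50 for the most toxic agent
body_mass_kg = 50       # an illustrative adult body mass

dose_mg = ld50_mg_per_kg * body_mass_kg   # dose with ~50% chance of being lethal
dose_g = dose_mg / 1000

drop_mass_g = 0.05      # a single water drop: ~0.05 ml, so ~0.05 g
fraction_of_drop = dose_g / drop_mass_g

print(f"{dose_mg} mg = {dose_g} g, about {round(fraction_of_drop, 2)} of a drop")
```

The point is not the precision of any one number – all the inputs are rough estimates – but that the lethal quantity is a small fraction of a drop.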

The molecular structure of VX is given in the figure above, taken from Nepovimova and Kuca (2018). The three structures shown appear to be isomeric – that is, the three molecules are structural isomers. They have the same molecular formula (and so the same molecular mass).

Chemical shorthand

This type of structural formula is often used for complex organic molecules as it is easy for experts to read. It is one of many special types of representation used in chemistry. It is based on the assumption that most organic compounds can be understood as if they were substituted hydrocarbons. (They may or may not be derived that way – this is just a formalism used as a thinking tool.) Hydrocarbons comprise chains of carbon atomic cores bonded to each other, and with their other valencies 'satisfied' by being bonded to hydrogen atomic cores. These compounds can easily be represented by lines where each line shows the bond between two carbon atomic cores. The hydrogen centres are not shown at all, but are implicit in the figure (they must be there to 'satisfy' the rules of valency – i.e., carbon centres in stable structures nearly always have four bonds).

Anything other than carbon and hydrogen is shown with elemental symbols, and in most organic compounds these other atomic centres take up only a minority of positions in the structure. So, for compounds such as the 'VX' compounds, these kinds of structural representations are a kind of hybrid, with some atomic centres shown by their elemental symbols – but others having to be inferred.

From the point of view of the novice learner, this form of abstract representation is challenging as carbon and hydrogen centres need to be actively read into the structure (whereas an expert has learnt to do this automatically). But for the expert this type of representation is useful as complex organic molecules can contain hundreds or thousands of atomic centres (e.g., the acetylcholinesterase molecule, as represented above) and structural formulae that show all the atomic centres with elemental symbols would get very crowded.

So, below I have annotated the first version of VX:


The VX compound seems to have a molecular mass of 267

This makes the figure much more busy, but helps me count up the numbers of different types of atomic centres present and therefore work out the molecular mass – which, if I have not made a mistake, is 267. I am working here with the nearest whole numbers, so not being very precise, but this is good enough for my present purposes. That means that the molecule has a mass of 267 atomic mass units, and so (by one of the most powerful 'tricks' in chemistry) a mole of this compound, the actual substance, would have a mass of 267 g.
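That tally can be checked with a quick sketch, assuming the molecular formula C11H26NO2PS for VX (read off the annotated structure – an assumption here, not stated in the text) and the whole-number atomic masses used above:

```python
# Rough molecular mass of VX from its atomic composition, using
# whole-number atomic masses (C=12, H=1, N=14, O=16, P=31, S=32).
atomic_mass = {'C': 12, 'H': 1, 'N': 14, 'O': 16, 'P': 31, 'S': 32}
composition = {'C': 11, 'H': 26, 'N': 1, 'O': 2, 'P': 1, 'S': 1}  # C11H26NO2PS

molecular_mass = sum(atomic_mass[el] * n for el, n in composition.items())
print(molecular_mass)  # 267
```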

The trick is that chemists have chosen their conversion factor between molecules and moles, the Avogadro constant of c. 6.02 × 10²³, such that adding up atomic masses in a molecule gives a number that directly scales to grammes for the macroscopic quantity of choice: the mole. 5

So, if one had 267 g of this nerve agent, that would mean approximately 6.02 × 10²³ molecules. Of course, here we are talking about a much smaller amount – just 0.005 g (0.005/267, about 0.000 02 moles) – and so many fewer molecules. Indeed, we can easily work out that 0.005 g contains something like

(0.005 / 267) × 6.02 × 10²³ = 11 273 408 239 700 374 000 ≈ 1 × 10¹⁹ (1 s.f.)

That is about

10 000 000 000 000 000 000 molecules
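A minimal sketch of this mass-to-molecules conversion, using the same rounded figures as the text:

```python
# How many molecules are in a 0.005 g dose of a substance whose
# molecules have mass 267 u (so a molar mass of 267 g/mol)?
AVOGADRO = 6.02e23   # molecules per mole, to 3 s.f.
dose_g = 0.005
molar_mass_g = 267

moles = dose_g / molar_mass_g    # about 2 x 10^-5 mol
molecules = moles * AVOGADRO     # about 1.1 x 10^19 molecules

print(f"{molecules:.3g} molecules")
```

Rounded to one significant figure, this is the 10 000 000 000 000 000 000 molecules quoted above.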

So, because of the vast gulf in scale between the amounts of material we can readily see and manipulate and an individual quanticle such as a molecule, even when we are talking about a tiny amount of material – a tenth of a drop – this still represents a very, very large number of molecules. This is something chemistry experts are very aware of, but which most people (even experts in related fields) may not fully appreciate.

The calculation here is approximate, and based on various estimates and assumptions. It may typically take about 10 000 000 000 000 000 000 molecules of the most toxic Novichok-like agent to be likely to kill someone – or this estimate could be seriously wrong. Perhaps it takes a lot more, or perhaps many fewer, molecules than this.

But even if this estimate is out by several orders of magnitude and it 'only' takes a few thousand million million molecules of VX for a potentially lethal dose, that can in no way be reasonably described as "a couple of molecules".

It takes very special equipment to detect individual quanticles. The human retina is in its own way very sophisticated, and comes quite close to being able to detect individual photons – but that is pretty exceptional. As a rule of thumb, when anyone tells us that a few molecules or a few atoms or a few ions or a few electrons or a few neutrons or a few gamma rays or… can produce any macroscopic effect (that we can see, feel, or notice) we should be VERY skeptical.


Work cited:

Notes:

1 Two men claiming to be the suspects whose photographs had been circulated by the British Police, and claimed by the authorities here to be Russian military intelligence officers, appeared on Russian television to explain they were tourists who had visited Salisbury sightseeing because of the Cathedral.

2 According to the RCSB Protein Data Bank website

"Acetylcholinesterase is found in the synapse between nerve cells and muscle cells. It waits patiently and springs into action soon after a signal is passed, breaking down the acetylcholine into its two component parts, acetic acid and choline."

Molecule of the month: Acetylcholinesterase

Of course, it does not 'wait patiently': that is anthropomorphism.


3 We might think it is easy to decide if we are directly observing something, or not. But perhaps not:

"If a chemist heats some white powder, and sees it turns yellow, then this seems a pretty clear example of direct observation. But what if the chemist was rightly conscious of the importance of safe working, and undertook the manipulation in a fume cupboard, observing the phenomenon through the glass screen. That would not seem to undermine our idea of direct observation – as we believe that the glass will not make any difference to what we see. Well, at least, assuming that suitable plane glass of the kind normally used in fume cupboards has been used, and not, say a decorative multicoloured glass screen more like the windows found in many churches. Assuming, also, that there is not bright sunlight passing through a window and reflecting off the glass door of the fume cupboard to obscure the chemist's view of the powder being heated. So, assuming some basic things we could reasonably expect about fume cupboards, in conjunction with favourable viewing conditions, and taking into account our knowledge of the effect of plane glass, we would likely not consider the glass screen as an impediment to something akin to direct observation.

Might we start to question an instance of direct observation if instead of looking at the phenomenon through plane glass, there was clear, colourless convex glass between the chemist and the powder being heated? This might distort the image, but should not change the colours observed. If the glass in question was in the form of spectacle lenses, without which the chemist could not readily focus on the powder, then even if – technically – the observations were mediated by an instrument, this instrument corrects for a defect of vision such that our chemist would feel that direct observation is not compromised by, but rather requires, the glasses.

If we are happy to consider the bespectacled chemist is still observing the phenomenon rather than some instrumental indication of it, then we would presumably feel much the same about an observation being made with a magnifying glass, which is basically the same technical fix as the spectacles. So, might we consider observation down a microscope as direct observation? Early microscopes were little more than magnifying glasses mounted in stands. Modern compound microscopes use more than one lens. A system of lenses (and some additional illumination, usually) reveals details not possible to the naked eye – just as the use of convex spectacles allow the longsighted chemist to focus on objects that are too close to see clearly when unaided.

If the chemist is looking down the microscope at crystal structures in a polished slice of mineral, then, it may become easier to distinguish the different grains present by using a Polaroid filter to selectively filter some of the light reaching the eye from the observed sample. This seems a little further from what we might normally think of as direct observation. Yet, this is surely analogous to someone putting on Polaroid sunglasses to help obtain clear vision when driving towards the setting sun, or donning Polaroid glasses to help when observing the living things at the bottom of a seaside rock pool on a sunny day when strong reflections from the surface prevent clear vision of what is beneath.

A further step might be the use of an electron microscope, where the visual image observed has been produced by processing the data from sensors collecting reflections from an electron beam impacting on the sample. Here, conceptually, we have a more obvious discontinuity although the perceptual process (certainly if the image is of some salt crystal surface) may make this seem no different to looking down a powerful optical microscope. An analogy here might be using night-vision goggles that allow someone to see objects in conditions where it would be too dark to see them directly. I have a camera my late wife bought me that is designed for catching images of wildlife and that switches in low light conditions to detecting infrared. I have a picture of a local cat that triggered an image when the camera was left set up in the garden overnight. The cat looks different from how it would appear in day-light, but I still see a cat in the image (where if the camera had taken a normal image I would not have been able to detect the cat as the image would have appeared like the proverbial picture of a 'black cat in a coal cellar'). Someone using night-vision goggles considers that they see the fox, or the escaped convict, not that they see an image produced by electronic circuits.

If we accept that we can see the cat in the photograph, and the surface details of crystal grains in the electron microscope image, then can we actually see atoms in the STM [scanning tunneling microscope] image? There is no cat in or on my image, it is just a pattern of pixels that my brain determines to represent a cat. I never saw the cat directly (I was presumably asleep) so I have no direct evidence there really was a cat if I do not accept the photograph taken using infrared sensors. I believe there are cats in the world, and have seen uninvited cats in my garden in daylight, and think the camera imaged one of them at night. So it seems reasonable I am seeing a cat in the image, and therefore I might wonder if it is reasonable to doubt that I can also see atoms in an STM image.

One could shift further from simple sensory experience. News media might give the impression that physicists have seen the Higgs boson in data collected at CERN. This might lead us to ask: did they see it with their eyes? Or through spectacles? Or using a microscope? Or with night-vision goggles? Of course, they actually used particle detectors.

Feyerabend suggests that if we look at cloud chamber photographs, we do not doubt that we have a 'direct' method of detecting elementary particles …. Perhaps, but CERN were not using something like a very large cloud chamber where they could see the trails of condensation left in the 'wake' of a passing alpha particle, and that could be photographed for posterity. The detection of the Higgs involved very sophisticated detectors, complex theory about the particle cascades a Higgs particle interaction might cause, and very complex simulations to allow for all kinds of issues relating to how the performance of the detectors might vary (for example as they age) and how a signal that might be close to random noise could be identified…. No one was looking at a detector hoping to see the telltale pattern that would clearly be left by a Higgs, and only a Higgs. In one sense, to borrow a phrase, 'there's nothing to see'. Interpreting the data considered to provide evidence of the Higgs was less like using a sophisticated microscope, and more like taking a mixture of many highly complex organic substances, and – without any attempt to separate them – running a mass spectrum, and hoping to make sense of the pattern of peaks obtained.

Taber, 2019, pp.158-160

4 That is not to suggest that one should automatically assume that one molecule of a toxin can only ever damage one protein molecule somewhere in one body cell. After all, one of the reasons that CFCs (chlorofluorocarbons, which used to be used as propellants in all kinds of spray cans for example) were so damaging to the ozone 'layer' was because they could initiate a chain reaction.

In reactions that involve free radicals, each propagation step can produce another free radical to continue the reaction. Eventually two free radicals are likely to interact to terminate the process – but that might only be after a great many cycles, and the removal of a great many ozone molecules from the stratosphere. However, even if one free radical initiated the destruction of many molecules of ozone, that would still be a very small quantity of ozone, as molecules are so tiny. The problem was of course that a vast number of CFC molecules were being released.


5 So one mole of hydrogen gas, H2, has a mass of 2 g, and so forth.

A discriminatory scientific analogy

Animals and plants as different kinds of engines

Keith S. Taber

Specimens of two different types of natural 'engines'.
Portrait of Sir Kenelm Digby, 1603-65, by Anthony van Dyck (from Wikimedia Commons, the free media repository)

In this post I discuss a historical scientific analogy used to discuss the distinction between animals and plants. The analogy was used in a book which is said to be the first major work of philosophy published in the English language, written by one of the founders of The Royal Society of London for Improving Natural Knowledge ('The Royal Society'), Sir Kenelm Digby.

Why take interest in an out-of-date analogy?

It is quite easy to criticise some of the ideas of early modern scientists in the light of current scientific knowledge. Digby had some ideas which seem quite bizarre to today's reader, but perhaps some of today's canonical scientific ideas, and especially more speculative theories being actively proposed, may seem equally ill-informed in a few centuries' time!

There is a value in considering historical scientific ideas, in part because they help us understand a little about the path that scientists took towards current scientific thinking. This might be valuable in avoiding the 'rhetoric of conclusions', where well-accepted ideas become so familiar that we come to take them for granted, and fail to appreciate the ways in which such ideas often came to be accepted in the face of competing notions and mixed experimental evidence.

For the science educator there are added benefits. It reminds us that highly intelligent and well motivated scholars, without the value of the body of scientific discourse and evidence available today, might sensibly come up with ideas that seem today ill-conceived, sometimes convoluted, and perhaps even foolish. That is useful to bear in mind when our students fail to immediately understand the science they are taught and present with alternative conceptions that may seem illogical or fantastic to the teacher. Insight into the thought of others can help us consider how to shift their thinking and so can make us better teachers.

Read about historical scientific conceptions

Analogies as tools for communicating science

Analogies are used in teaching and in science communication to help 'make the unfamiliar familiar', to show someone that something they do not (yet) know about is actually, in some sense at least, a bit like something they are already familiar with. In an analogy, there is a mapping between some aspect(s) of the structure of the target ideas and the structure of the familiar phenomenon or idea being offered as an analogue. Such teaching analogies can be useful to the extent that someone is indeed highly familiar with the 'analogue' (and more so than with the target knowledge being communicated); that there is a helpful mapping across between the analogue and the target; and that the comparison is clearly explained (making clear which features of the analogue are relevant, and how).

Read about scientific analogies

Nature made engines

Digby presents his analogy for considering the difference between plants and animals in his 'Discourse on Bodies', the first part of his comprehensive text known as his 'Two Discourses' completed in 1644, and in which he sets out something of a system of the world.1 Although, to a modern scientific mind, many of Digby's ideas seem odd, and his complex schemes sometimes feel rather forced, he shared the modern scientific commitment that natural phenomena should be explained in terms of natural causes and mechanisms. (That is certainly not to suggest he was an atheist, as he was a committed Roman Catholic, but he assumed that nature had been set up to work without 'occult' influences.)

Before introducing an analogy between types of living things and types of engines, Digby had already prepared his readers by using the term 'engine' metaphorically to refer to living things. He did this after making a distinction between matter dug out of the ground as a single material, and other specimens which although highly compacted into single bodies of material clearly comprised of "differing parts" that did not work together to carry out any function, and seemed to have come together by "chance and by accident"; and where, unlike in living things (where removed parts tended to stop functioning), the separate parts could be "severed from [one] another" without destroying any underlying harmonic whole. He contrasted these accidental complexes with,

"other bodies in which this manifest and notable difference of parts, carries with it such subordination of one of them unto another, as we cannot doubt but that nature made such engines (if so I may call them) by design; and intended that this variety should be in one thing; whole unity and being what it is, should depend of the harmony of the several differing parts, and should be destroyed by their separation".

Digby emphasising the non-accidental structure of living things (language slightly tidied for a modern reader).

Digby was writing long before Charles Darwin's work, and accepted the then widely shared idea that there was design in nature. Today this would be seen as teleological, and not appropriate in a scientific account. A teleological account can be circular (tautological) if the end result of some process is explained as due to that process having a purpose. [Consider the usefulness as an 'explanation' that 'organisms tend to become more complex over time as nature strives for complexity'. 2]

Read about teleology

Scientists today are expected to offer accounts which do not presuppose endpoints. That does not mean that a scientist cannot believe there is purpose in the world, or even that the universe was created by a purposeful God – simply that scientific accounts cannot 'cheat' by using arguments that something happens because God wished it, or nature was working towards it. That is, it should not make any difference whether a scientist believes God is the ultimate cause of some phenomenon (through creating the world, and setting up the laws of nature) as science is concerned with the natural 'mechanisms' and causes of events.

Read about science and religion

Two types of engines

In the part of his treatise on bodies that concerns living things, Digby gives an account of two 'engines' he had seen many years before when he was travelling in Spain. This was prior to the invention of the modern steam engine, and these engines were driven by water (as in water mills). 3

Digby introduces two machines which he considers illustrate "the natures of these two kinds of bodies [i.e., plants and animals]".

He gives a detailed account of one of the engines, explaining that the mechanism has one basic function – to supply water to an elevated place above a river.

His other engine example (apparently recalled in less detail – he acknowledges having a "confused and cloudy remembrance") was installed in a mint in a mine where it had a number of different functions, including:

  • producing metal of the correct thickness for coinage
  • stamping the metal with the coinage markings
  • cutting the coins from the metal
  • transferring the completed coins into the supply room.

These days we might see it as a kind of conveyor belt moving materials through several specialist processes.

Different classes of engine

Digby seems to think this is a superior sort of engine to the single function example.

For Digby, the first type of engine is like a plant,

"Thus then; all sorts of plants, both great and small, may be compared to our first engine of the waterwork at Toledo, for in them all the motion we can discern, is of one part transmitting unto the next to it, the juice which it received from that immediately before it…"

Digby comparing a plant to a single function machine

The comments here about juice may seem a bit obscure, as Digby has an extended explanation (over several pages) of how the growth and structure of a plant are based on a single kind of vascular tissue and a one-way transport of liquid. 4 Liquid rises up through the plant just as it was raised up by the mechanism at Toledo.

The multi-function 'engine' (perhaps ironically better considered in today's terms as an industrial plant!) is however more like an animal,

"But sensible living creatures, we may fitly compare to the second machine of the mint at Segovia. For in them, though every part and member be as it were a complete thing of itself, yet every one requires to be directed and put on in its motion by another; and they must all of them (though of very different natures and kinds of motion) conspire together to effect any thing that may be for the use and service of the whole. And thus we find in them perfectly the nature of a mover and a moveable; each of them moving differently from one another, and framing to themselves their own motions, in such sort as is more agreeable to their nature, when that part which sets them on work hath stirred them up.

And now because these parts (the movers and the moved) are parts of one whole; we call the entire thing automaton or…a living creature".

Digby comparing animals to more complex machines (language slightly tidied for a modern reader)

So plants were to animals as a single purpose mechanism was to a complex production line.

Animals as super-plants

Digby thought animals and plants shared in key characteristics of generation (we would say reproduction), nutrition, and augmentation (i.e., growth), as well as suffering sickness, decay and death. But Digby did not just think animals were different to plants – he considered them a superior kind.

He explains this both in terms of the animal having functions that he did not believe applied to plants,

And thus you see this plant [sic] has the virtue both of sense or feeling; that is, of being moved and affected by external objects lightly striking upon it; as also of moving itself, to or from such an object; according as nature shall have ordained.

but he also related this to animals being more complex. Whereas the plant was based on a vascular system involving only one fluid, this super-plant-like entity had three. In summary,

this plant [sic, the animal] is a sensitive creature, composed of three sources, the heart, the brain, and the liver: whose are the arteries, the nerves, and the veins; which are filled with vital spirits, with animal spirits, and with blood: and by these the animal is heated, nourished, and made partaker of sense and motion.

A historical analogy to explain the superiority of animals to plants

[The account here does not seem entirely consistent with other parts of the book, especially if the reader is supposed to associate a different fluid with each of the three systems. Later in the treatise, Digby refers to Harvey's work about circulation of the blood (including to the liver), leaving the heart through arteries, and to veins returning blood to the heart. His discussion of sensory nerves suggests they contain 'vital spirits'.]

Some comments on Digby's analogy

Although some of this detail seems bizarre by today's standards, Digby was discussing ideas about the body that were fairly widely accepted. As suggested above, we should not criticise those living in previous times for not sharing current understandings (just as we have to hope that future generations are kind to our reasonable mistakes). There are, however, two features of this use of analogy I thought worth commenting on from a modern point of view.

The logic of making the unfamiliar familiar

If such analogies are to be used in teaching and science communication, then they are a tactic we can use to 'make the unfamiliar familiar', that is to help others understand what are sometimes difficult (e.g., abstract, counter-intuitive) ideas by pointing out they are somewhat like something the person is already familiar with and feels comfortable that they understand.

Read about teaching as 'making the unfamiliar familiar'

In a teaching context, or when a scientist is being interviewed by a journalist, it is usually important that the analogue is chosen so it is already familiar to the audience. Otherwise either the analogy does not help explain anything, or time has to be spent first explaining the analogy, before it can be employed.

In that sense, then, we might question Digby's example as not being ideal. He has to exemplify the two types of machines he is setting up as the analogue before he can make an analogy with it. Yet this is not a major problem here for two reasons.

Firstly, a book affords a generosity to an author that may not be available to a teacher or a scientist talking to a journalist or public audience. Reading a book (unlike a magazine, say) is a commitment to engagement in depth and over time, and a reader who is still with Digby by his Chapter 23 has probably decided that continued engagement is worth the effort.

Secondly, although most of his readers will not be familiar with the specific 'engines' he discusses from his Spanish travels, they will likely be familiar enough with water mills and other machines and devices to readily appreciate the distinction he makes through those examples. The abstract distinction between two classes of 'engine' is therefore clear enough, and can then be used as an analogy for the difference between plants and animals.

A biased account

However, today we would not consider this analogy to be applicable, even in general terms, leaving aside the now discredited details of plant and animal anatomy and physiology. An assumption behind the comparison is that animals are superior to plants.

In part, this is explained in terms of the plant's apparent lack of sensitivity (later 'irritability' would be added as a characteristic of living things, shared by plants) and its lack of ability in getting around, and so not being able to cross the room to pick up some object. In part, this may be seen as an anthropocentric notion: as humans who move around and can handle objects, it clearly seems to us with our embodied experience of being in the world that a form of life that does not do this (n.b., does not NEED to do this) is inferior. This is a bit like the argument that bacteria are primitive forms of life as they have evolved so little (a simplification, of course) over billions of years: which can alternatively be understood as showing how remarkably adapted they already were, to be able to successfully occupy so many niches on earth without changing their basic form.

There is also a level of ignorance about plants. Digby saw the plant as having a mechanism that moved moisture from the soil through the plant, but had no awareness of the phloem (only named in the nineteenth century), which means that transport in a plant is not all in one direction. He also did not seem to appreciate the seasonal changes in plants, which are much more complex than a mechanism carrying out a linear function (like lifting water to a privileged person who lives above a river). He saw much of the variation in plant structures as passive responses to external agents. His ideas of human physiology are also flawed by today's standards, of course.

Moreover, in Digby's scheme (from simple minerals dug from the ground, to accidentally compacted complex materials, to plants and then animals) there is a clear sense of that long-standing notion of hierarchy within nature.

The great chain of being

That is, the great chain of being, which is a system for setting out the world as a kind of ladder of superior and inferior forms. Ontology is sometimes described as the study of being, and typologies of different classes of entities are sometimes referred to as ontologies. The great chain of being can be understood as a kind of ontology distinguishing the different types of things that exist – and ranking them.

Read about ontology

In this scheme (or rather schemes, as various versions with different levels of detail and specificity had been produced – for example discriminating the different classes of angels) minerals come below plants, which come below animals. To some extent Digby's analogy may reflect his own observations of animals and plants leading him to think animals were collectively and necessarily more complex than plants. However, ideas about the great chain of being were part of common metaphysical assumptions about the world. That is, most people took it for granted that there was such a hierarchy in nature, and therefore they were likely to interpret what they observed in those terms.

Digby presented the increase in complexity in moving from plant to animal as a similar kind of step-up to that in moving from inorganic material to plant:

But a sensitive creature, being compared to a plant, [is] as a plant is to a mixed [inorganic] body; you cannot but conceive that he must be compounded as it were of many plants, in like sort as a plant is of many mixed bodies.

Digby, then, was surely building his scheme upon his prior metaphysical commitments. Or, as we might say these days, his observations of the world were 'theory-laden'. So, Digby was not only offering an analogy to help discriminate between animals and plants, but was discriminating against plants in assuming they were inherently inferior to animals. I think that is a bias that is still common today.

Work cited:
  • Digby, K. (1644/1645). Two Treatises: In the one of which, the nature of bodies; In the other, the nature of mans soule, is looked into: in ways of the discovery of the immortality of reasonable soules. (P. S. MacDonald Ed.). London: John Williams.
  • Digby, K. (1644/2013). Two Treatises: Of Bodies and of Man's Soul (P. S. MacDonald Ed.): The Gresham Press.
  • Taber, K. S. & Watts, M. (2000) Learners' explanations for chemical phenomena, Chemistry Education: Research and Practice in Europe, 1 (3), pp.329-353. [Free access]
Notes:

1 This is a fascinating book with many interesting examples of analogies, similes, metaphor, personification and the like, and an interesting early attempt to unify forces (here, gravity and magnetism). (I expect to write more about this over time.) The version I am reading is a 2013 edition (Digby, 1644/2013) which has been edited to offer consistent spellings (as that was not something many authors or publishers concerned themselves with at the time). The illustrations, however, are from a facsimile of an original publication (Digby, 1644/1645: which is now out of copyright so can be freely reproduced).

2 Such explanations may be considered as a class of 'pseudo-explanations': that give the semblance of explanation without actually explaining very much (Taber & Watts, 2000).

3 The aeolipile (e.g., Hero's engine) was a kind of steam engine – but was little more than a novelty where water boiled in a vessel with suitably directed outlets and free to rotate, causing it to spin. However, the only 'useful' work done was in turning the engine itself.

4 This relates to his broader theory of matter which still invokes the medieval notion of the four elements, but is also an atomic theory involving tiny particles that can pass into apparently solid materials due to pores and channels much too small to be visible.

Occidently re-orienting atoms

It seems atoms are not quite as chemists imagine them not to be

Keith S. Taber

A research paper presenting a new model of atomic and molecular structure was recently brought to my attention. 1

The paper header

'New Atomic Model with Identical Electrons Position in the Orbital's and Modification of Chemical Bonds and MOT [molecular orbital theory]' 2 is published in a recently-launched journal with the impressive title of Annals of Atoms and Molecules. This is an open-access journal available free on the web – so readily accessible to chemistry experts, as well as students studying the subject and lay-people looking to learn from a scholarly source. [Spoiler alert – it may not be an ideal source for scholarly information!]

In the paper, Dr Morshed proposes a new model of the atom that he suggests overcomes many problems with the model currently used in chemistry.

A new model of atomic structure envisages East and West poles as well as North and South poles (Morshed, 2020a, p.8)

Of course, as I have often pointed out on this blog, one of the downsides of the explosion in on-line publishing and the move to open access models of publication, is that anyone can set up as an academic journal publisher and it can be hard for the non-expert to know what reflects genuine academic quality when what gets published in many new journals often seems to depend primarily upon an author being willing to pay the publisher a hefty fee (Taber, 2013).

That is not to suggest open-access publishing has to compromise quality: the well-established, recognised-as-prestigious journals can afford to charge many hundreds of pounds for open-access publication and still be selective. But, new journals, often unable to persuade experienced experts to act as reviewers, will not attract many quality papers, and so cannot be very selective if they are to cover costs (or indeed make the hoped-for profits for their publishers).

A peer reviewed journal

The journal with the impressive title of Annals of Atoms and Molecules has a website which explains that

"Annals of Atoms and Molecules is an open access, peer reviewed journal that publishes novel research insights covering but not limited to constituents of atoms, isotopes of an element, models of atoms and molecules, excitations and de-excitations, ionizations, radiation laws, temperatures and characteristic wavelengths of atoms and molecules. All the published manuscripts are subjected to standardized peer review processing".

https://scholars.direct/journal.php?jid=atoms-and-molecules

So, in principle at least, the journal has experts in the field critique submissions, and advise the editors on (i) whether a manuscript has the potential to be of sufficient interest and quality to be worth publishing, and (ii) if so, what changes might be needed before publication is wise.

Read about peer review

Standardised peer review gives the impression of some kind of moderation (perhaps renormalisation given the focus of the journal? 3) of review reports, which would involve a lot of extra work and another layer of administration in the review process…but I somehow suspect this claim really just meant a 'standard' process. This does not seem to be a journal where great care is taken over the language used.

Effective peer review relies on suitable experts taking on the reviewing, and editors prepared to act on their recommendations. The website lists five members of the editorial board, most of whom seem to be associated with science departments in academic institutions:

  • Prof. Farid Menaa (Fluorotronics Inc) 4
  • Prof. Sabrin Ragab Mohamed Ibrahim (Department of Pharmacognosy and Pharmaceutical chemistry, Taibah University)
  • Prof. Mina Yoon (Department of Physics and Astronomy, University of Tennessee)
  • Dr. Christian G Parigger (Department of Physics, University of Tennessee Space Institute)
  • Dr. Essam Hammam El-Behaedi (Department of Chemistry and Biochemistry, University of North Carolina Wilmington)

The members of a journal Editorial Board will not necessarily undertake the reviewing themselves, but are the people entrusted by the publisher with scholarly oversight of the quality of the journal. For this journal it is claimed that "Initially the editorial board member handles the manuscript and may assign or the editorial staff may assign the reviewers for the received manuscript". This sounds promising, as at least (it is claimed) all submissions are initially seen by a Board member, whether or not they actually select the expert reviewers. (The 'or' means that the claim is, of course, logically true even if in actuality all of the reviewers are assigned by the unidentified office staff.)

At the time of writing only three papers have been published in the Annals. One reviews a spectroscopic method, one is a short essay on quantum ideas in chemistry – and then there is Dr Morshed's new atomic theory.

A new theory of atomic structure

The abstract of Dr Morshed's paper immediately suggests that this is a manuscript which was either not carefully prepared or has been mistreated in production. The first sentence is:

The concept of atom has undergone numerous changes in the history of chemistry, most notably the realization that atoms are divisible and have internal structure Scientists have known about atoms long before they could produce images of them with powerful magnifying tools because atoms could not be seen, the early ideas about atoms were mostly founded in philosophical and religion-based reasoning.

Morshed, 2020a, p.6

Presumably, this was intended to be more than one sentence. If the author made errors in the text, they should have been queried by the copy editor. If the production department introduced errors, then they should have been corrected by the author when sent the proofs for checking. Of course, a few errors can sometimes still slip through, but this paper has many of them. Precise language is important in a research paper, and sloppy errors do not give the reader confidence in the work being reported.

The novelty of the work is also set out in the abstract:

In my new atomic model, I have presented the definite position of electron/electron pairs in the different orbital (energy shells) with the identical distance among all nearby electron pairs and the degree position of electrons/electron pairs with the Center Point of Atoms (nucleus) in atomic structure, also in the molecular orbital.

Morshed, 2020a, p.6

This suggests more serious issues with the submission than simple typographical errors.

Orbital/energy shells

The term "orbital (energy shells)" is an obvious red flag to any chemist asked to evaluate this paper. There are serious philosophical arguments about precisely what a model is and the extent to which a model of the atom might be considered to be realistic. Arguably, models that are not mathematical and which rely on visualising the atom are inherently not realistic as atoms are not the kinds of things one could see. So, terms such as shell or orbital are either being used to refer to some feature in a mathematical description or are to some extent metaphorical. BUT, when the term shell is used, it conventionally means something different from an orbital.

That is, in the chemical community, the electron shell (sic, not energy shell) and the orbital refer to different classes of entity (even if in the case of the K shell there is only one associated orbital). Energy levels are related, but again somewhat distinct – an energy level is ontologically quite different to an orbital or a shell in a similar way to how sea level is very different in kind to a harbour or a lagoon; or how 'mains voltage' is quite different from the house's distribution box or mains ring; or how an IQ measurement is a different kind of thing to the brain of the person being assessed.
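For readers who like to see the standard book-keeping made concrete, the conventional relationship between shells and orbitals can be sketched in a few lines (the function names here are my own, purely illustrative):

```python
# Shells vs orbitals: a shell with principal quantum number n
# comprises n**2 orbitals (one for each allowed combination of the
# azimuthal and magnetic quantum numbers), each of which can hold
# at most two electrons of opposite spin.

def orbitals_in_shell(n: int) -> int:
    # l runs from 0 to n-1; for each l there are 2l+1 values of m_l
    return sum(2 * l + 1 for l in range(n))

def electron_capacity(n: int) -> int:
    # Pauli exclusion: two electrons (opposite spin) per orbital
    return 2 * orbitals_in_shell(n)

for n in (1, 2, 3):
    print(f"shell n={n}: {orbitals_in_shell(n)} orbitals, "
          f"up to {electron_capacity(n)} electrons")
```

So the K shell (n = 1) is the special case where 'shell' and 'orbital' nearly coincide; for every other shell the two terms pick out quite different things.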

Definite positions of electrons

An orbital is often understood as a description of the distribution of the electron density – we might picture (bearing in mind my point that the most authentic models are mathematical) the electron smeared out as in a kind of time-lapse representation of where the electron moves around the volume of space designated as an orbital. Although, as an entity small enough for quantum effects to be significant (a 'quanticle'? – with some wave-like characteristics, rather than a particle that is just like a ball bearing only much smaller), it may be better not to think of the electron actually being at any specific point in space, but rather having different probabilities of being located at specific points if we could detect precisely where it was at any moment.

That is, if one wants to consider the electron as being at specific points in space then this can only be done probabilistically. The notion of "the definite position of electron/electron pairs in the different orbital" is simply nonsensical when the orbital is understood in terms of a wave function. Any expert asked to review this manuscript would surely have been troubled by this description.
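For reference, this probabilistic reading is a standard textbook statement (the Born interpretation), not anything from the paper under discussion:

```latex
% The orbital (wavefunction) \psi only yields a probability for
% finding the electron within some volume V:
P(\text{electron in } V) = \int_V |\psi(\mathbf{r})|^2 \, dV
% For example, the hydrogen 1s orbital,
\psi_{1s}(r) = \frac{1}{\sqrt{\pi a_0^3}}\, e^{-r/a_0},
% has a radial probability density 4\pi r^2 |\psi_{1s}|^2 that peaks
% at r = a_0 (the Bohr radius) – but it assigns the electron no
% definite position anywhere.
```

That is, the mathematics offers a most probable distance, never a fixed location.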

It is often said that electrons are sometimes particles and sometimes waves but that is a very anthropocentric view deriving from how at the scale humans experience the world, these seem very distinct types of things. Perhaps it is better to think that electrons are neither particles nor waves as we experience them, but something else (quanticles) with more subtle behavioural repertoires. We think that there is a fundamental inherent fuzziness to matter at the scale where we describe atoms and molecules.

So, Dr Morshed wants to define 'definite positions' for electrons in his model, but electrons in atoms do not have a fixed position. (Later there is reference to circulation – so perhaps these are considered as definite relative positions?) In any case, due to the inherent fuzziness in matter, if an electron's position was known absolutely then there would (by the Heisenberg uncertainty principle) be an infinite uncertainty in its momentum, so although we might know 'exactly' where it was 'now' (or rather 'just now' when the measurement occurred, as it would take time for the signal to be processed through first our laboratory, and then our nervous, apparatus!) this would come with having little idea where it was a moment later. Over any duration of time, the electron in an atom does not have a definite position – so there is little value in any model that seeks to represent such a fixed position.
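A rough order-of-magnitude sketch makes the point vivid. The constants are standard; the confinement length is my own illustrative choice, not a value from the paper:

```python
# Heisenberg uncertainty: pinning an electron's position down to a
# region of width dx forces a minimum spread in its momentum,
# dp >= hbar / (2 * dx).

hbar = 1.054_571_817e-34   # reduced Planck constant, J s
m_e = 9.109_383_7e-31      # electron mass, kg
c = 2.997_924_58e8         # speed of light, m/s

dx = 1e-12                 # suppose we 'fix' the electron to ~1 pm
dp = hbar / (2 * dx)       # minimum momentum uncertainty, kg m/s
v = dp / m_e               # corresponding spread in speed

print(f"dp >= {dp:.2e} kg m/s, i.e. a speed spread of "
      f"{v:.2e} m/s ({v/c:.0%} of the speed of light)")
```

Squeeze the electron into a region a hundredth the size of an atom, and the implied spread in its speed approaches a fifth of the speed of light: 'definite position' and 'definite motion' cannot both be had.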

The problem addressed

Dr Morshed begins by giving some general historical introduction to ideas about the atom, before going on to set out what is argued to be the limitation of current theory:

Electrons are arranged in different orbital[s] by different numbers in pairs/unpaired around the nuclei. Electrons pairs are associated by opposite spin together to restrict opposite movement for stability in orbital rather angular movements. The structural description is obeyed for the last more than hundred years but the exact positions of electrons/pairs in the energy shells of atomic orbital are not described with the exact locations among different orbital/shells.

Morshed, 2020a, p.6

Some of this is incoherent. It may well be that English is not Dr Morshed's native language, in which case it is understandable that producing clear English prose may be challenging. What is less forgivable is that whichever of Profs. Menaa, Ibrahim, or Yoon, or Drs Parigger or El-Behaedi, initially handled the manuscript did not point out that it needed to be corrected and rendered in clear English before it could be considered for publication, which could have helped the author avoid the ignominy of having his work published with so many errors.

That assumes, of course, that whichever of Menaa, Ibrahim, Yoon, Parigger, or El-Behaedi initially handled the manuscript was not so ignorant of chemistry as to be excused for not spotting that a paper complaining that the "exact positions of electrons/pairs in the energy shells of atomic orbital are not described with the exact locations among different orbital/shells" both confused distinct basic atomic concepts and seemed to be criticising a model of atomic structure that students move beyond before completing upper secondary chemistry. In other words, this paper should have been rejected at editorial screening, and never sent to review, as its basic premise was inconsistent with modern chemical theory.

If, as claimed, all papers are seen by the one of the editorial board, then the person assigned as handling editor for this one does not seem to have taken the job seriously. (And as only three papers have been published since the journal started, the workload shared among five board members does not seem especially onerous.)

Just in case the handling editorial board member was not reading the text closely enough, Dr Morshed offered some images of the atomic model which is being critiqued as inadequate in the paper:

A model of the atom criticised in the paper in Annals of Atoms and Molecules (Morshed, 2020a, p.7)

I should point out that I am able to reproduce material from this paper as it is claimed as copyright of the author who has chosen to publish open access with a license that "permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited". (Although, if you look very closely at the first figure, it seems to have superimposed in red text "© Copyright www.chemistrytutotial.org", where, by an unlikely coincidence, I found what seems to be the same image on the page Atomic Structure with Examples.)

Read about copyright in academic works

Again, the handling editor should have noticed that these images in the figure reflect the basic model of the atom taught in introductory school classes as commonly represented in simple two-dimensional images. These are not the models used to progress knowledge in academic chemistry today.

These images are not being reproduced in the research paper as part of some discussion of atomic representations in school textbooks. Rather this is the model that the author is suggesting falls short as part of current chemical theory – but it is actually an introductory pedagogical model that is not the basis of any contemporary chemical research, and indeed has not been so for the best part of a century. Even though the expression "the electrons/electron pairs position is not identical by their position, alignments or distribution" does not have any clear meaning in normal English, what is clear is that these very simple models are only used today for introductory pedagogic purposes.

Symmetrical atoms?

The criticism of the model continues:

The existing electrons pair coupling model is not also shown clearly in figure by which a clear structure of opposite spine pair can be drowned. Also there are no proper distribution of electron/s around the center (nuclei) to maintain equal number of electrons/electronic charge (charge proportionality) around the total mass area of atomic circle (360°) in the existing atomic model (Figure 1). There are no clear ideas about the speed proportion and time of circulation of electrons/electron pairs in the atomic orbital/shells so there is no answer about the possibility of uneven number of electrons/electron pairs at any position /side of atomic body can arise that must make any atom unstable.

Morshed, 2020a, p.7

Again, this makes little sense (to me at least – perhaps the Editorial Board members are better at hermeneutics than I am). Now we are told that electrons are 'circulating' in the orbitals/shell which seems inconsistent with them having the "definite positions" that Dr Morshed's model supposedly offers. Although I can have a guess at some of the intended meaning, I really would love to know what is meant by "a clear structure of opposite spine pair can be drowned".

Protecting an atom from drowning? (Images by ZedH and Clker-Free-Vector-Images from Pixabay)

A flat model of the atom

I initially thought that Dr Morshed was concerned that the model shown in figure 1 cannot effectively show how, in the three-dimensional atomic structure, the electrons must be arranged to give a totally symmetric pattern: and (in his argument) that this would be needed, as otherwise the atoms would be unstable. Of course, two-dimensional images do not easily show three-dimensional structure. So when Dr Morshed referred to the "atomic circle (360°) in the existing atomic model" I assumed he was actually referring to a sphere.

On reflection, I am not so sure. I was unimpressed by the introduction of cardinal points for the atom (see Dr Morshed's figure 2 above, and figure 4 below). I could understand the idea of a nominal North and South pole in relation to the angular momentum of the nucleus and electrons 'spinning up or down' – but surely the East and West poles are completely arbitrary for an atom as any point on the 'equator' could be used as the basis for assigning these poles. However, if Dr Morshed is actually thinking in terms of a circular (i.e., flat) model of the atom, and not circular representations of a spherical model of atomic structure then atoms would indeed have an Occident and an Orient! The East pole WOULD be to the right when the atom has the North pole at the top as is conventional in most maps today. 5

But atoms are not all symmetrical?

But surely most atoms are not fully symmetrical, and indeed this is linked to why most elements do not commonly exist as discrete atoms. Those elements that do, the noble gases, are renowned for not readily reacting because their atoms (atypically) have a symmetrical electronic 'shield' for the nuclear charge. However, even some of these elements can be made cold enough to solidify – as van der Waals forces arise from transient fluctuating dipoles. So the argument seems to be based on a serious alternative conception of the usual models of atomic structure.

It is the lack of full symmetry in an atom of say, fluorine, or chlorine, which means that although it is a neutral species it has an electron affinity (that is, energy is released when the anion is formed) as an electron can be attracted to the core charge where it is not fully shielded.
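The 'incomplete shielding' point can be put semi-quantitatively with Slater's rules, a standard textbook approximation (my own illustration, nothing to do with Morshed's paper):

```python
# Slater's rules estimate of the effective nuclear charge felt by a
# valence (3s/3p) electron in chlorine (Z = 17, configuration 2, 8, 7).
Z = 17

# Screening contributions to a 3s/3p electron under Slater's rules:
#   0.35 per other electron in the same (n = 3) group,
#   0.85 per electron with n = 2,
#   1.00 per electron with n = 1.
screening = 0.35 * 6 + 0.85 * 8 + 1.00 * 2
z_eff = Z - screening

print(f"Z_eff for a chlorine valence electron ≈ {z_eff:.1f}")
```

The estimate comes out at around +6, far more than the +1 that a fully shielded core would present: an approaching electron really does 'see' a net attraction, which is why forming the chloride anion releases energy.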

The reference to "time of circulation of electrons/electron pairs in the atomic orbital/shells" seems to refer to a mechanical model of orbital motion, which again, has no part in current chemical theory.

Preventing negative electron pairs repelling each other

Dr Morshed suggests that the existing model of atomic structure cannot explain

Why the similar charged electrons don't feel repulsion among themselves within the same nearby atomic orbital of same atom or even in the molecular orbital when two or more atomic orbital come closer to form molecular orbital within tinier space though there is more possibility of repulsion between similar charged electrons according to existing atomic model.

Morshed, 2020a, p.7

Electrons do not feel repulsion for the same reason they do not feel shame or hunger or boredom – or disdain for poor quality journals. Electrons are not the kind of objects that can feel anything. However, this anthropomorphic expression is clearly being used metaphorically.

I think Dr Morshed is suggesting that the conventional models of atomic structure do not explain why electrons/electron pairs do not repel each other. Of course, they do repel each other – so there is no need to look for an explanation. This then seems to be an alternative conception of current models of the atom. (The electrons do not get ejected from the atom as they are also attracted to the nucleus – but, if they did not repel each other, there would be no equilibrium of forces, and the structure of the atom would not be stable.)

A new model of atomic structure supposedly reflects the 'proper' angles between electrons in atoms (Morshed, 2020a, p.9)

Dr Morshed suggests that his model (see his Figure 4) 'proves the impossibility of repulsion between any electron pairs' – even those with similar charges. All electron pairs have negative (so similar) charges – it is part of the accepted definition of an electron that it is a negatively charged entity. I do not think Dr Morshed is actually suggesting otherwise, even if he thinks the electrons in different atoms have different magnitudes of negative charge (Morshed, 2020b).

Dr Morshed introduces a new concept that he calls 'center of electron pairs neutralization point'.

This is the pin-point situated in a middle position between two electrons of opposite spin pairs. The point is exactly between of opposite spine electron pairs so how the opposite electronic spin is neutralized to remaining a stable electron pair consisting of two opposite spin electrons. This CENP points are assumed to be situated between the cross section of opposite spine electronic pair's magnetic momentum field diameter (Figure 3).

Morshed, 2020a, p.8
The yellow dot represents a point able to neutralise the opposite spin of a pair of electrons(!), and is located at the point found by drawing a cross from the ends of the ⥯ symbols used to show the electron spin! This seems to be envisaged as a real point that has real effects, despite being located in terms of the geometry of a totally arbitrary symbol.

So, the electron pair is shown as a closely bound pair of electrons with the midspot of the complex highlighted (yellow in the figure) as the 'center of electron pairs neutralization point'. Although the angular momentum of the electrons with opposite spin leads to a magnetic interaction between them, they are still giving rise to an electric field which permeates through the space around them. Dr Morshed seems to be suggesting that in his model there is no repulsion between the electron pairs. He argues that:

According to magnetic attraction/repulsion characteristics any similar charges repulse or opposite charges attract when the charges energy line is in straight points. If similar charged or opposite charged end are even close but their center of energy points is not in straight line, there will be no attraction or repulsion between the charges (positive/negative). Similarly, when electrons are arranged in energy shells around the nucleus the electrons remain in pairs within opposite spin electrons where the poses a point which represent as the center of repulsion/attraction points (CENP) and two CENP never come to a straight within the atomic orbital so the similar charged electrons pairs don't feel repulsion within the energy shells.

Morshed, 2020a, pp.8-9

A literal reading of this makes little sense as any two charges will always have their centres in a straight line (from the definition of a straight line!) regardless of whether similar or opposite charges or whether close or far apart.

My best interpretation of this (and I am happy to hear a better one) is that because the atom is flat, and because the electron pairs have spin up and spin down electrons, which are represented by a kind of ⥮ symbol, the electrons in some way shield the 'CENP' so that the electron pair can only interact with another charge that has a direct line of sight to the CENP.

Morshed seems to be suggesting that although electron pairs are aligned to allow attractions with the nucleus (e.g., blue arrows) any repulsion between electron pairs is blocked because an electron in the pair shields the central point of the pair (e.g., red arrow and lines)

There are some obvious problems here from a canonical perspective, even leaving aside the flat model of the atom. One issue is that although electrons are sometimes represented as ↿ or ⇂ to indicate spin, electrons are not actually physically shaped like ↿. Secondly, pairing allows electrons to occupy the same orbital (that is, have the same set of principal, azimuthal and magnetic quantum numbers) – but this does not mean they are meant to be fixed into a closely bound entity. Also, this model works by taking the idea of spin direction literally, when – if we do that – electrons can only have spin of ±1/2. In a literal representation such as that used by Dr Morshed he would need to have ALL his electrons orientated vertically (or at least all at the same angle from the vertical). So, the model does not work in its own terms, as it would prevent most of the electron pairs being attracted to the nucleus.
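For reference, the standard quantum-mechanical statement about spin (textbook material, not anything from the paper) is:

```latex
% An electron has spin quantum number s = 1/2, so its spin projection
% along any chosen axis can only take two values:
S_z = m_s \hbar, \qquad m_s = \pm\tfrac{1}{2}
% while the magnitude of its spin angular momentum is fixed:
|\mathbf{S}| = \sqrt{s(s+1)}\,\hbar = \tfrac{\sqrt{3}}{2}\hbar
% There is no continuum of intermediate 'tilt angles' of the kind
% that a literal reading of the arrow symbols would suggest.
```

The arrows in orbital diagrams are pure notation for these two allowed values, not pictures of how electrons are oriented in space.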

Morshed's figure 4 'corrected' given that electrons can only exist in two spin states. In the (corrected version of the representation of the) Morshed model most electron pairs would not be attracted to the nucleus.

A new (mis)conception of ionic bonding

Dr Morshed argues that

In case of ionic compound formation problem with the existing atomic model is where the transferred electron will take position in the new location on transferred atom? If the electrons position is not proportionally distributed along total 360 circulating area of atom, then the position of new transferred electron will cause the polarity in every ion (both cation and anion forms by every transformation of electrons) so the desired ionization is not possible thus every atom (ion) would become dipolar. On the point of view any ionization would not possible i.e., no ionic bonded compound would have formed.

Morshed, 2020a, p.7

Again, although the argument may have been very clear to the author, this seems incoherent to a reader. I think Dr Morshed may be arguing that unless atoms have totally symmetrical electron distributions ("proportionally distributed along total 360 circulating area of atom") then when the ion is formed it will have a polarity. Yet, this seems entirely back to front.

If the atom to be ionised was totally symmetric (as Dr Morshed thinks it should be), then forming an ion from the atom would require disrupting the symmetry. Whereas, by contrast, in the current canonical model, we assume most atoms are not symmetrical, and the formation of simple ions leads to a symmetric distribution of electrons (but unlike in the noble gas atoms, a symmetrical electron distribution which does not balance the nuclear charge).

Dr Morshed illustrates his idea:

Ionic bond formation represented by a non-viable interaction between atoms (Morshed, 2020a, p.10)

Now these images show interactions between discrete atoms (a chemically quite unlikely scenario, as discrete atoms of sodium and chlorine are not readily found) that are energetically non-viable. As has often been pointed out, the energy released when the chloride ion is formed is much less than the energy required to ionise the sodium atom, so although this scheme is very common on the web and in poor quality textbooks, it is a kind of chemical fairy tale that does not relate to any likely chemical context. (See, for example, Salt is like two atoms joined together.)
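The energy bookkeeping behind that point can be sketched with standard data-book values (approximate figures of mine, not anything from the paper):

```python
# Gas-phase electron transfer between isolated atoms is endothermic:
#   Na(g) -> Na+(g) + e-   costs the first ionisation energy,
#   Cl(g) + e- -> Cl-(g)   releases the electron affinity.
IE_Na = 496   # first ionisation energy of sodium, kJ/mol (approx.)
EA_Cl = 349   # electron affinity of chlorine, kJ/mol released (approx.)

net = IE_Na - EA_Cl
print(f"Na(g) + Cl(g) -> Na+(g) + Cl-(g): {net:+} kJ/mol (endothermic)")

# Ionic solids form anyway because the lattice energy released when
# the ions assemble into a crystal (roughly -790 kJ/mol for NaCl)
# far outweighs this cost; the isolated-atom 'electron transfer'
# story told in such figures is not a viable process on its own.
```

So the electron-transfer cartoon only 'works' once the lattice is included, which is exactly why the isolated-atom version is a chemical fairy tale.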

The only obvious difference between these two versions of the fairy tale (if we ignore that in the new version both protons and neutrons appear to be indicated by + signs, which is unhelpful) seems to be that the transferred electron changes its spin for some reason that does not seem to be explained in the accompanying text. The explanation that is given is

My new atomic model with identical electrons pair angle position is able to give logical solution to the problems of ion/ionic bond formation. As follows: The metallic atom which donate electrons during ion formation from outermost orbital, the electrons are arranged maintaining definite degree angle around 360° atomic mass body shown in (Figure 4). After the transformation the transferred electron take position at the vacant place of the transferred atoms outermost orbital, then instant the near most electrons/pairs rearrange their position in the orbital changing their angle position with the CPA [central point of the atom, i.e., the nucleus] due to electromagnetic repulsion feeling among the similar charged electrons/pairs. Thus the ionic atom gets equal electron charge density around whole of their 360° atomic mass body resulting the cation and anion due to the positive and negative charge difference in atomic orbital with their respective nucleus. Thus every ion becomes non polar ion to form ionic bond within two opposite charged ion (Figure 5).

Morshed, 2020, p.9

So, I think, supposedly part (b) of Dr Morshed's figure 5 is meant to show, better than part (a), how the electron distribution is modified when the ion is formed. It would of course be quite possible to show this in the kind of representations used in (a), but in any case it does not look any more obvious in (b) to my eye!

So, figure 5 does not seem to show very well Dr Morshed's solution to a problem I do not think actually exists in the context of a non-viable chemical process. Hm.

Finding space for the forces

Another problem with the conventional models, according to Dr Morshed, as suggested in his figures 6 and 7, is that the current models do not leave space for the 'intermolecular' [sic, intramolecular] force of attraction in covalent bonds.

In current models, according to Morshed's paper, electrons get in the way of the covalent bond (Morshed, 2020, p.11)

Dr Morshed writes that

According to present structural presentation of shared paired electrons remain at the juncture of the bonded atomic orbital, if they remain like such position they will restrict the Inter [sic] Molecular Force (IMF) between the bonded atomic nuclei because the shared paired electron restricts the attraction force lying at the straight attraction line of the bonded nuclei the shown in (Figure 6a).

Morshed, 2020, p.11

There seem to be several alternative conceptions operating here – reflecting some of the kind of confusions reported in the literature from studies on students' ideas.

  1. Just because the images are static two-dimensional representations, this does not mean electrons are envisaged to be stationary at some point on a shell;
  2. Just because we draw representations of atoms on flat paper, this does not mean atoms are flat;
  3. The figure is meant to represent the bond, which is an overall configuration of the nuclei and the electrons, so there is not a distinct intramolecular force operating separately;
  4. Without the electrons there would be no "Inter [sic] Molecular Force (IMF) between the bonded atomic nuclei", as the nuclei repel each other: the bonding electrons do not restrict the intramolecular force by lying between the nuclei and blocking it, but are crucial to its existence.

Regarding the first point here, Dr Morshed suggests

Covalent bonds are formed by sharing of electrons between the bonded atoms and the shared paired electrons are formed by contribution of one electron each of the participating atoms. The shared paired electrons remain at the overlapping chamber (at the juncture of the overlapped atomic orbital).

Morshed, 2020, p.9

That is, according to Dr Morshed's account of current atomic theory, in drawing overlapping electron shells, the electrons of the bond which are 'shared' (and that is just a metaphor, of course) are limited to the area shown as overlapping. This is treating an abstract and simplistic representation as if it is realistic. There is no chamber. Indeed, the molecular orbital formed by the overlap of the atomic orbitals will 'allow' the electrons to be likely to be found within quite a (relatively – on an atomic scale) large volume of space around the bond axis. Atomic orbitals that overlap to form molecular orbitals are in effect replaced by those molecular orbitals – the new orbital geometry reflects the new wavefunction that takes into account both electrons in the orbital.

So, if there has been overlap, the contributing atomic orbitals should be considered to have been replaced (not simply to have formed a chamber where the circles overlap), except of course Dr Morshed's figures 6 and 7 show shells and do not actually represent the system of atomic orbitals.

Double bonds

This same failure to interpret the intentions and limitations of the simplistic form of representation used in introductory school chemistry leads to similar issues when Dr Morshed considers double bonding.

A new model of atomic structure suggests an odd geometry for pi bonds (Morshed, 2020, p.12)

Dr Morshed objects to the kind of representation on the left in his figure 8 as two electron pairs occupy the same area of overlap ('chamber'),

It is shown for an Oxygen molecule; two electron shared pairs are formed and take place at the overlapping chamber result from the outermost orbital of two bonded Oxygen atoms. But in real séance [sic?] that is impossible because two shared paired electrons cannot remain in a single overlapping chamber because of repulsion among each pairs and among individual electrons.

Morshed, 2020, p.12.

Yet, in the model Dr Morshed employs, he had claimed that electron pairs do not repel unless they are aligned to allow a direct line of sight between their CNPs. In any case, the figure he criticises does not show overlapping orbitals, but overlapping L shells. He suggests that the existing models (which of course are not models currently used in chemistry except in introductory classes) imply the double bond in oxygen must be two sigma bonds: "The present structure of O2 molecule show only two pairs of electron with head to head overlapping in the overlapping chamber i.e., two sigma bond together which is impossible" (p.12).

However, this is because a shell-type representation is being used, which is suitable for considering whether a bond is single or double (or triple), but no more. In order to discuss sigma and pi bonds with their geometrical and symmetry characteristics, one must work with orbitals, not shells. 6

Yet Dr Morshed has conflated shells and orbitals throughout his paper. His figure 8a that supposedly shows "Present molecular orbital structural showing two shared paired electrons in the same overlapped chamber" does not represent (atomic, let alone molecular) orbitals, and is not intended to suggest that the space between overlapping circles is some kind of chamber.

"The remaining two opposite spin unpaired electrons in the two bonded [sic?] Oxygen's outer- most orbital [sic, shell?] getting little distorted towards the shared paired electrons in their respective atomic orbital then they feel an attraction among the opposite spin electrons thus they make a bond pairs by side to side overlapping forms the pi-bond"

Morshed, 2020, p.12.

It is not at all clear how this overlap occurs in this representation (i.e., 8b). Moreover, the unpaired electrons will not "feel an attraction" as they are both negatively charged, even if they have anti-parallel spins. The scheme also makes it very difficult to see how the pi bond could have the right symmetry around the bond axis, if the 'new molecular orbital structure' were taken at face value.

Conclusion

Dr Morshed's paper is clearly well meant, but it does not offer any useful new ideas to progress chemistry. It is highly flawed. There is no shame in producing highly flawed manuscripts – no one is perfect, which is why we have peer review to support authors in pointing out weaknesses and mistakes in their work and so allowing them to develop their ideas till they are suitable for publication. Dr Morshed has been badly let down by the publishers and editors of Annals of Atoms and Molecules. I wonder how much he was charged for this lack of service? 7

Publishing a journal paper like this, which is clearly not ready to make a contribution to the scholarly community, not only does a disservice to the author (who will have this publication in the public domain for anyone to evaluate) but can potentially confuse or mislead students who come across the journal. Confusing shells with orbitals, misrepresenting how ionic bonds form, implying that covalent bonds are due to a force between nuclei, suggesting that electron pairs need not repel each other, suggesting a flat model of the atom with four poles… there are many points in this paper that can initiate or reinforce student misconceptions.

Supposedly, this manuscript was handled by a member of the editorial board, sent to peer reviewers and the publication decision based on those review reports. It is hard to imagine any peer reviewer who is actually an academic chemist (let alone an expert in the topics published in this journal) considering this paper would be publishable, even with extensive major revisions. The whole premise of the paper (that simple representations of atoms with concentric shells of electrons reflect the models of atomic and molecular structure used today in chemistry research) is fundamentally flawed. So:

  • were there actually any reviews? (Really?)
  • if so, were the reviews carried out by experts in the field? (Or even graduate chemists or physicists?)
  • were the reviews positive enough to justify publication?

If the journal feels I am being unfair, then I am happy to publish any response submitted as a comment below.

Dr Menaa, Prof. Ibrahim, Prof. Yoon, Dr Parigger, Dr El-Behaedi…

If you were the Board Member who handled this submission and you feel my criticisms are unfair, please feel free to submit a comment. I am happy to publish your response.

Or, if you were not the Board Member who (allegedly) handled this submission, and would like to make that clear…

Works cited:
Note:

1 I thank Professor Eric Scerri of UCLA for bringing my attention to the deliciously named 'Annals of Atoms and Molecules', and this specific contribution.

2 That is my reading of the abbreviation, although the author uses the term a number of times before rather imprecisely defining it: "Similar solution can be made for molecular orbital (MOT) as such as: The molecular orbital (MO) theory…" (p.10).

3 Renormalisation is the name given to a set of mathematical techniques used in areas such as quantum field theory to 'lose' unwanted infinities when calculations give implausible infinite results. Whilst this might seem like cheating, it is tolerated as it works very well.

4 I was intrigued that 'Prof.' Farid Menaa seemed to work for a non-academic institution, as generally companies cannot award the title of Professor. Of course, Prof. Menaa may also have an appointment at a university that partners the company, or could have emeritus status having retired from academia.

I found him profiled on another publisher's site as "Professor, Principal Investigator, Director, Consultant Editor, Reviewer, Event Organizer and Entrepreneur,…" who had worked in oncology, dermatology, haematology (when "he pioneered new genetic variants of stroke in sickle cell anemia patients", which presumably is much more positive than it reads). Reading on, I found he had 'followed' complementary formations in "Medecine [sic], Pharmacy, Biology, Biochemistry, Food Sciences and Technology, Marine Biology, Chemistry, Physics, Nano-Biotechnology, Bio-Computation, and Bio-Statistics" and was "involved in various R&D projects in multiple areas of medicine, pharmacy, biology, genetics, genomics, chemistry, biophysics, food science, and technology". All of which seemed very impressive (nearly as wide a range of expertise as predatory journal publishers claim for me), but made me none the wiser about the source of his Professorial title.

5 Today. Although interestingly, in the first major comprehensive account of magnetism, Gilbert (1600/2016) tended to draw the North-South axis of the earth horizontally in his figures.

6 The representations we draw are simple depictions of something more subtle. If the circles did represent orbitals then they could not show the entire volume of space where the electron might be found (as this is theoretically infinite) but rather an envelope enclosing a volume where there is the highest probability (or 'electron density'). So orbitals will actually overlap to some extent even when simple images suggest otherwise.

7 I wonder because the appropriate page, https://scholars.direct/publication-charges.php, "was not found on this server" when I looked to see.

What shape should a research thesis be?

Being flummoxed by a student question was the inspiration for a teaching metaphor

Keith S. Taber

An artist's impression of the author being lost for words (Image actually by Christian Dorn from Pixabay)

In my teaching on the 'Educational Research' course I used to present a diagram of a shape something like the lemniscate – the infinity symbol, ∞ – and tell students that was the shape their research project and thesis should take. I would suggest this was a kind of visual metaphor.

This may seem a rather odd idea, but I was actually responding to a question I had previously been asked by a student. Albeit, this was a rather deferred response.

'Lost for words'

As a teacher one gets asked all kinds of questions. I've often suggested that preparing for teaching is more difficult than preparing for an examination. When taking an examination it is usually reasonable to assume that the examination questions have been set by experts in the subject.

A candidate therefore has a reasonable chance of foreseeing at least the general form of the questions that might be asked. There is usually a syllabus or specification which gives a good indication of the subject matter and the kinds of skills expected to be demonstrated – and usually there are past papers (or, if not, specimen papers) giving examples of what might be asked. The documentation reflects some authority's decisions about the bounds of the subject being examined (e.g., what counts as included in 'chemistry' or whatever), the selection of topics to be included in the course, and the level of treatment expected at this level of study (Taber, 2019). Examiners may try to find novel applications and examples and contexts – but good preparation should avoid the candidate ever being completely stumped and having no basis to try to develop a response.

However, teachers are being 'examined', so to speak, by people who by definition are not experts and so may be approaching a subject or topic from a wide range of different perspectives. In science teaching, one of the key issues is that students do not simply come to class ignorant about topics to be studied, but often bring a wide range of existing ideas and intuitions ('alternative conceptions') that may match, oppose, or simply be totally unconnected with, the canonical accounts.

Read about alternative conceptions

This can happen in any subject area. But a well prepared teacher, even if never able to have ready answers to all questions or suggestions learners might offer, will seldom be lost for words and have no idea how to answer. But I do recall an occasion when I was indeed flummoxed.

I was in what is known as the 'Street' in the main Faculty of Education Building (the Donald McIntyre Building) at Cambridge at a time when students were milling about as classes were just ending and starting. Suddenly out of the crowd a student I recognised from teaching the Educational Research course loomed at me and indicated he wanted to talk. I saw he was clutching a hardbound A4 notebook.

We moved out of the melee to an area where we could talk. He told me he had a pressing question about the dissertation he had to write for his M.Phil. programme.

"What should the thesis look like?"

His question sounded simple enough – "What should the thesis look like?"

Now at one level I had an answer – it should be an A4 document that would be eventually bound in blue cloth with gold lettering on the spine. However, I was pretty sure that was not what he meant.

What does a thesis look like?

I said I was not sure what he meant. He opened his notebook at a fresh double page and started sketching, asking me "Should the thesis look like this?" as he drew a grid on one page of his book. Whilst I was still trying to make good sense of this option, he started sketching on the facing page. "Or, should it look like this?"

I have often thought back to this exchange as I was really unsure how to respond. He seemed no more able to explain these suggestions than I was able to appreciate how these representations related to my understanding of the thesis. As I looked at the first option I was starting to think in terms of the cells as perhaps being the successive chapters – but the alternative option seemed to undermine this. For, surely, if the question was about whether to have 6 or 8 chapters – a question that has no sensible answer in abstract without considering the specific project – it would have been simpler just to pose the question verbally. Were the two columns (if that is what they were) meant to be significant? Were the figures somehow challenging the usual linear nature of a thesis?

I could certainly offer advice on structuring a thesis, but as a teacher – at least as the kind of constructivist teacher I aspired to be – I failed here. I was able to approach the topic from my own perspective, but not to appreciate the student's own existing conceptual framework and work from there. This is of course what research suggests teachers usually need to do to help learners with alternative conceptions shift their thinking.

Afterwards I would remember this incident (in a way I cannot recall the responses I gave to student questions on hundreds of other occasions) and reflect on it – without ever appreciating what the student was thinking. I know the student had a background in a range of artistic fields including as a composer – and I wondered if this was informing his thinking. Perhaps if I had studied music at a higher level I might have appreciated the question as being along the lines of, say, whether the thesis should be, metaphorically speaking, in sonata form or better seen as a suite?

I think it was because the question played on my mind that later, indeed several years later, I had the insight that 'the thesis' (a 'typical' thesis) did not look like either of those rectangular shapes, but rather more like the lemniscate:

A visual metaphor for a thesis project (after Taber, 2013)

The focus of a thesis

My choice of the lemniscate was because its figure-of-eight nature made it two loops which are connected by a point – which can be seen as some kind of focal point of the image:

A thesis project has a kind of focal point

This 'focus' represents the research question or questions (RQ). The RQ are not the starting point of most projects, as good RQ have to be carefully chosen and refined, and that usually takes a lot of reading around a topic.

However, they act as a kind of fulcrum around which the thesis is organised because the sections of the thesis leading up to the RQ are building up to them – offering a case for why those particular questions are interesting, important, and so-phrased. And everything beyond that point reflects the RQ, as the thesis then describes how evidence was collected and analysed in order to try to answer the questions.

Two cycles of activity

A thesis project cycles through expansive and focusing phases

Moreover, the research project described in a thesis reflects two cycles of activity.

The first cycle has an expansive phase where the researcher is reading around the topic, and exposing themselves to a wide range of literature and perspectives that might be relevant. Then, once a conceptual framework is developed from this reading (in the literature review), the researcher focuses in, perhaps selecting one of several relevant theoretical perspectives, and informed by prior research and scholarship, crystallises the purpose of the project in the RQ.

Then the research is planned in order to seek to answer the RQ, which involves selecting or developing instruments, going out and collecting data – often quite a substantial amount of data. After this expansive phase, there is another focusing stage. The collected data is then processed into evidence – interpreted, sifted, selected, summarised, coded and tallied, categorised, and so forth – in analysis. The data analysis is summarised in the results, allowing conclusions to be formed: conclusions which reflect back to the RQ.

The lemniscate, then, acts as a simple visual metaphor that I think is a useful device for symbolising some important features of a research project, and so, in one sense at least, what a thesis 'looks' like. If any of my students (or readers) have found this metaphor useful then they have benefited from a rare occasion when a student question left me lost for words.

Work cited:

Of opportunistic viruses and meat-eating bees

The birds viruses and the bees do it: Let's do it, let's…evolve

Keith S. Taber

bees that once were vegetarian actually decided to change their ways…

this group of bees realised that there's always animals that are dying and maybe there's enough competition on the flowers [so] they decided to switch

How the vulture bee got its taste for meat

I was struck by two different examples of anthropomorphism that I noticed in the same episode of the BBC's Science in Action radio programme/podcast.

Science in Action episode broadcast 5th December 2021

Anthropomorphism in science?

Anthropomorphism is the name given to treating non-human entities as if they were human actors. An example of anthropomorphic language would be "the atom wants to donate an electron so that it can get a full outer shell" (see for example: 'A sodium atom wants to donate its electron to another atom'). In an example such as that, an event that would be explained in terms of concepts such as force and energy in a scientific account (the ionisation of an atom) is instead described as if the atom is a conscious agent that is aware of its status, has preferences, and acts to bring about desired ends.

Read about Anthropomorphism

Of course, an atom is not a complex enough entity to have mental experience that allows it to act deliberately in the world, so why might someone use such language?

  • Perhaps, if the speaker was a young learner, because they have not been taught the science.
  • Perhaps a non-scientist might use such language because they can only make sense of the abstract event in more familiar terms.

But what if the speaker was a scientist – a science teacher or a research scientist?

When fellow professionals (e.g., scientists) talk to each other they may often use a kind of shorthand that is not meant to be taken literally (e.g., 'the molecule wants to be in this configuration') simply because it can shorten and simplify more technical explanations that both parties understand. But when a teacher is talking to learners or a scientist is trying to explain their ideas to the general public, something else may be going on.

Read about Anthropomorphism in public science discourse

Anthropomorphism in science communication and education

In science teaching or science communication (scientists communicating science to the public) there is often a need to present abstract or complex ideas in ways that are accessible to the audience. At one level, teaching is about shifting what is to be taught from being unfamiliar to learners to being familiar, and one way to 'make the unfamiliar familiar' is to show it is in some sense like something already familiar.

Therefore there is much use of simile and analogy, and of telling stories that locate the focal material to be learned within a familiar narrative. Anthropomorphism is often used in this way. Inanimate objects may be said to want or need or try (etc.) as the human audience can relate to what it is to want or need or try.

Such techniques can be very useful to introduce novel ideas or phenomena in ways that are accessible and/or memorable ('weak anthropomorphism'). However, sometimes the person receiving these accounts may not appreciate their figurative nature as pedagogic / communicative aids, and may mistake what is meant to be no more than a starting point, a way into a new topic or idea, as being the scientific account itself. That is, these familiarisation techniques can work so well that the listener (or reader) may feel satisfied with them as explanatory accounts ('strong anthropomorphism').

Evolution – it's just natural (selection)

A particular issue arises with evolution, when often science only has hypothetical or incomplete accounts of how and why specific features or traits have been selected for in evolution. It is common for evolution to be misunderstood teleologically – that is, as if evolution was purposeful and nature has specific end-points in mind.

Read about teleology

The scientific account of evolution is natural selection, where none of genes, individual specimens, populations or species are considered to be deliberately driving evolution in particular directions (present company excepted perhaps – as humans are aware of evolutionary processes, and may be making some decisions with a view to the long-term future). 1

Yet describing evolutionary change in accord with the scientific account tends to need complex and convoluted language (Taber, 2017). Teleological and anthropomorphic shorthand is easier to comprehend – even if it puts a burden on the communicatee to translate the narrative into a more technical account.

What the virus tries to do

The first example from the recent Science in Action episode related to the COVID pandemic, and the omicron variant of the SARS-CoV-2 virus. This was the lead story on the broadcast/podcast, in particular how the travel ban imposed on Southern Africa (a case of putting the lid on the Petri dish after the variant had bolted?) was disrupting supplies of materials needed to address the pandemic in the countries concerned.

This was followed by a related item:

"Omicron contains many more mutations than previous variants. However scientists have produced models in the past which can help us understand what these mutations do. Rockefeller University virologist Theodora Hatziioannou produced one very similar to Omicron and she tells us why the similarities are cause for concern."

https://www.bbc.co.uk/programmes/w3ct1l4p

During this item, Dr Theodora Hatziioannou noted:

"When you give the virus the opportunity to infect so many people, then of course it is going to try not only every possible mutation, but every possible combination of mutations, until it finds one that really helps it overcome our defences."

Dr Theodora Hatziioannou interviewed on Science in Action

Dr Theodora Hatziioannou
Research Associate Professor
Laboratory of Retrovirology
The Rockefeller University

I am pretty sure that Dr Hatziioannou does not actually think that 'the virus' (which of course is composed of myriad discrete virus particles) is trying out different mutations intending to stop once it finds one which will overcome human defences. I would also be fairly confident that in making this claim she was not intending her listeners to understand that the virus had a deliberate strategy and was systematically working its way through a plan of action. A scientifically literate person should readily interpret the comments in a natural selection framework (e.g., 'random' variation, fitness, differential reproduction). In a sense, Dr Hatziioannou's comments may be seen as an anthropomorphic analogy – presenting the 'behaviour' of the virus (collectively) by analogy with human behaviour.

Yet, as a science educator, such comments attract my attention as I am well aware that school age learners and some adult non-scientists may well understand evolution to work this way. Alternative conceptions of natural selection are very common. Even when students have been taught about natural selection they may misunderstand the process as Lamarckian (the inheritance of acquired characteristics – see for example 'The brain thinks: grow more fur'). So, I wonder how different members of the public hearing this interview will understand Dr Hatziioannou's analogy.

Even before COVID-19 came along, there was a tendency for scientists to describe viruses in such terms as 'smart', 'clever' and 'sneaky' (e.g., 'So who's not a clever little virus then?'). The COVID pandemic seems to have unleashed a (metaphorical) pandemic of public comments about what the virus wants, and what it tries to achieve, and so forth. When a research scientist talks this way, I am fairly sure it is intended as figurative language. I am much less sure when, for example, I hear a politician telling the public that the virus likes cold weather ('What COVID really likes').

Vulture bees have the guts for it

The other item that struck me concerned vulture bees.

"Laura Figueroa from University of Massachusetts in Amhert [sic] in the US, has been investigating bees' digestive systems. Though these are not conventional honey bees, they are Costa Rican vulture bees. They feed on rotting meat, but still produce honey."

https://www.bbc.co.uk/programmes/w3ct1l4p
Bees do not actually make reasoned choices about their diets
(Original image by Oldiefan from Pixabay)

The background is that although bees are considered (so I learned) to have evolved from wasps, and to all have become vegetarians, there are a few groups of bees that have reverted to the more primitive habits of eating meat. To be fair to them, these bees are not cutting down the forests to set up pasture and manage livestock, but rather take advantage of the availability of dead animals in their environment as a source of protein.

These vulture bees (or carrion bees) are able to do this because their gut microbiomes consist of a mix of microbes that can support them in digesting meat, allowing them to be omnivores. This raises the usual kind of 'chicken and egg' question 1 thrown up by evolutionary developments: how did vegetarian bees manage to shift their diet: the more recently acquired microbes would not have been useful or well-resourced whilst the bees were still limiting themselves to a plant-based diet, but the vegetarian bees would not have been able to digest carrion before their microbiomes changed.

As part of the interview, Dr Figueroa explained:

"These are more specialised bees that once they were vegetarian for a really long time and they actually decided to change their ways, there's all of this meat in the forest, why not take advantage? I find that super-fascinating as well, because how do these shifts happen?

Because the bees, really when we are thinking about them, they've got access to this incredible resource of all of the flowering plants that are all over the world, so then why switch? Why make this change?

Over evolutionary time there are these mutations, and, you know, maybe they'd have got an inkling for meat, it's hard to know how exactly that happened, but really because it is a constant resource in the forest, there's always, you know, this might sound a little morbid but there's always animals that are dying and there's always this turn over of nutrients that can happen, and so potentially this specialised group of bees realised that, and maybe there's enough competition on the flowers that they decided to switch. Or, they didn't decide, but it happened over evolutionary time."

Dr Laura Figueroa interviewed on Science in Action

Dr Figueroa does not know exactly how this happened – more research is needed. I am sure Dr Figueroa does not think the bees decided to change their ways in the way that a person might decide to change their ways – perhaps deciding to get more exercise and go to bed earlier for the sake of their health. I am also sure Dr Figueroa does not think the bees realised that there was so much competition feeding on the flowers that it might be in their interests to consider a change of diet, in the way that a person might decide to change strategy based on an evaluation of the competition. These are anthropomorphic figures of speech.

Dr Laura Figueroa, NSF Postdoctoral Research Fellow in Biology
Department of Entomology, Cornell University / University of Massachusetts in Amherst

As she said "they didn't decide, but it happened over evolutionary time". Yet it seems so natural to use that kind of language, that is to frame the account in a narrative that makes sense in terms of how people experience their lives.

Again, the scientifically literate should appreciate the figurative use of language for what it is, and it is difficult to offer an accessible account without presenting evolutionary change as purposive and the result of deliberation and strategy. Yet, I cannot help wondering if this kind of language may reinforce some listeners' alternative conceptions about how natural selection works.

Work cited:
Notes

1 The 'selfish' gene made famous by Dawkins (1976/1989) is not really selfish in the sense a person might be – rather this was an analogy which helped shift attention from changes at the individual or species level when trying to understand how evolution occurs, to changes at the level of distinct genes. If a mutation in a specific gene leads to a change in the carrying organism that (in turn) leads to that specimen having greater fitness, then the gene itself has an increased chance of being replicated. So, from the perspective of focusing on the genes, the change at the species level can be seen as a side effect of the 'evolution' of the gene. The gene may be said to be (metaphorically) selfish because it does not change for the benefit of the organism, but to increase its own chances of being replicated. Of course, that is also an anthropomorphic narrative – actually the gene does not deliberately mutate, has no purpose, has no notion of replication, indeed, does not even 'know' it is a gene, and so forth.

2 Such either/or questions can be understood as posing false dichotomies (here, either the bees completely changed their diets before their microbiomes or their microbiomes changed dramatically before their diets shifted) when what often seems most likely is that change has been slow and gradual.

Climate change – either it is certain OR it is science

Is there a place for absolute certainty in science communication?

Keith S. Taber

I just got around to listening to the podcast of the 10th October episode of Science in Action. This was an episode entitled 'Youngest rock samples from the moon' which led with a story about rock samples collected on the moon and brought to earth by a Chinese mission (Chang'e-5). However, what caused me to, metaphorically at least, prick up my ears was a reference to "absolute certainty".

Now the tag line for Science in Action is "The BBC brings you all the week's science news". I think that phrase reveals something important about science journalism – it may be about science, but it is journalism, not science.

That is not meant as some kind of insult. But science in the media is not intended as science communication between scientists (they have journals and conferences and so forth), but science communicated to the public – which means it has to be represented in a form suitable for a general, non-specialist audience.

Read about science in public discourse and the media

Scientific and journalistic language games

For, surely, "all the week's science news" cannot be covered in one half-hour broadcast/podcast. 1

My point is that "The BBC brings you all the week's science news" is not intended to be understood and treated as a scientific claim, but as something rather different. As Wittgenstein (1953/2009) famously pointed out, language has to be understood in specific contexts, and there are different 'language games'. So, in the genre of the scientific report there are particular standards and norms that apply to the claims made. Occasionally these norms are deliberately broken – perhaps a claim is made that is supported by fabricated evidence, or for which there is no supporting evidence – but this would be judged as malpractice, academic misconduct or at least incompetence. It is not within the rules of that game.

However, the BBC's claim is part of a different 'language game' – no one is going to be accused of professional misconduct because, objectively, Science in Action does not bring a listener all the week's science news. The statement is not intended to be understood as an objective knowledge claim, but more a kind of motto or slogan; it is not to be considered 'false' because it is not objectively correct. Rather, it is to be understood in a fuzzy, vague, impressionistic way.

To ask whether "The BBC brings you all the week's science news" through Science in Action is a true or false claim would be a kind of category error. The same kind of category error that occurs if we ask whether or not a scientist believes in the ideal gas law, the periodic table or models of climate change.

Who invented gravity?

This then raises the question of how we understand what professional academic scientists say on a science news programme that is part of the broadcast media in conversation with professional journalists. Are they, as scientists, engaged in 'science speak', or are they as guests on a news show engaged in 'media speak'?

What provoked this thought were comments by Dr Fredi Otto, who appeared on the programme "to discuss the 2021 Nobel Prizes for Science". In particular, I was struck by two specific comments. The second was:

"…you can't believe in climate change or not, that would just be, you believe in gravity, or not…"

Dr Friederike Otto speaking on Science in Action

Which I took to mean that gravity is so much part of our everyday experience that it is taken-for-granted, and it would be bizarre to have a debate on whether it exists. There are phenomena we all experience all the time that we explain in terms of gravity, and although there may be scope for debate about gravity's nature or its mode of action or even its universality, there is little sense in denying gravity. 2

Newton's notion of gravity predominated for a couple of centuries, but when Einstein proposed a completely different understanding, this did not in any sense undermine the common ('life-world' 2) experience labelled as gravity – what happens when we trip over, or drop something, or the tiring experience of climbing too many steps. And, of course, the common misconception that Newton somehow 'discovered' gravity is completely ahistorical as people had been dropping things and tripping over and noticing that fruit falls from trees for a very long time before Newton posited that the moon was in freefall around the earth in a way analogous to a falling apple!

Believing in gravity

Even if, in scientific terms, believing in a Newtonian conceptualisation of gravity as a force acting at a distance would be to believe something that was no longer considered the best scientific account (in a sense the 'force' of gravity becomes a kind of epiphenomenon in a relativistic account of gravity); in everyday terms, believing in the phenomenon of gravity (as a way of describing a common pattern in experience of being in the world) is just plain common sense.

Dr Otto seemed to be suggesting that just as gravity is a phenomenon that we all take for granted (regardless of how it is operationalised or explained scientifically), so should climate change be. That might be something of a stretch as the phenomena we associate with gravity (e.g., dense objects falling when dropped, ending up on the floor when we fall) are more uniform than those associated with climate change – which is of course why one tends to come across more climate change deniers than gravity deniers. To the best of my knowledge, not even Donald Trump has claimed there is no gravity.

But the first comment that gave me pause for thought was:

"…we now can attribute, with absolute certainty, the increase in global mean temperature to the increase in greenhouse gases because our burning of fossil fuels…"

Dr Friederike Otto speaking on Science in Action
Dr Fredi Otto has a profile page at the The Environmental Change Unit,
University of Oxford

Absolute certainty?

That did not seem to me like a scientific statement – more like the kind of commitment associated with belief in a religious doctrine. Science produces conjectural, theoretical knowledge, but not absolute knowledge?

Surely, absolute certainty is limited to deductive logic, where proofs are possible (as in mathematics, where conclusions can be shown to inevitably follow from statements taken as axioms – as long as one accepts the axioms, then the conclusions must follow). Science deals with evidence, but not proof, and is always open to being revisited in the light of new evidence or new ways of thinking about things.

Read about the nature of scientific knowledge

Science is not about belief

For example, at one time many scientists would have said that the presence of an ether 3 was beyond question (as, for example, waves of light travelled from the sun to earth, and wave motion requires a medium). Its scientific characterisation – e.g., the precise nature of the ether, its motion relative to the earth – was open to investigation, but its existence seemed pretty secure.

It seemed inconceivable to many that the ether might not exist. We might say it was beyond reasonable doubt. 4 But now the ether has gone the way of caloric and phlogiston and N-rays and cold fusion and the four humours… It may have once been beyond reasonable doubt to some (given the state of the evidence and the available theoretical perspectives), but it can never have been 'absolutely certain'.

To suggest something is certain may open us to look foolish later: as when Wittgenstein himself suggested that we could be certain that "our whole system of physics forbids us to believe" that people could go to the moon.

Science is the best!

Science is the most reliable and trustworthy approach to understanding the natural world, but a large part of that strength comes from it never completely closing a case for good – from never claiming to have provided absolute certainty. Science can be self-correcting because no scientific idea is 'beyond question'. That is not to say that we abandon, say, conservation of energy at the suggestion of the first eccentric thinker with designs for a perpetual motion machine – but in principle even the principle of conservation of energy should not be considered as absolutely certain. That would be religious faith, not scientific judgement.

So, we should not believe. It should not be considered absolutely certain that "the increase in global mean temperature [is due to] the increase in greenhouse gases because [of] our burning of fossil fuels", as that suggests we should believe it as a doctrine or dogma, rather than believe that the case is strong enough to make acting accordingly sensible. That is, if science is always provisional, technically open to review, then we can never wait for absolute certainty before we act, especially when something seems beyond reasonable doubt.

You should not believe scientific ideas

The point is that certainty and belief are not really the right concepts in science, and we should avoid them in teaching science:

"In brief, the argument to be made is that science education should aim for understanding of scientific ideas, but not for belief in those ideas. To be clear, the argument is not just that science education should not intend to bring about belief in scientific ideas, but rather that good science teaching discourages belief in the scientific ideas being taught."

Taber, 2017: 82

To be clear – to say that we do not want learners to believe in scientific ideas is NOT to say we want them to disbelieve them! Rather, belief/disbelief should be orthogonal to the focus on understanding ideas and their evidence base.

I suggested above that to ask whether "The BBC brings you all the week's science news" through Science in Action is a true or false claim would be a kind of category error. I would suggest it is a category error in the same sense as asking whether or not people should believe in the ideal gas law, the periodic table, or models of climate change.

"If science is not about belief, then having learners come out of science lessons believing in evolution, or for that matter believing that magnetic field lines are more concentrated near the poles of a magnet, or believing that energy is always conserved, or believing that acidic solutions contain solvated hydrogen ions,[5] misses the point. Science education should help students understand scientific ideas, and appreciate why these ideas are found useful, and something of their status (for example when they have a limited range of application). Once students can understand the scientific ideas then they become available as possible ways of thinking about the world, and perhaps as notions under current consideration as useful (but not final) accounts of how the world is."

Taber, 2017: 90

But how do scientists cross the borders from science to science communication?

Of course many scientists who have studied the topic are very convinced that climate change is occurring and that anthropogenic inputs into the atmosphere are a major or the major cause. In an everyday sense, they believe this (and as they have persuaded me, so do I). But in a strictly logical sense they cannot be absolutely certain. And they can never be absolutely certain. And therefore we need to act now, and not wait for certainty.

I do not know if Dr Otto would refer to 'absolute certainty' in a scientific context such as a research paper or a conference presentation. But a radio programme for a general audience – all ages, all levels of technical background, all degrees of sophistication in appreciating the nature of science – is not a professional scientific context, so perhaps a different language game applies. Perhaps scientists have to translate their message into a different kind of discourse to get their ideas across to the wider public?

The double bind

My reaction to Dr Otto's comments derived from a concern with public understanding of the nature of science. Too often learners think scientific models and theories are meant to be realistic absolute descriptions of nature. Too often they think science readily refutes false ideas and proves the true ones. Scientists talking in public about belief and absolute certainty can reinforce these misconceptions.

On the other hand, there is probably nothing more important that science can achieve today than persuade people to act to limit climate change before we might bring about shifts that are (for humanity if not for the planet) devastating. If most people think that science is about producing absolutely certain knowledge, then any suggestion that there is uncertainty over whether human activity is causing climate change is likely to offer the deniers grist, and encourage a dangerous 'well let's wait till we know for sure' posture. Even when it is too late and the damage has been done, if there are any scientists left alive, they still will not know with absolute certainty what caused the changes.

"…Lord, here comes the flood
We'll say goodbye to flesh and blood
If again the seas are silent
In any still alive
It'll be those who gave their island to survive
…"

(Peter Gabriel performing on the Kate Bush TV special, 1979: BBC Birmingham)

So, perhaps climate scientists are in a double bind – they can represent the nature of science authentically, and have their scientific claims misunderstood; or they can do what they can to get across the critical significance of their science, but in doing so reinforce misconceptions of the nature of scientific knowledge.

Coda

I started drafting this yesterday: Thursday. By coincidence, this morning, I heard an excellent example of how a heavyweight broadcast journalist tried to downplay a scientific claim because it was couched as not being absolutely certain!

Works cited:

Notes

1 An alternative, almost tautological, interpretation – that 'science news' is simply whatever the BBC decides to include in Science in Action – might fit some critics' complaints that the BBC can be a very arrogant and self-important organisation; if only because there are stories not covered in Science in Action that do get covered in the BBC's other programmes such as BBC Inside Science.

2 This might be seen as equivalent to saying that the life-world claim that gravity (as commonly understood and experienced) exists is taken-for-granted (Schutz & Luckmann, 1973). A scientific claim would be different, as gravity would need to be operationally defined in terms that were considered objective, rather than just assuming that everyone in the same language community shares a meaning for 'gravity'.

3 The 'luminiferous' aether or ether. The ether was the name given to the fifth element in the classical system, where sublunary matter was composed of four elements (earth, water, air, fire) and the perfect heavens of a fifth.

(Film director Luc Besson's sci-fi/fantasy movie 'The Fifth Element' {1997, Gaumont Film Company} borrows from this idea very loosely: Milla Jovovich was cast in the title role as a perfect being who is brought to earth to be reunited with the other four elements in order to save the world.)

4 Arguably the difference between forming an opinion on which to base everyday action (everyday as in whether to wear a rain coat, or to have marmalade on breakfast toast, not as in whether to close down the global fossil fuel industry), and proposing formal research conclusions can be compared to the difference between civil legal proceedings (decided on the balance of probabilities – what seems most likely given the available evidence) and criminal proceedings – where a conviction is supposed to depend upon guilt being judged beyond reasonable doubt given the available evidence (Taber, 2013).

Read about writing-up research

5 Whether acids do contain hydrated hydrogen ions may seem something that can reasonably be determined, at least beyond reasonable doubt, by empirical investigation. But actually not, as what counts as an acid has changed over time as chemists have redefined the concept according to what seemed most useful. (Taber, 2019, Chapter 6: Conceptualising acids: Reimagining a class of substances).

Move over Mendeleev, here comes the new Mendel

Seeking the islets of Filipenka Henadzi


Keith S. Taber


"new chemical elements with atomic numbers 72-75 and 108-111 are supposedly revealed, and also it is shown that for heavy elements starting with hafnium, the nuclei of atoms contain a larger number of protons than is generally accepted"

Henadzi, 2019, p.2

Somehow I managed to miss a 2019 paper bringing into doubt the periodic table that is widely used in chemistry. It was suggested that many of the heavier elements actually have higher atomic numbers (proton numbers) than had long been assumed, with the consequence that when these elements are correctly re-positioned it reveals two runs of elements that should be in the periodic table, but which till now have not been identified by chemists.

According to Henadzi we need to update the periodic table and look for eight missing elements (original image by Gerd Altmann from Pixabay)

Henadzi (2019) suggests that "I would like to name groups of elements with the numbers 72-75 and 108-111 [that is, those not yet identified that should have these numbers], the islets of Filipenka Henadzi."

The original Mendeleev

This is a bit like being taken back to when Dmitri Mendeleev first proposed his periodic table and had the courage to organise elements according to patterns in their properties, even though this left gaps that Mendeleev predicted would be occupied by elements yet to be discovered. The success of (at least some of) his predictions is surely the main reason why he is considered the 'father' of the periodic table, even though others were experimenting with similar schemes.

Now it has been suggested that we still have a lot of work to do to get the periodic table right, and that the version that chemists have used (with some minor variations) for many decades is simply wrong. This major claim (which would surely be considered worthy of the Nobel prize if found correct) was not published in Nature or Science or one of the prestigious chemistry journals published by learned societies such as the Royal Society of Chemistry, but in an obscure journal that I suspect many chemists have never heard of.

The original Mendel

This is reminiscent of the story of Mendel's famous experiments with inheritance in pea plants. Mendel's experiments are now seen as seminal in establishing core ideas of genetics. But Mendel's research was ignored for many years.

He presented his results at meetings of the Natural History Society of Brno in 1865 and then published them in a local German language journal – and his ideas were ignored. Only after other scientists rediscovered 'his' principles in 1900, long after his death, was his work also rediscovered.

Moreover, the discussion of this major challenge to accepted chemistry (and physics if I have understood the paper) is buried in an appendix of a paper which is mostly about the crystal structures of metals. It seems the appendix includes a translation of work previously published in Russian, explaining why, oddly, a section part way through the appendix begins "This article sets out the views on the classification of all known chemical elements, those fundamental components of which the Earth and the entire Universe consists".

Calling out 'predatory' journals

I have been reading some papers in a journal that I believed, on the basis of its misleading title and website details, was an example of a poor-quality 'predatory journal'. That is, a journal which encourages submissions simply to be able to charge a publication fee (currently $1519, according to the website), without doing the proper job of editorial scrutiny. I wanted to test this initial evaluation by looking at the quality of some of the work published.

One of the papers I decided to read, partly because the topic looked of particular interest, was 'Nature of Chemical Elements' (Henadzi, 2019). Most of the paper is concerned with the crystal structures of metals, and presenting a new model to explain why metals have the structure they do. This is related to the number of electrons per atom that can be considered to be in the conduction band – something that was illustrated with a simple diagram that unfortunately, to my reading at least, was not sufficiently elaborated.1

The two options illustrated seem to refer to n-type conduction (movement of electrons) and p-type conduction (movement of electrons that can be conceptualised as movement of a {relatively} positive hole, as in semiconductor materials) – Figure 1 from Henadzi, 2019: p2

However, what really got my attention was the proposal for revising the periodic table and seeking eight new elements that chemists have so far missed.

Beyond Chadwick

Henadzi tells readers that

"The innovation of this work is that in the table of elements constructed according to the Mendeleyev's law and Van-den- Broek's rule [in effect that atomic number in the periodic table = proton number], new chemical elements with atomic numbers 72-75 and 108-111 are supposedly revealed, and also it is shown that for heavy elements starting with hafnium, the nuclei of atoms contain a larger number of protons than is generally accepted. Perhaps the mathematical apparatus of quantum mechanics missed some solutions because the atomic nucleus in calculations is taken as a point."

Henadzi, 2019, p.4

Henadzi explains

"When considering the results of measuring the charges of nuclei or atomic numbers by James Chadwick, I noticed that the charge of the core of platinum is rather equal not to 78, but to 82, which corresponds to the developed table. For almost 30 years I have raised the question of the repetition of measurements of the charges of atomic nuclei, since uranium is probably more charged than accepted, and it is used at nuclear power plants."

Henadzi, 2019, p.4

Now Chadwick is most famous for discovering the neutron – back in 1932. So he was working a long time ago, when atomic theory was still quite underdeveloped, and with apparatus that would seem pretty primitive compared with the kinds of set-up used today to investigate the fundamental structure of matter. That is, it is hardly surprising if his work, seminal as it was nearly a century ago, had limitations. Henadzi, however, seems to feel that Chadwick's experiments reveal atomic numbers more accurately than has generally been appreciated.

Sadly, Henadzi does not cite any specific papers by Chadwick in his reference list, so it is not easy to look up the original research he is discussing. But if Henadzi is suggesting that data produced almost a century ago can be interpreted as giving some elements different atomic numbers to those accepted today, the obvious question is what other work, since then, establishes the accepted values, and why it should not be trusted. Henadzi does not discuss this.

Explaining a long-standing mystery

Henadzi points out that whereas for the lighter elements the mass number is about twice the atomic number (that is, the number of neutrons in a nucleus approximately matches the number of protons), as one proceeds through the periodic table this changes, such that the proton:neutron ratio shifts to give an increasing excess of neutrons. Henadzi also implies that this is a long-standing mystery, now perhaps solved.

"Each subsequent chemical element is different from the previous in that in its core the number of protons increases by one, and the number of neutrons increases, in general, several. In the literature this strange ratio of the number of neutrons to the number of protons for any the kernel is not explained. The article proposes a model nucleus, explaining this phenomenon."

Henadzi, 2019, p.5

Now what surprised me here was not the pattern itself (something taught in school science) but the claim that the reason was not known. My, perhaps simplistic, understanding is that protons repel each other because of their similar positive electrical charges, although the strong nuclear force binds nucleons (i.e., protons and neutrons collectively) into nuclei and can overcome this.

Certainly what is taught in schools is that as the number of protons increases more neutrons are needed to be mixed in to ensure overall stability. Now I am aware that this is very much an over-simplification, what we might term a curriculum model or teaching model perhaps, but what Henadzi is basically suggesting seems to be this very point, supplemented by the idea that as the protons repel each other they are usually found at the outside of the nucleus alongside an equal number of neutrons – with any additional neutrons within.
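The pattern in question is easy to see with a few familiar nuclides. Here is a minimal sketch (my own illustration, using standard textbook values, not data from Henadzi's paper) of how the neutron:proton ratio grows from about 1.0 for light elements to over 1.5 for the heaviest:

```python
# Illustrative sketch: neutron excess in heavier nuclei.
# Each entry is (atomic number Z, mass number A) for a well-known nuclide.
nuclides = {
    "carbon-12":   (6, 12),
    "calcium-40":  (20, 40),
    "iron-56":     (26, 56),
    "tin-120":     (50, 120),
    "lead-208":    (82, 208),
    "uranium-238": (92, 238),
}

for name, (z, a) in nuclides.items():
    n = a - z  # neutron number N = A - Z
    print(f"{name:>12}: Z={z:3d}, N={n:3d}, N/Z = {n / z:.2f}")
```

For carbon-12 and calcium-40 the ratio is exactly 1.00, but it climbs to about 1.54 for lead-208 and about 1.59 for uranium-238 – the 'increasing excess of neutrons' referred to above.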

The reason for not only putting protons on the outer shell of a large nucleus in Henadzi's model seems to relate to the stability of alpha particles (that is, clumps of two protons and two neutrons, as in the relatively stable helium nucleus). Or, at least, that was my reading of what is being suggested:

"For the construction of the [novel] atomic nucleus model, we note that with alpha-radioactivity of the helium nucleus is approximately equal to the energy.

Therefore, on the outer layer of the core shell, we place all the protons with such the same number of neutrons. At the same time, on one energy Only bosons can be in the outer shell of the alpha- particle nucleus and are. Inside the Kernel We will arrange the remaining neutrons, whose task will be weakening of electrostatic fields of repulsion of protons."

Henadzi, 2019, p.5

The lack of proper sentence structure does not help clarify the model being mooted.

Masking true atomic number

Henadzi's hypothesis seems to be that when protons are on the surface of the nucleus, the true charge, and so atomic number, of an element can be measured. But sometimes with heavier elements some of the protons leave the surface for some reason and move inside the nucleus where their charge is somehow shielded and missed when nuclear charge is measured. This is linked to the approximation of assuming that the charge on an object measured from the outside can be treated as a point charge.

This is what Henadzi suggests:

"Our nuclear charge is located on the surface, since the number of protons and the number of neutrons in the nucleus are such that protons and neutrons should be in the outer layer of the nucleus, and only neutrons inside, that is, a shell forms on the surface of the nucleus. In addition, protons must be repelled, and also attracted by an electronic fur coat. The question is whether the kernel can be considered a point in the calculations and up to what times? And the question is whether and when the proton will be inside the nucleus….if a proton gets into the nucleus for some reason, then the corresponding electron will be on the very 'low' orbit. Quantum mechanics still does not notice such electrons. Or in other words, in elements 72-75 and 108-111, some protons begin to be placed inside the nucleus and the charge of the nucleus is screened, in calculations it cannot be taken as a point."

Henadzi, 2019, p.5

So, I think Henadzi is suggesting that if a proton gets inside the nucleus, its associated electron is pulled into a very close orbit such that what is measured as nuclear charge is the real charge on the nucleus (the number of protons) partially cancelled by low lying electrons orbiting so close to the nucleus that they are within what we might call 'the observed nucleus'.

This has some similarity to the usual idea of shielding that leads to the notion of core charge. For example, a potassium atom can be modelled simplistically for some purposes as a single electron around a core charge of plus one (+19-2-8-8) as, at least as a first approximation, we can treat all the charges within the outermost N (4th) electron shell (the 19 protons and 18 electrons) as if a single composite charge at the centre of the atom. 2
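The simplistic core-charge arithmetic described above can be sketched in a couple of lines (my own illustration of the teaching model, not anything from Henadzi's paper):

```python
# A minimal sketch of the 'core charge' teaching model: treat the nucleus
# plus all inner-shell electrons as one composite charge at the centre.
def core_charge(atomic_number, inner_shell_electrons):
    """Nuclear charge minus the screening electrons in the inner shells."""
    return atomic_number - sum(inner_shell_electrons)

# Potassium: Z = 19; the inner shells hold 2, 8 and 8 electrons,
# leaving a single outer electron around an effective core of +1.
print(core_charge(19, [2, 8, 8]))  # -> 1
```

The same arithmetic gives +1 for sodium (11 − 2 − 8) – which is part of why, on this simple model, the alkali metals behave so similarly.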

Dubious physics

Whilst I suspect that the poor quality of the English and the limited detail included in this appendix may well mean I am missing part of the argument here, I am not convinced. Besides the credibility issue (how could so many scientists have missed this for so long?) – which should never be seen as totally excluding unorthodox ideas (the same could have been asked about most revolutionary scientific breakthroughs) – my understanding is that there are already some quite sophisticated models of nuclear structure which have evolved alongside programmes of empirical research, and which are therefore better supported than Henadzi's somewhat speculative model.

I must confess to not understanding the relevance of the point charge issue as this assumption/simplification would seem to work with Henadzi's model – from well outside the sphere defined by the nucleus plus low lying electrons the observed charge would be the net charge as if located at a central point, so the apparent nuclear charge would indeed be less than the true nuclear charge.

But my main objection would be the way electrostatic forces are discussed and, in particular, two features of the language:

Naked protons

protons must be repelled, and also attracted by an electronic fur coat…

I was not sure what was meant by "protons must be repelled, and also attracted by an electronic fur coat". The repulsion between protons in the nucleus is balanced by the strong nuclear force – so what is this electronic 'fur coat'?

This did remind me of common alternative conceptions that school students (who have not yet learned about nuclear forces) may have, along the lines that a nucleus is held together because the repulsion between protons is balanced by their attraction to the ('orbiting') electrons. Two obvious problems with this notion are that

  • the electrons would be attracting protons out of the nucleus just as they are repelling each other (that is, these effects reinforce, not cancel), and
  • the protons are much closer to each other than to the electrons, and the magnitude of force between charges diminishes with distance.

Newton's third law and Coulomb's law would need to be dis-applied for an electronic effect to balance the protons' mutual repulsions. (On Henadzi's model the conjectured low lying electrons are presumably orbiting much closer to the nucleus than the 1s electrons in the K shell – but, even so, the proton-electron distance will be much greater than the separation of protons in the nucleus.)3
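The distance argument can be made concrete with a rough Coulomb's law comparison (the separations below are order-of-magnitude illustrations I have chosen, not figures from the paper):

```python
# Rough comparison of Coulomb force magnitudes:
# two protons ~2 fm apart inside a nucleus, versus an electron
# at roughly the Bohr radius (~53 pm) from a proton.

K = 8.99e9     # Coulomb constant, N·m²/C²
E = 1.602e-19  # elementary charge, C

def coulomb_force(r):
    """Magnitude of the force between two elementary charges at separation r (metres)."""
    return K * E**2 / r**2

proton_proton = coulomb_force(2e-15)     # repulsion inside the nucleus
electron_proton = coulomb_force(53e-12)  # attraction to a shell electron

# Because force falls off as 1/r², the in-nucleus repulsion is
# hundreds of millions of times larger than the shell-electron attraction.
print(proton_proton / electron_proton)
```

Whatever precise radius one assumes for Henadzi's conjectured 'low lying' electrons, the 1/r² dependence makes it implausible that their attraction could balance proton-proton repulsion at femtometre separations.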

But I may have misunderstood what Henadzi meant here by the attraction of the fur coat and its role in the model.

A new correspondence principle?

if a proton gets into the nucleus for some reason, then the corresponding electron will be on the very 'low' orbit

Much more difficult to explain away is the suggestion that "if a proton gets into the nucleus for some reason, then the corresponding electron will be on the very 'low' orbit". Why? This is not explained, so it seems assumed readers will simply understand and agree.

In particular, I do not know what is meant by 'the corresponding electron'. This seems to imply that each proton in the nucleus has a corresponding electron. But electrons are just electrons, and as far as a proton is concerned, one electron is just like any other. All of the electrons attract, and are attracted by, all of the protons.

Confusing a teaching scheme for a mechanism?

This may not always be obvious to school level students, especially when atomic structure is taught through some kind of 'Aufbau' scheme where we add one more proton and one more electron for each consecutive element's atomic structure. That is, the hydrogen atom comprises a proton and its 'corresponding' electron, and in moving on to helium we add another proton, with its 'corresponding' electron, and some neutrons. These correspond only in the sense that to keep the atom neutral we have to add one negative charge for each positive charge. They 'correspond' in a mental accounting scheme – but not in any physical sense.

That is a conceptual scheme meant to do pedagogic work in 'building up' knowledge – but atoms themselves are just systems of fundamental particles following natural laws and are not built up by the sequential addition of components selected from some atomic construction kit. We can be misled into mistaking a pedagogic model designed to help students understand atomic structure for a representation of an actual physical process. (The nuclei of heavy elements are created in the high-energy chaos inside a star – within the plasma where it is too hot for them to capture the electrons needed to form neutral atoms.)

A similar category error (confusing a teaching scheme for a mechanism) often occurs when teachers and textbook authors draw schemes of atoms combining to form molecules (e.g., a methane molecule formed from a carbon atom and four hydrogen atoms) – it is a conceptual scheme that meets students' psychological need to have knowledge built up in manageable learning quanta – but such schemes do not reflect viable chemical processes.4

It is this kind of thinking that leads to students assuming that during homolytic bond fission each atom gets its 'own' electron back. It is not so much that this is not necessarily so, as that the notion of one of the electrons in a bond belonging to one of the atoms is a fiction.

The conservation of force conception (an alternative conception)

When asked about ionisation of atoms it is common for students to suggest that when an electron is removed from an atom (or ion) the remaining electrons are attracted more strongly because the force for the removed electron gets redistributed. It is as if within an atom each proton is taking care of attracting one electron. In this way of thinking a nucleus of a certain charge gives rise to a certain amount of force which is shared among the electrons. Removing an electron means a greater share of the force for those remaining. This all seems intuitive enough to many learners despite being at odds with basic physical principles (Taber, 1998).

I am not deducing that Henadzi, apparently a retired research scientist, shares these basic misconceptions found among students. Perhaps that is the case, but I would not be so arrogant as to diagnose this just from the quoted text. But that is my best understanding of the argument in the paper. If that is not what is meant, then I think the text needs to be clearer.

The revolution will not be televised…

In conclusion, this paper, published in what is supposedly a research journal, is unsatisfactory because (a) it makes some very major claims that if correct are extremely significant for chemistry and perhaps also physics, but (b) the claims are tucked away in an appendix, are not fully explained and justified, and do not properly cite work referred to; and the text is sprinkled with typographic errors, and seems to reflect alternative conceptions of basic science.

I very much suspect that Henadzi's revolutionary ideas are just wrong and should rightly be ignored by the scientific community, despite being published in what claims to be a peer-reviewed (self-describing 'leading international') research journal.

However, perhaps Henadzi's ideas may have merit – the peer reviewers and editor of the journal presumably thought so – in which case they are likely to be ignored anyway because the claims are tucked away in an appendix, are not fully explained and justified, and do not properly cite work referred to; and the text is sprinkled with typographic errors, and seems to reflect alternative conceptions of basic science. In this case scientific progress will be delayed (as it was when Mendel's work was missed) because of the poor presentation of revolutionary ideas.

How does the editor of a peer-reviewed journal move to a decision to publish in 4 days?
Let down by poor journal standards

So, either way, I do not criticise Henadzi for having and sharing these ideas – healthy science encompasses all sorts of wild ideas (some of which turn out not to have been so wild as first assumed) which are critiqued, tested, and judged by the community. However, Henadzi has not been well supported by the peer review process at the journal. Even if peer reviewers did not spot some of the conceptual issues that occurred to me, they should surely have noticed the incompleteness of the argument or at the very least the failures of syntax. But perhaps in order to turn the reviews around so quickly they did not read the paper carefully. And perhaps that is how the editor, Professor Nour Shafik Emam El-Gendy of the Egyptian Petroleum Research Institute, was able to move to a decision to publish four days after submission.5

If there is something interesting behind this paper, it will likely be missed because of the poor presentation and the failure of peer review to support the author in sorting the problems that obscure the case for the proposal. And if the hypothesis is as flawed as it seems, then peer review should have prevented it being published until a more convincing case could be made. Either way, this is another example of a journal rushing to publish something without proper scrutiny and concern for scientific standards.


Works cited

Footnotes:

1 My understanding of the conduction band in a metal is that due to the extensive overlap of atomic orbitals, a great many molecular orbitals are formed, mostly being quite extensive in scope ('delocalised'), and occurring with a spread of energy levels that falls within an energy band. Although strictly the molecular orbitals are at a range of different levels, the gaps between these levels are so small that at normal temperatures the 'thermal energy' available is enough for electrons to readily move between the orbitals (whereas in discrete molecules, with a modest number of molecular orbitals available, transitions usually require absorption of higher energy (visible or, more often, ultraviolet) radiation). So, this spread of a vast number of closely spaced energy levels is in effect a continuous band.

Given that understanding I could not make sense of these schematic diagrams. They SEEM to show the number of conduction electrons in the 'conduction band' as being located on, and moving around, a single atom. But I may be completely misreading this – as they are meant to be (cross sections through?) a tube.

"we consider a strongly simplified one- dimensional case of the conduction band. Option one: a thin closed tube, completely filled with electrons except one. The diameter of the electron is approximately equal to the diameter of the tube. With such a filling of the zone, with the local movement of the electron, there is an opposite movement of the "place" of the non-filled tube, the electron, that is, the motion of a non-negative charge. Option two: in the tube of one electron – it is possible to move only one charge – a negatively charged electron"

Henadzi, 2019, p.2

2 The shell model is a simplistic model, and for many purposes we need to use more sophisticated accounts. For example, the electrons are not strictly in concentric shells, and electronic orbitals 'interpenetrate' – so an electron considered to be in the third shell of an atom will 'sometimes' be further from the nucleus than an electron considered to be in the fourth shell. That is, a potassium 4s electron cannot be assumed to be completely/always outside of a sphere in which all the other atomic electrons (and the nucleus) are contained, so the core cannot be considered as a point charge of +1 at the nucleus, even if this works as an approximation for some purposes. The effective nuclear charge from the perspective of the 4s electron will strictly be more than +1 as the number of shielding electrons is somewhat less than 18.

3 Whilst the model of electrons moving around the nucleus in planetary orbits may have had some heuristic value in the development of atomic theory, and may still be a useful teaching model at times (Taber, 2013), it seems it is unlikely to have the sophistication to support any further substantive developments to chemical theory.

4 It is very common for learners to think of chemistry in terms of atoms – e.g., to think of atoms as starting points for reactions; to assume that ions must derive from atoms. This way of thinking has been called the atomic ontology.

5 I find it hard to believe that any suitably qualified and conscientious referees would not raise very serious issues about this manuscript precluding publication in the form it appears in the journal. If the journal really does use peer review, as is claimed, one has to wonder who they think suitable to act as expert reviewers, and how they persuade them to write their reports so quickly.

Based on this, and other papers appearing in the journal, I suspect one of the following:

a) peer review does not actually happen, or

b) peer review is assigned to volunteers who are not experts in the field, and so are not qualified to be 'peers' in the sense intended when we talk of academic peer review, or

c) suitable reviewers are appointed, but instructed to do a very quick but light review ignoring most conceptual, logical, technical and presentation issues as long as the submission is vaguely on topic, or

di) appropriate peer reviewers are sought, but the editor does not expect authors to address reviewer concerns before approving publication, or possibly

dii) decisions to publish sub-standard work are made by administrators without reference to the peer reviews and the editor's input

Not a great experiment…

What was wrong with The Loneliness Experiment?

Keith S. Taber

The loneliness experiment, a.k.a. The BBC Loneliness Experiment, was a study publicised through the BBC (British public service broadcaster), and in particular through its radio programme All in the Mind ("which covers psychology, neuroscience & mental health" according to presenter Claudia Hammond's website.)1 It was launched back in February 2018 – pre-COVID.2

"All in the Mind: The Loneliness Experiment launches the world's largest ever survey of its kind on loneliness." https://www.bbc.co.uk/programmes/b09r6fvn

Claudia Hammond describes herself as an "award-winning broadcaster, author and psychology lecturer". In particular, "She is Visiting Professor of the Public Understanding of Psychology at the University of Sussex" where, according to the University of Sussex, "the post has been specially created for Claudia, who studied applied psychology at the University in the 1990s", so she is very well qualified for her presenting role. (I think she is very good at this role: she has a good voice for the radio and manages to balance the dual role of being expert enough to exude authority, whilst knowing how to ask necessarily naive questions of guests on behalf of non-specialist listeners.)

A serious research project

The study was a funded project based on a collaboration between academics from a number of universities, led by Prof Pamela Qualter, Professor of Education at the Manchester Institute of Education at the University of Manchester. Moreover, "55,000 people from around the world chose to take part in the BBC Loneliness Experiment, making it the world's largest ever study on loneliness" (https://claudiahammond.com/bbc-loneliness-experiment/)

Loneliness is a serious matter that affects many people, and is not to be made light of. So this was a serious study, on an important topic – yet every time I heard this mentioned on the radio (and it was publicised a good deal at the time) I felt myself mentally (and sometimes physically) cringe. Even without hearing precise details of the research design, I could tell this was simply not a good experiment.

This was not due to any great insight on my part, but was obvious from the way the work was being described. Readers may wish to see if they can spot for themselves what so irked me.

What is the problem with this research design?

This is how the BBC described the study at its launch:

The Loneliness Experiment, devised by Professor Pamela Qualter and colleagues, aims to look at causes and possible solutions to loneliness. And we want as many people as possible to fill in our survey, even if they've never felt lonely, because we want to know what stops people feeling lonely, so that more of us can feel connected.

https://www.bbc.co.uk/programmes/b09r6fvn

This is how Prof. Hammond described the research in retrospect:

55,000 people from around the world chose to take part in the BBC Loneliness Experiment, making it the world's largest ever study on loneliness. Researchers from the universities of Manchester, Brunel and Exeter, led by Professor Pamela Qualter and funded by the Wellcome Trust, developed a questionnaire asking people what they thought loneliness was, when they felt lonely and for how long.

https://claudiahammond.com/bbc-loneliness-experiment/

And this is how the work is described on the University of Manchester's pages:

The Loneliness Experiment was a study conducted by BBC Radio 4's All in the Mind….

The study asked respondents to give their opinions and record their experiences of loneliness and related topics, including friendship, relationships, and the use of technology – as well as recording lifestyle and background information. Respondents also engaged in a number of experiments.

The survey was developed by Professor Pamela Qualter, from The University of Manchester's Manchester Institute of Education (MIE), with colleagues from Brunel University London, and the University of Exeter. The work was funded by a grant from The Wellcome Trust.

https://www.seed.manchester.ac.uk/education/research/impact/bbc-loneliness-experiment/

When is an experiment not an experiment?

These descriptions make it obvious that The Loneliness Experiment was not an experiment. Experiment is a specific kind of research – a methodology where the researchers randomly assign participants to conditions, intervene in the experimental condition, and take measurements to see what effect the intervention has by comparing with measurements in a control condition. True experiments are extremely difficult to do in the social sciences (Taber, 2019), and often quasi-experiments or natural experiments are used which do not meet all the expectations for true experiments. BUT, to be an experiment there has to be something that can be measured as changing over time in relation to specified different conditions.

Experiment involves intervention (Image by Gerd Altmann from Pixabay)

Experiment is not the only methodology used in research – there are also case studies, action research and grounded theory, for example – and non-experimental research may be entirely appropriate in certain situations, and can be of very high quality. One alternative methodology is the survey, which collects data from a sample of a population at some particular time. Although surveys can be carried out in various ways (for example, through a series of observations), especially common in social science is the survey (a methodology) carried out using participant self-responses to a questionnaire (a research instrument).

It is clear from the descriptions given by the BBC, Professor Hammond and the University of Manchester that The Loneliness Experiment was not actually an experiment at all, but basically a survey (even if, tantalisingly, the Manchester website suggests that "Respondents also [sic] engaged in a number of experiments".)

The answer to the question 'when is an experiment not an experiment?' might simply be: when it is something other than an experiment

Completing a questionnaire (Image by Andreas Breitling from Pixabay)

What's in a name: does it really matter?

Okay, so I am being pedantic again.

But I do think this matters.

I think it is safe to assume that Prof. Hammond, Prof. Qualter and colleagues know the difference between an experiment and a survey. Presumably someone decided that labelling the research as the loneliness study or the loneliness survey would not be as accessible (or perhaps as impressive) to a general audience, and so decided to incorrectly use the label experiment as if experiment were synonymous with study/research.

As a former research methods lecturer, that clearly irks, as part of my job was to teach new researchers about key research concepts. But I would hope that people actually doing research, or learning to do research, are not going to be confused by this mislabelling.

But, as a former school science teacher, I know that there is widespread public misunderstanding of key nature of science terms such as theory and experiment. School age students do need to learn what is meant by the word experiment, and what counts as an experiment, and the BBC is being unhelpful in presenting research that is not experimental as an experiment – as this will simply reinforce common misconceptions of what the term experiment is actually used to denote in research.

So, in summary, I'll score The BBC Loneliness Experiment:

  • motivation – excellent;
  • reach – impressive;
  • presentation – unfortunate and misleading
Further reading:

Read about methodology

Read about experiments

Read about surveys

Work cited:

Taber, K. S. (2019). Experimental research into teaching innovations: responding to methodological and ethical challenges. Studies in Science Education, 55(1), 69-119. doi:10.1080/03057267.2019.1658058 [Download manuscript version]

Note:

1: Websites cited accessed on 28th August, 2021.

2: It would have been interesting to repeat when so many people around the world were in 'lock-down'. (A comparison between pre-COVID and pandemic conditions might have offered something of a natural experiment.)

Shock! A typical honey bee colony comprises only six chemicals!

Is it half a dozen of one, or six of the other?

Keith S. Taber

Bee-ware chemicals!
(Images by PollyDot and Clker-Free-Vector-Images from Pixabay)

A recent episode of the BBC Inside Science radio programme and podcast was entitled 'Bees and multiple pesticide exposure'. This discussed a very important issue that I have no wish to make light of. Researchers were looking at the stressors which might be harming honey bees, very important pollinators for many plants, and concluded that these likely act synergistically. That is, a colony suffering from, say, a drought and at the same time a mite infection will show more damage than one would expect from simply adding the typical harm of each as if they were independent effects. Rather, there are interactions.

This is hardly surprising, but is nonetheless a worrying finding.
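The distinction between additive and synergistic effects can be sketched with made-up numbers (the figures below are purely hypothetical illustrations, not data from the study):

```python
# Hypothetical colony mortality fractions for two stressors acting alone.
drought_only = 0.10
mites_only = 0.15

# 'Additive' prediction: for small effects, the harms simply sum.
additive_prediction = drought_only + mites_only  # 0.25

# A synergistic interaction means the observed combined harm
# exceeds what the additive prediction would give.
observed_combined = 0.40  # hypothetical observation

print(observed_combined > additive_prediction)  # → True
```

On this (invented) pattern, the two stressors together kill noticeably more bees than the sum of their separate effects, which is what 'synergistic' means in the researchers' conclusion.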

Bees and multiple pesticide exposure episode of BBC Inside Science

However, my 'science teacher' radar homed in on an aspect of the language used to explain the research. The researcher interviewed was Dr Harry Siviter of the University of Texas at Austin. As part of his presentation he suggested that…

"Exposure to multiple pesticides is the norm, not the exception. So, for example a study in North America showed that the average number of chemicals found in a honey bee colony is six, with a high of 42. So, we know that bees are exposed to multiple chemicals…"

Dr Harry Siviter

The phrase that stood out for me was "the average number of chemicals found in a honey bee colony is six" as clearly that did not make any sense scientifically. At least, not if the term 'chemical' was meant to refer to 'chemical substance'. I cannot claim to know just how many different substances would be found if one analysed honey bee colonies, but I am pretty confident the average would be orders of magnitude greater than six. An organism such as a bee (leaving aside for a moment the hive in which it lives) will be, chemically, 'made up' of a great many different proteins, amino acids, lipids, sugars, nucleic acids, and so forth.

"the average number of chemicals found in a honey bee colony is six"

From the context, I understood that Dr Siviter was not really talking about chemicals in general, but pesticides. So, I am (not for the first time) being a pedant in pointing out that technically he was wrong to suggest "the average number of chemicals found in a honey bee colony is six" as any suitably informed listener would have immediately, and unproblematically, understood what he meant by 'chemicals' in this context.

Yet, as a teacher, my instinct is to consider that programmes such as this, designed to inform the public about science, are not only heard by those who are already well-versed in the sciences. By its nature, BBC Inside Science is intended to engage with a broad audience, and has a role in educating the public about science. I also knew that this particular pedantic point linked to a genuine issue in science teaching.

A common alternative conception

The term chemical is not usually used in science discourse as such, but rather the term substance. Chemical substances are ubiquitous, although in most everyday contexts we do not come across many pure samples of single substances. Tap water is nearly all water, and table salt is usually about 99% sodium chloride, and sometimes metals such as copper or aluminium are used in more or less pure form. But these tend to be exceptions – most material entities we engage with are not pure substances ('chemicals'), rather being mixtures or even more complex (e.g., wood or carrot or hair).

In everyday life, the term chemical tends to be used more loosely – so, for example, household bleach may be considered 'a chemical'. More problematically 'chemicals' tends to be seen as hazardous, and often even poisonous. So, people object to there being 'chemicals' in their food – when of course their food comprises chemicals and we eat food to access those chemicals because we are also made up of a great many chemicals. Food with the chemicals removed is not food, or indeed, anything at all!

In everyday discourse 'chemical' is often associated with 'dangerous' (Image by Arek Socha from Pixabay)

So, science teachers not only have the problem that in everyday discourse the term 'chemical' does not map unproblematically onto 'substance' (as it is often used also for mixtures), but even more seriously that chemicals are assumed to be bad, harmful, undesirable – something to be avoided and excluded. By contrast, the scientific perspective is that whilst some chemicals are potentially very harmful, others are essential for life. Therefore, it is unhelpful when science communicators (whether journalists, or scientists themselves) use the term 'chemical' to refer only to potentially undesirable chemicals (which even then tend to be undesirable only in certain contexts), such as pesticides which are found in, and may harm, pollinators.

I decided to dig into the background of the item.

The news item

I found a news item in 'the Conversation' that discusses the work.

Dr Siviter's Article in the Conversation

It began

"A doctor will always ask if you are on any other medication before they write you a prescription. This is because pharmaceuticals can interact with each other and potentially disrupt the treatment, or even harm the patient. But when agrochemicals, such as pesticides, are licensed for use on farms, little attention is paid to how they interact with one another, and so their environmental impact is underestimated."

Siviter, 2021

This seemed a very good point, made with an analogy that seemed very telling.

(Read about science analogies)

This was important because:

"We analysed data gathered in scientific studies from the last two decades and found that when bees are exposed to a combination of pesticides, parasites and poor nutrition, the negative impact of each is exacerbated. We say that the cumulative effect of all these things is synergistic, meaning that the number of bees that are killed is more than we would predict if the negative effects were merely added together."

Siviter, 2021

This seems important work, and raises an issue we should be concerned about. The language used here was subtly different from in the radio programme:

"Many agrochemicals, such as neonicotinoids, are systemic, meaning they accumulate in the environment over several months, and in some cases years. It is perhaps not surprising then that honeybee colonies across the US have on average six different agrochemicals present in their wax, with one hive contaminated with 39 [sic, not 42]. It's not just honeybees which are at risk, though: wild bees such as bumblebees are also routinely exposed."

Siviter, 2021

So, here it was not 'chemicals' that were being counted but 'agrochemicals' (and the average figure of 6 now referred not to the colony as a whole, but only to the beeswax.)

The meta-analysis

'Agrochemicals' was also the term used in the research paper in the prestigious journal Nature where the research had been first reported,

"we conducted a meta-analysis of 356 interaction effect sizes from 90 studies in which bees were exposed to combinations of agrochemicals, nutritional stressors and/or parasites."

Siviter, et al., 2021

A meta-analysis is a type of secondary research study which collects results from a range of related published studies and seeks to identify overall patterns.

The original research

Moreover, the primary study being referred to as the source of the dubious statistic (i.e., that "the average number of chemicals found in a honey bee colony is six") referred not to 'chemicals' but to "pesticides and metabolites" (that is, substances which would be produced as the bee's metabolism broke the pesticides down):

"We have found 121 different pesticides and metabolites within 887 wax, pollen, bee and associated hive samples….

Almost all comb and foundation wax samples (98%) were contaminated with up to 204 and 94 ppm [parts per million], respectively, of fluvalinate and coumaphos, and lower amounts of amitraz degradates and chlorothalonil, with an average of 6 pesticide detections per sample and a high of 39."

Mullin, et al., 2010

Translation and representation

Scientific research is reported in research journals primarily for the benefit of other researchers in the field, and so is formatted and framed accordingly – and this is reflected in the language used in primary sources.

A model of the flow of scientific to public knowledge (after McInerney et al., 2004)

Fig. 10.2 from Taber, 2013

It is important that science which impacts on us all, and is often funded from public funds, is accessible to the public. Science journalism is an important conduit for the communication of science, and for this to be effective it has to be composed with non-experts in the public in mind.

(Read about science in public discourse and the media)

It is perfectly sensible and desirable for a scientist engaging with a public audience to moderate technical language to make the account of research more accessible for a non-specialist audience. This kind of simplification is also a core process in developing science curriculum and teaching.

(Read about representing science in the curriculum)

However, in the case of 'chemical' I would suggest scientists take care with using the term (and avoid it if possible), as science teachers commonly have to persuade students that chemicals are all around us, are not always bad for us, are part of us, and are essential. That pesticides and their breakdown products have been so widely detected in bee colonies is a matter of concern, as pesticides are substances that are used because of their detrimental effects on many insects and other organisms that might damage crops.

Whilst that is science deserving public attention, there are a good many more than 6 chemicals in any bee colony, and – indeed – we would want most of them to be there.

References:

Balding black holes – a shaggy dog story

Resurrecting an analogy from a dead metaphor?

Keith S. Taber

Now there's a look in your eyes, like black holes in the sky…(Image by Garik Barseghyan from Pixabay)

I was intrigued by an analogy in a tweet

Like a shaggy dog in springtime, some black holes have to shed their "hair."

The link led me to an item at a webpage at 'Science News' entitled 'Black holes born with magnetic fields quickly shed them' written by Emily Conover. This, in turn, referred to an article in Physical Review Letters.

Now Physical Review Letters is a high status, peer-reviewed, journal.

(Read about peer review)

As part of the primary scientific literature, it publishes articles written by specialist scientists in a technical language intended to be understood by other specialists. Dense scientific terminology is not used to deliberately exclude general readers (as sometimes suggested), but is necessary for scientists to make a convincing case for new knowledge claims that seem persuasive to other specialists. This requires being precise, using unambiguous technical language. "The thingamajig kind of, er, attaches to the erm, floppy bit, sort of" would not do the job.

(Read about research writing)

Science News, however, is news media – it publishes journalism (indeed, 'since 1921' the site reports – although that's the publication and not its website, of course). While science journalism is not essential to the internal processes of science (which rely on researchers engaging with each other's work through scholarly critique and dialogue), it is very important for the public's engagement with science, and for the accountability of researchers to the wider community.

Science journalists have a job similar to science teachers – to communicate abstract ideas in a way that makes sense to their audience. So, they need to interpret research and explain it in ways that non-specialists can understand.

The news article told me

"Like a shaggy dog in springtime, some black holes have to shed…
Unlike dogs with their varied fur coats, isolated black holes are mostly identical. They are characterized by only their mass, spin and electric charge. According to a rule known as the no-hair theorem, any other distinguishing characteristics, or "hair," are quickly cast off. That includes magnetic fields."

Conover, 2013

Here there is clearly the use of an analogy – as a black hole is not the kind of thing that has actual hair. This would seem to be an example of a journalist creating an analogy (just as a science teacher would) to help 'make the unfamiliar familiar' to her readers:

just as

dogs with lots of hair need to shed some ready for the warmer weather (a reference to a familiar everyday situation)

so, too, do

black holes (not so familiar to most people) need to lose their hair

(Read about making the unfamiliar familiar)

But hair?

Surely a better analogy would be along the lines that just as dogs with lots of hair need to shed some ready for the warmer weather, so too do black holes need to lose their magnetic fields.

An analogy is used to show a novel conceptual structure (here, relating to magnetic fields around black holes) maps onto a more familiar, or more readily appreciated, one (here, that a shaggy dog will shed some of its fur). A teaching analogy may not reflect a deep parallel between two systems, as its function may be just to introduce an abstract principle.

(Read about science analogies)

Why talk of black holes having 'hair'?

Conover did not invent the 'hair' reference for her Science News piece – rather she built her analogy on a term used by the scientists themselves. Indeed, the title of the cited research journal article was "Magnetic Hair and Reconnection in Black Hole Magnetospheres", and it was a study exploring the consequences of the "no-hair theorem" – as the authors explained in their abstract:

"The no-hair theorem of general relativity states that isolated black holes are characterized [completely described] by three parameters: mass, spin, and charge."

Bransgrove, Ripperda & Philippov, 2021

However, some black holes "are born with magnetic fields" or may "acquire magnetic flux later in life", in which case the fields will vary between black holes (giving an additional parameter for distinguishing them). The theory suggests that these black holes should somehow lose any such field: that is, "The fate of the magnetic flux (hair) on the event horizon should be in accordance with the no-hair theorem of general relativity" (Bransgrove, Ripperda & Philippov, 2021: 1). There would have to be a mechanism by which this occurs (as energy will be conserved, even when dealing with black holes).

So, the study was designed to explore whether such black holes would indeed lose their 'hair'.  Despite the use of this accessible comparison (magnetic flux as 'hair'), the text of the paper is pretty heavy going for someone not familiar with that area of science:

"stationary, asymptotically flat BH spacetimes…multipole component l of a magnetic field…self-regulated plasma…electron-positron discharges…nonzero stress-energy tensor…instability…plasmoids…reconnection layer…relativistic velocities…highly magnetized collisionless plasma…Lundquist number regime…Kerr-schild coordinates…dimensionless BH spin…ergosphere volume…spatial hypersurfaces…[…and so it continues]"

(Bransgrove, Ripperda & Philippov, 2021: 1).

"Come on Harry, you know full well that 'the characteristic minimum plasma density required to support the rotating magnetosphere is the Goldreich-Julian number density' [Bransgrove, Ripperda & Philippov, 2021: 2], so hand me that hyperspanner."
Image from Star Trek: Voyager (Paramount Pictures)

Spoiler alert

I do not think I will spoil anything by revealing that Bransgrove and colleagues conclude from their work that "the no-hair theorem holds": that there is a 'balding process' – the magnetic field decays ("all components of the stress-energy tensor decay exponentially in time"). If anyone reading this is wondering how they did this work, given that most laboratory stores do not keep black holes in stock to issue to researchers on request, it is worth noting the study was based on a computer simulation.

That may seem to be rather underwhelming as the researchers are just reporting what happens in a computer model, but a lot of cutting-edge science is done that way. Moreover, their simulations produced predictions of how the collapsing magnetic fields of real black holes might actually be detected in terms of the kinds of radiation that should be produced.

As the news item explained matters:

Magnetic reconnection in balding black holes could spew X-rays that astronomers could detect. So scientists may one day glimpse a black hole losing its hair.

Conover, 2013

So, we have hairy black holes that go through a balding process when they lose their hair – which can be tested in principle because they will be spewing radiation.

Balding is to hair, as…

Here we have an example of an analogy for a scientific concept. Analogies compare one phenomenon or concept to another which is considered to have some structural similarity (as in the figure above). When used in teaching and science communication such analogies offer one way to make the unfamiliar familiar, by showing how the unfamiliar system maps in some sense onto a more familiar one.

hair = magnetic field

balding = shedding the magnetic field

Black holes are expected to be, or at least to become, 'hairless' – so without having magnetic fields detectable from outside the event horizon (the 'surface' connecting points beyond which everything, even light, is unable to 'escape' the gravitational field and leave the black hole). If black holes are formed with, or acquire, such magnetic fields, then there is expected to be a 'balding' process. This study explored how this might work in certain types of (simulated) black holes – as magnetic field lines (that initially cross the event horizon) break apart and reconnect. (Note that in this description the magnetic field lines – imaginary lines invented by Michael Faraday as a mental tool to think about and visualise magnetic fields – are treated as though they are real objects!)
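Mostly for fun, the mapping above can even be written out explicitly as a tiny program. This is purely my own illustrative sketch (none of it comes from the cited sources): an analogy is represented as a source-to-target dictionary, and looking up an unmapped term reminds us that analogies are always partial.

```python
# A trivial sketch (my own invention) of the analogical mapping in this post,
# represented as an explicit everyday-term -> science-concept dictionary.

mapping = {
    "hair": "magnetic field (flux crossing the event horizon)",
    "shaggy dog": "black hole born with a magnetic field",
    "shedding in springtime": "decay of the field ('balding')",
}

def translate(term, mapping):
    """Read the analogy: what does this everyday term stand for?"""
    return mapping.get(term, "no mapping - a place the analogy breaks down")

print(translate("hair", mapping))        # the intended target concept
print(translate("fur colour", mapping))  # unmapped: analogies are partial
```

The point of the toy is simply that an analogy is a set of correspondences, not an identity: some features of the source (here, fur colour) have no counterpart in the target.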

Some such comparisons are deliberately intended to help scientists explain their ideas to the public – but scientists also use such tactics to communicate to each other (sometimes in frivolous or humorous ways) and in these cases such expressions may do useful work as short-hand expressions.

So, in this context hair denotes anything that can be detected and measured from outside a black hole apart from its mass, spin, and charge (see, it is much easier to say 'hair') – such as magnetic flux density if there is a magnetic field emerging from the black hole.

A dead metaphor?

In the research paper, Bransgrove, Ripperda and Philippov do not use the 'hair' comparison as an analogy to explain ideas about black holes. Rather they take the already well-established no-hair theorem as given background to their study ("The original no-hair conjecture states that…"), and simply explain their work in relation to it  ("The fate of the magnetic flux (hair) on the event horizon should be in accordance with the no-hair theorem of general relativity.")

Whereas an analogy uses an explicit comparison (this is like that because…), a comparison that is not explained is best seen as a metaphor. A metaphor has 'hidden meaning'. Unlike in an analogy, the meaning is only implied.

  • "The no-hair theorem of general relativity states that isolated black holes are characterized by three parameters: mass, spin, and charge";
  • "The original no-hair conjecture states that all stationary, asymptotically flat BH [black hole] spacetimes should be completely described by the mass, angular momentum, and electric charge"

(Read about science metaphors)

Bransgrove and colleagues do not need to explain why they use the term 'hair' in their research report as in their community it has become an accepted expression where researchers already know what it is intended to mean. We might consider it a dead metaphor, an expression which was originally used to imply meaning through some kind of comparison, but which through habitual use has taken on literal meaning.

Science has lots of these dead metaphors – terms like electrical charge and electron spin have, with repeated use over time, earned their meanings without now needing recourse to their origins as metaphors. This can cause confusion: for example, a learner may develop alternative conceptions about electron spin if they do not appreciate its origin as a metaphor, and assume an electron spins in the same sense as a spinning top or the Earth in space. There is then an associative learning impediment, as the learner assumes an electron is spinning on its axis because of their (perfectly reasonable) associations for the word 'spin'.

The journalist or 'science writer' (such as Emily Conover), however, is writing for a non-specialist readership, so does need to explain the 'hair' reference. I would therefore characterise the same use of the terms hair/no-hair and balding as comprising a science analogy in the news item, but a dead metaphor in the context of the research paper. The meaning of language, after all, is in the mind of the reader.

Work cited:

Excavating a cognitive dinosaur

Keith S. Taber

Filling-in; and digging-out a teaching analogy

Is the work of cognition like the work of a palaeontologist? (Image by Brenda Geisse from Pixabay)

I like the reflexive nature of this account – of someone reconstructing an analogy

about how cognition reconstructs coherent wholes from partial, fragmented data

from a partial, fragmented memory representation.

I was reading something about memory function that piqued my interest in an analogy:

"Neisser, using an analogy initially developed by Hebb (1949) to characterize [sic] perception, likened the rememberer to a paleontologist who attempts to reconstruct a dinosaur from fragmentary remains: 'out of a few stored bone chips, we remember a dinosaur'…"

Schacter, 1995, p.10

I was interested enough to look up the original use of this analogy (as I report below).

This links to three things that have separately interested me:

  • the nature of memory
  • the constructivist account of learning and cognition
  • using analogies in teaching and communicating science

The nature of our memories

I have long been interested in what memory is and how it works – and its role in academic learning (Taber, 2003). In part this perhaps derives from the limits of my own memory – I have been reasonably successful academically, but have never felt I had a good memory (and I seem to get more 'absent-minded' all the time). This interest grew as it became clearer to me that our memory experiences seem to be quite different – my late wife Philippa would automatically and effortlessly remember things in a way that seemed to me to be a kind of superpower. (She was once genuinely surprised that I could not picture what a family member had been wearing on arriving at a family event years before, whereas I thought I was doing pretty well even to remember I had been there.) Now that neurodiversity is widely recognised, it seems less surprising that we do not all experience memory in the same way.

A lot of people, however, understand memory in terms of a kind of folk-model (that is, a popular everyday account which does not match current scientific understanding) – along the lines that we put information into a memory store, where – unless it gets lost and we forget – we can later access it and so remember what it was that we committed to memory. Despite the ubiquity of that notion, research suggests that is not really how memory functions. We might say that this is a common alternative conception of how memory works.

(Read about 'Memory')

The constructive nature of memory

Schacter was referring back to a tradition that began a century ago when Bartlett carried out a series of studies on memory. Bartlett (1932/1995) would, for example, expose people to a story that was unfamiliar to his study participants, and then later ask them to retell as much of the story as they could remember. As might be expected, some people remembered more details than others.

What perhaps was less predictable at the time was the extent to which people included in their retelling details that had not been part of the original story at all. These people were not deliberately embellishing or knowingly guessing, but reporting, as best they could, what their memory suggested had been part of the original story.

People who habitually exhibit this 'confabulation' to a pathological degree (perhaps remembering totally fantastic things that clearly could not be true) are recognised as having some kind of problem, but it transpires this is just an extreme of something that is normal behaviour. Remembering is not the 'pulling something out of storage' that we may experience it as – what we remember is more like a best guess based on insufficient data (but a guess made preconsciously, so it appears in our conscious minds as definitive) than a pristine copy of an original experience. Memory is often more a matter of constructing an account from the materials at hand than simply reading it out from something stored.

Thus the analogy. Here is some wider context for the quote presented above:

"The publication of Neisser's (1967) important monograph on cognitive psychology rekindled interest in Bartlett's ideas about schemas and reconstructive memory. According to Neisser, remembering the past is not a simple matter of reawakening a dormant engram or memory trace; past events are constructed by using preexisting knowledge and [schemata] to piece together whatever fragmentary remains of the initial episode are available in memory. Neisser, using an analogy initially developed by Hebb (1949) to characterize [sic] perception, likened the rememberer to a paleontologist who attempts to reconstruct a dinosaur from fragmentary remains: 'out of a few stored bone chips, we remember a dinosaur' (1967, p.285). In this view, all memories are constructions because they include general knowledge that was not part of a specific event, but is necessary to reconstruct it. The fundamentally constructive nature of memory in turn makes it susceptible to various kinds of distortions and inaccuracies. Not surprisingly, Neisser embraced Bartlett's observations and ideas about the nature of memory."

Schacter, 1995, p.10

These ideas will not seem strange to those who have studied science education, a field which has been strongly influenced by a 'constructivist' perspective on learning. Drawing on learning science research, the constructivist perspective focuses on how each learner has to build up their own knowledge incrementally: it is not possible for a teacher to take some complex technical knowledge and simply transfer it (or copy it) to a learner's mind wholesale.

(Read more about constructivism in education)
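As a playful caricature of this reconstructive account (entirely my own sketch, not anything proposed by Bartlett, Neisser or Schacter – the schema, slots and values are all invented for illustration): only fragments of an episode are 'stored', and recall fills the gaps from prior schema knowledge, so the remembered version can contain plausible details that were never part of the original episode – confabulation in miniature.

```python
# Toy model of reconstructive recall (my own illustrative sketch).
# The stored trace keeps only fragments of an episode; recall fills
# the gaps from prior (schema) knowledge, so the output can contain
# plausible details that were never in the original.

SCHEMA = {  # prior knowledge: the 'typical' filler for each story slot
    "actor": "fisherman",
    "place": "river",
    "object": "canoe",
}

def encode(episode, keep):
    """Store only some slots of the episode - a partial, fragmentary trace."""
    return {slot: value for slot, value in episode.items() if slot in keep}

def recall(trace, schema):
    """Reconstruct a full episode: stored fragments win; gaps are
    filled-in from the schema (this is where confabulation creeps in)."""
    return {slot: trace.get(slot, schema[slot]) for slot in schema}

episode = {"actor": "hunters", "place": "river", "object": "canoe"}
trace = encode(episode, keep={"place"})   # only 'river' survives encoding
remembered = recall(trace, SCHEMA)

# 'remembered' now has actor 'fisherman' - a schema-based filling-in,
# not the original 'hunters' - yet it is reported as a complete episode.
```

The toy makes the key point vivid: the reconstructed episode is complete and coherent, and nothing in it marks which details were stored and which were filled-in.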

Excavating the analogy: what did Hebb actually say?

Hebb is remembered for his work on understanding the brain in terms of neural structures – neurons connected into assemblies through synapses.  His book 'The Organization of Behavior' has been described as "one of the most influential books in Psychology and Neuroscience" (Brown, 2020: 1).

Tachistoscope Source: Science Museum Group (This image is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Licence)

The analogy referred to by Schacter was used by Hebb in describing perception. He discussed studies using a tachistoscope, an instrument for displaying images for very brief periods. This could be used to show an image to a person with an exposure insufficient for them to take in all the details,

"…the pattern is perceived, first, as a familiar one, and then with something missing or something added. The something, also, is familiar; so the total perception is a mélange of the habitual.

The subject's reports [make it] clear that the subject is not only responding to the diagram as a whole; he perceives its parts as separate entities, even though presentation is so brief. Errors are prominent, and such as to show that all the subject really perceives–and then only with rough accuracy–is the slope of a few lines and their direction and distance from one another"

Hebb, 1949: pp.46-47

That is, the cognitive system uses the 'clues' available from the incomplete visual data to build  (in effect) a hypothesis of what was seen, based on correspondences between the data actually available and familiar images that match that limited data. What the person becomes consciously aware of 'seeing' is not actually a direct report from the visual field of the presented image, but a constructed image that is a kind of conjecture of what might have been seen – 'filling-in' missing data with what seems most likely based on past visual experiences.

Cognitive scientist Annette Karmiloff-Smith developed the concept of 'representational redescription' as a way of describing how initially tacit knowledge could eventually become explicit. She suggested that "intra-domain and inter-domain representational relations are the hallmark of a flexible and creative cognitive system" (Karmiloff-Smith, 1996: 192). The gist was that the brain is able to re-represent its own internal representations in new forms with different affordances.

A loose analogy might be someone who takes a screenshot while displaying an image from the JPEG photo collection folder on their computer, opens the screenshot as a PDF file, and then adds some textual annotations before exporting the file as a new PDF. The representation of the original image is unchanged in the system, but a new representation has been made of it in a different form, which has then been modified and 'stored' (represented) in a different folder.
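That file-format analogy can be pushed into a few lines of code (again a loose sketch of my own, not Karmiloff-Smith's formalism – the formats and field names are invented for illustration): the original representation is left untouched, while a redescribed copy in a new format affords an operation (annotation) that the original format did not.

```python
# A loose, invented sketch of 'representational redescription':
# the original representation stays unchanged, while a copy in a
# new format gains an affordance (annotation) the original lacked.

original = {"format": "jpeg", "pixels": [0, 1, 1, 0]}  # the stored 'photo'

def redescribe(rep, new_format):
    """Re-represent the same content in a new format with extra affordances."""
    return {"format": new_format, "pixels": list(rep["pixels"]), "notes": []}

def annotate(rep, note):
    """Only the redescribed format supports annotation."""
    rep["notes"].append(note)
    return rep

copy = annotate(redescribe(original, "pdf"), "looks like a rib fragment")

# The original is untouched: same pixels, and no 'notes' field at all.
```

The design point of the sketch is that `redescribe` copies the content rather than mutating it, so the system ends up holding two representations of the same thing at different 'levels', each with different affordances.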

Hebb was describing how a representation of visual data at one level in the cognitive system is re-represented elsewhere in the system (representational redescription?) at a level where it can be manipulated by 'filling-in'.

Hebb then goes on to use the analogy:

"A drawing or a report of what is seen tachistoscopically is not unlike a paleontologist's reconstruction of early man from a tooth and a rib. There is a clear effect of earlier experience, filling in gaps in the actual perception, so that the end result is either something familiar or a combination of familiar things–a reconstruction on the basis of experience."

Hebb, 1949: p.47

Teaching analogies

Hebb was writing a book that can be considered as a textbook, so this can be seen as a teaching analogy, although such analogies are also used in communicating science in other contexts.

(Read about Science analogies)

Teaching is about making the unfamiliar familiar, and one way we do that is by saying that 'this unfamiliar thing you need to learn about is a bit like this other thing that you already know about'. Of course, when teaching in this way we need to say in what way there is an analogy, and it may also be important to say in what ways the two things are not alike if we do not want people to map across irrelevant elements (i.e., to develop 'associative' learning impediments).

(Read about Making the unfamiliar familiar)

Hebb is saying that visual perception is often not simply the detection of a coherent and integral image, but is rather a construction produced by building upon the available data to construct a coherent and integral image. In extremis, a good deal may be made of very little scraps of input – akin to a scientist reconstructing a model of a full humanoid body based on a couple of bits of bone or tooth.

Hebb's analogy

There are examples where palaeontologists or anthropologists have indeed suggested such complete forms based on a few fossil fragments as data. This is only possible because of their past experiences of meeting many complete forms, and the parts of which they are made. (And of course, sometimes other scientists completely disagree about their reconstructions!)

An exscientific analogy?

Often in teaching science we use teaching analogies that compare an unfamiliar scientific concept to some familiar everyday phenomenon – perhaps a reaction profile is a bit like a roller-coaster track. Perhaps we could call these adscientific analogies as the meaning is transferred to the scientific concept from the everyday.

Sometimes, however, familiar scientific phenomena or ideas are used as the source – as here. Perhaps these could be called exscientific analogies as the meaning is taken from the science concept and applied elsewhere.

Developing the palaeontology analogy

So, Hebb had originally used the palaeontology analogy in the context of discussing perception. When I looked into how Neisser had used the comparison in his "important monograph on cognitive psychology" I found he had developed the analogy, returning to it at several points in his book.

Do we analyse what we attend to?

Neisser's first reference was also in relation to perception, rather than memory. Neisser argued that before we can attend to part of a scene there must already have been the operation of "preattentive mechanisms, which form segregated objects"  from which we can select what to attend to. These processes might be referred to as analyses:

"…the detailed properties and features that we ordinarily see in an attended figure…arise…only because part of the input was selected for attention and certain operations then performed on it. Neither the object of analysis nor the nature of the analysis is inevitable, and both may vary in different observers and at different times."

Neisser, 1967, p.94

But Neisser was not sure this really was 'analysis', a term which he understood as drawing on another exscientific analogy (to use the label I suggested above):

"The very word 'analysis' may not be apt. It suggests an analogy with chemistry: a chemist 'analyses' unknown substances to find out what they 'really' are."

Neisser, 1967, p.94

Rather than refer to analysis, we could draw on  Hebb's palaeontological analogy:

"More appropriate…is Hebb's (1949, p.47) comparison of the perceiver with a paleontologist, who carefully extracts a few fragments of what might be bones from a mass of irrelevant rubble and 'reconstructs' the dinosaur that will eventually stand in the Museum of Natural History. In this sense it is important to think of focal attention as a constructive, synthetic activity rather than as purely analytic. One does not simply examine the input and make a decision; one builds an appropriate visual object."

Neisser, 1967, p.94

[If it helps to have some examples to reflect upon this account of perception, you may find it useful to look at some images that may require careful interpretation.]

Neisser draws upon the analogy repeatedly in developing his account of perception:

"Such emotion-flooded experiences [as 'physiognomic' perception: 'Everyone has perceived such traits as suppressed anger in a face, gaiety in a movement, or peaceful harmony in a picture'] can be thought of as the result of particular kinds of construction. The same fragments of bone that lead one paleontologist to make an accurate model of an unspectacular creature might lead another, perhaps more anxious or more dramatic, to 'reconstruct' a nightmarish monster." (pp.96-97)

"To 'direct attention' to a figure is to attempt a more extensive synthesis of it. Of course, synthesis presupposes some prior analysis, as the paleontologist must have some fragments of bone before he can build his dinosaur…" (p.103)

"Recognition, whether of spelling patterns or words as wholes, must be mediated by relevant features, as meaningless in themselves as the bone chips of the paleontologist." (p.114)

"The process of figural synthesis does not depend only on the features extracted from the input, just as the dinosaur constructed by a paleontologist is not based only on the bone chips he has found. Equally important is the kind of perceptual object the perceiver is prepared to construct. The importance of set and context on the perception of words has been demonstrated in a great many experiments." (pp.115-116)

Neisser, 1967

And as with perception, so memory…

When Neisser discusses memory he uses a kind of double analogy – suggesting that memory is a bit like perception, which (as already established) is a bit like the work of the palaeontologist:

"Perception is constructive, but the input information often plays the largest single role in determining the constructive process. A very similar role, it seems to me, is played by the aggregate of information stored in long-term memory.

This is not to say that the stimuli themselves are copied and stored; far from it. The analogy being offered asserts only that the role which stored information plays in recall is like the role which stimulus information plays in perception….The model of the paleontologist, which was applied to perception and focal attention in Chapter 4, applies also to memory: out of a few stored bone chips, we remember a dinosaur….one does not recall objects or responses simply because traces of them exist in the mind, but after an elaborate process of reconstruction, (which usually makes use of relevant stored information).

What is the information – the bone chips – on which reconstruction is based? The only plausible possibility is that it consists of traces of prior processes of construction. There are no stored copies of finished mental events, like images or sentences, but only traces of earlier constructive activity."

Neisser, 1967, p.285

Fleshing-out the metaphor

Neisser then pushes the analogy one step further, by pointing out that the 'fleshed-out' model of a dinosaur in the museum may be constructed in part based on the fossil fragments of bones, but those fragments themselves do not form part of the construction (the model). The bones are used as referents in building the skeletal framework (literally, the skeleton) around which the model will be built, but the model is made from other materials (wood, steel, fibreglass, whatever) and the fossil fragments themselves will be displayed separately or perhaps filed away in a drawer in the museum archives. (As in the representational redescription model – the original representation is redescribed at another level of the system.)

"The present proposal is, therefore, that we store traces of earlier cognitive acts, not of the products of those acts. The traces are not simply 'revised' or 'reactivated' in recall; instead, the stored fragments are used as information to support a new construction. It is as if the bone fragments used by the paleontologist did not appear in the model he builds at all – as indeed they need not, if it to represent a fully fleshed-out skin-covered dinosaur. The bones can be thought of, somewhat loosely, as remnants of the structure which created and supported the original dinosaur, and thus as sources of information about how to reconstruct it."

Neisser, 1967, pp.285-286

Neisser's development of Hebb's analogy

The head palaeontologist?

A final reference to the analogy comes when Neisser addresses the question of the cognitive executive: the notion that somewhere in the cognitive system there is something akin to an overseer who directs operations:

"Who does the turning, the trying, and the erring" Is there a little man in the head, a homonculus, who acts the part of the paleontologist vis-à-vis the dinosaur? p.293

Neisser, 1967, p.293

The homunculus can be pictured as a small person sitting in the brain's control room, for example, viewing the images being projected from the visual input.

This is usually considered a flawed model (potentially leading to an infinite regress), a failure to take a systemic view of cognition. It is the system which functions and leads to our conscious experience of perceiving, attending, making decisions, planning, remembering, and so forth. Whilst there are specialist components (modules), including for the coordination of the system, there is not a discrete controller overlaying the system as a whole who is doing the seeing, hearing, thinking, etcetera, based on outputs from processing by the system.

Here the homunculus would be like an authority that the palaeontologist turned to in order to decide how to build her model: raising the question of how that expert knows, and whom they, in turn, would ask.

Why change Hebb's original analogy?

Although Neisser refers to the analogy as being that used by Hebb, he modifies it. A tooth and rib become fragments of bone, and the early man becomes a dinosaur. Whether the shift from the reconstruction of an early hominid to the reconstruction of a terrible lizard was deliberate (for greater effect? because Neisser thought it would be more familiar to his readers?) or not, I do not know. The phrasing suggests that Neisser thought he was applying Hebb's original comparison – so I suspect this is simply how he recalled the analogy.

Perhaps Neisser had regularly used the analogy in his teaching, in which case it may have become so familiar to him that he did not feel the need to check the original version. That is, perhaps he was correctly remembering how he had previously misremembered the original analogy. That is not fanciful, as memory researchers suggest this is something that is very common. Each time we access a memory the wider representational context becomes modified by engagement with it.

That is, if what is represented (in 'long-term memory'*) is indeed "traces of prior processes of construction…traces of earlier constructive activity" then each time a 'memory' is experienced, by being constructed based on what is represented ('in memory'*), new traces of that process of constructing the memory are left in the system.

It is possible over the years to be very convinced about the accuracy of a distorted memory that has been regularly reinforced. (The extent to which this may in part be the origin of many wars, feuds, and divorces might be a useful focus for research?)

So perhaps Neisser had represented in his long-term memory the analogy of a palaeontologist with a few fossil fragments, and when he sought to access the analogy, perhaps in a classroom presentation, the other elements were filled-in: the 'tooth and rib' became 'a few fragments of what might be bones' and the 'early man' became 'a dinosaur' – details that made sense of the analogy in terms familiar to Neisser.

The account of cognition that Hebb, Neisser and Schacter were presenting would suggest that, if this had been the case, then for Neisser there would be no apparent distinction between the parts of Hebb's analogy that he was remembering accurately and the parts his preconscious mind had filled-in to construct a coherent analogy. I like the reflexive nature of this account – of someone reconstructing an analogy about how cognition reconstructs coherent wholes from partial, fragmented data – from a partial, fragmented memory representation.

Sources cited:
  • Bartlett, F. C. (1932/1995). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.
  • Brown, R. E. (2020). Donald O. Hebb and the Organization of Behavior: 17 years in the writing. Molecular Brain, 13(1), 55. doi:10.1186/s13041-020-00567-8
  • Hebb, D. O. (1949). The Organization of Behavior: A neuropsychological theory. New York: John Wiley & Sons, Inc.
  • Karmiloff-Smith, A. (1996). Beyond Modularity: A developmental perspective on cognitive science. Cambridge, Massachusetts: MIT Press.
  • Neisser, U. (1967). Cognitive Psychology. New York: Appleton-Century-Crofts.
  • Schacter, D. L. (1995). Memory distortion: history and current status. In D. L. Schacter (Ed.), Memory Distortion. How minds, brains, and societies reconstruct the past (pp. 1-43). Cambridge, Massachusetts: Harvard University Press.
  • Taber, K. S. (2003) Lost without trace or not brought to mind? – a case study of remembering and forgetting of college science, Chemistry Education: Research and Practice, 4 (3), pp.249-277. [Free access]

* terms like 'in memory' and 'in long-term memory' may bring to mind the folk-notion of memory as somewhere in the brain where things are stored away, whereas it is probably better to think of the brain as a somewhat plastic processing system which is constantly being modified by its own functioning. The memory we experience is simply the outcome of active processing** in part of the system that has previously been modified by earlier mental activity (** active processing which is in turn itself further modifying the system).