When is V=IR the formula for Ohm's law?

"Resistance is current over voltage, I think"

Image by Gerd Altmann from Pixabay 

Adrian was a participant in the Understanding Science Project. When I interviewed him in Y12, when he was studying Advanced level physics, he told me that "We have looked at resistance and conductance and the formulas that go with them". So I asked him what resistance was:

So what exactly is resistance?

Resistance is, erm (pause, c.3s) Resistance is current over, voltage, I think. (Pause, c.3s) Yeah. No.

Not sure?

I can’t remember formulas.

So Adrian's first impulse was to define resistance using a formula, although he did not feel he was able to remember formulae. He correctly knew that the formula involved resistance, current and voltage, but could not recall the relationship. Of course if he understood qualitatively how these influenced each other, then he should have been able to work out which way the formula had to go, as the formula represents the relationship between resistance, voltage and current.

So, I then proceeded to ask Adrian how he would explain resistance to a younger person, and he suggested that resistance is how much something is being slowed down or is stopped going round. After we had talked about that for a while, I brought the discussion back to the formula and the relationship between R, V and I:

And what about this resistance in electricity then, do you measure that in some kind of unit?

Yes, in, erm, (pause, c.2s) In ohms.

So what is an ohm?

Erm, an ohm is, the unit that resistance is measured in.

Fair enough.

It comes from ohm's law which is the, erm, formula that gives you resistance.

V=IR is the formula that gives you resistance, but it is a common misconception that Ohm's law is V=IR.

Actually, Ohm's law suggests that the current through a metallic conductor (kept at constant conditions, e.g., temperature) is directly proportional to the potential difference across its ends.

So, in such a case (a metal that is not changing temperature, etc.)

I ∝ V

which is equivalent to

V ∝ I

which is equivalent to

V = kI

where k is a constant of proportionality. If we use the symbol R for the constant in this case then

V= RI

which is equivalent to

V = IR

 So, it may seem I have just contradicted myself, as I denied that V=IR was a representation of Ohm's law, yet seem to have derived V=IR from the law.

There is no contradiction as long as we keep in mind what the symbols are representing in the equation. I represented the current flowing through a metallic conductor being held at constant conditions (temperature, tension etc.), and V represented the potential difference across the ends of that metallic conductor. If we restrict V and I to this meaning then the formula could be seen as a way of representing Ohm's law.

Over-generalising

However, that is not how we usually understand these symbols in electrical work: V generally represents the potential difference across some resistive component or other, and I represents the current flowing through that component: a resistor, a graphite rod, a salt bridge, a diode, a tungsten filament in a lamp…

In this general case

V = IR

or

R = V/I

is the defining equation for resistance. If R is defined as V/I then V = IR will always hold, not because there is a physical law that suggests this, but simply because that is the meaning we have given to R.

This is a bit like bachelors being unmarried men (an example that seems to be a favourite of some philosophers): bachelors are not unmarried men because there is some rule or law decreeing that bachelors are not able to get married, but simply because of our definition. A bachelor who gets married and so becomes a married man ceases to be a bachelor at the moment they become a married man – in a similar way to how a butterfly is no longer a caterpillar. Not because of some law of nature, but by our conventions regarding how words are used. If V and I are going to be used as general symbols (and not restricted to our carefully controlled metallic conductor) then V = IR simply because R is defined as V/I and the formula, used in the general case, should not be confused with Ohm's law.

In Ohm's law, V=IR where R will be constant.

In general, V=IR and R will vary, as Ohm's law does not generally apply.
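To illustrate the distinction numerically, here is a minimal sketch (in Python, with made-up voltage and current readings rather than data from any real experiment) that computes R = V/I for two hypothetical components: for the 'ohmic' one R comes out constant, while for a filament lamp R increases as the filament heats, even though V = IR still holds by definition.

# Minimal sketch (illustrative, made-up readings – not experimental data).
# R = V/I always holds because that is how R is defined; Ohm's law only
# applies where R turns out to be constant (the 'ohmic' case).

ohmic_readings = [(0.5, 0.10), (1.0, 0.20), (1.5, 0.30), (2.0, 0.40)]      # (V in volts, I in amps)
filament_readings = [(0.5, 0.20), (1.0, 0.30), (1.5, 0.36), (2.0, 0.40)]   # (V, I) for a hypothetical lamp

def resistances(readings):
    """Return R = V/I (in ohms) for each (voltage, current) pair."""
    return [round(v / i, 2) for v, i in readings]

print("Ohmic conductor R values:", resistances(ohmic_readings))      # roughly constant
print("Filament lamp R values:  ", resistances(filament_readings))   # increases with current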

It would perhaps be better for helping students see this had there been a convention that the p.d. across, and the current through, a piece of metal being kept in constant conditions were represented by distinct symbols – say V′ and I′ – so Ohm's law could be represented as, say

V′ = k I′

but, as this is not the usual convention, students need to keep in mind when they are dealing with the special case to which Ohm's law refers.

A flawed teaching model?

The interesting question is whether:

  • teachers are being very careful to make this distinction, but students still get confused;
  • teachers are using language carefully, but not making the discrimination explicit for students, so they miss the distinction;
  • some teachers are actually teaching that V=IR is Ohm's law.

If the latter option is the case, then it would be good to know if the teachers teaching this:

  • have the alternative conception themselves;
  • appreciate the distinction, but think it does not matter;
  • consider that identifying the general formula V=IR with Ohm's law is a suitable simplification, a kind of teaching model, suitable for students who are not ready to have the distinction explained.

It would be useful to know the answers to these questions, not to blame teachers, but because we need to diagnose a problem to suggest the best cure.



'In my head, son' – mind reading commentators

Keith S. Taber

*

"Tim Howard is a little frustrated with himself that it wasn't a tidier save, because he feels he ought to have done better with the first attempt."

Thus claimed the commentator on the television highlights programme Match of the Day (BBC) commenting on the association football (soccer) match Everton vs. Spurs on May 25th 2015.  

It was not a claim that was obviously contradicted by the footage being shown, but inevitably my reaction (as someone who teaches research methods to students) was 'how do you know?' The goalkeeper was busy playing a game of football, some distance from the commentator, and there was no obvious conversation between them. The answer of course is that the commentator was a mind reader who knew what someone else was thinking and feeling.

This is not so strange, as we are all mind readers – or at least we commonly make statements about the thoughts, attitudes, feelings, beliefs etc. of others, based on their past or present behaviour, subtle body language, facial expressions and/or the context of their current predicament.

Of course, that is not strictly mind reading, as minds are not visible. But part of normal human development is acquiring a 'theory of mind' that allows us to draw inferences about the thoughts and feelings of others – the internal subjective experiences of others – drawing upon our own feelings and thoughts as a model. In everyday life, this ability is essential to normal social functioning – even if we do not always get it right. Yet we become so used to relying upon these skills that public commentators (well, a sports commentator here) feel no discomfort in not only interpreting the play, but the feelings and thoughts of the players they are observing.

A large part of the kind of educational research that I tend to be involved in is very similar to this – it involves using available evidence to make inferences about what others think and feel. [There are many examples in the blog posts on this site.]  Sometimes we have very strong evidence (what people tell us about their thoughts and feelings) but even then this is indirect evidence – we can never actually see another mind at work (1). We do not "see the cogs moving", even if we may like to talk as though we do.

In everyday life we forgive the kinds of under-determined claims made by sports commentators, and may not even notice when they draw such inferences, let alone question what support their claims have. Sadly this seems to be a human quality that we often take for granted a little too much. A great deal of the research literature in science education is written as though research offers definite results about students' conceptions (and misconceptions) and whether or not they know something or understand it – as though such matters are simple, binary, and readily detected (1). Yet research actually suggests this is far from the case (2).

Research that explores students' thinking and learning is actually very challenging, and is in effect an enterprise to build and test models rather than uncover simple truths. I suspect quite a bit of the disagreement about the nature of student thinking in the science education research literature is down to researchers who forget that even if people are mind readers in everyday life, they must become careful and self-critical model builders when they are seeking to make claims presented as research (1).

References:

(1) Taber, K. S. (2013). Modelling Learners and Learning in Science Education: Developing representations of concepts, conceptual structure and conceptual change to inform teaching and research. Dordrecht: Springer.

(2) Taber, K. S. (2014). Student Thinking and Learning in Science: Perspectives on the nature and development of learners' ideas. New York: Routledge.

* Previously published at http://people.ds.cam.ac.uk/kst24/science-education-research: 25th May 2015

Many generations later it's just naturally always having fur

Keith S. Taber

Image by MirelaSchenk from Pixabay 

Bert was a participant in the Understanding Science Project. In Y11 he reported that he had been studying the environment in biology, and had done some work on adaptation. He gave a number of examples of how animals were adapted to their environment. One of these examples was the polar bear.

our homework we did about adapting, like how polar bears adapt to their environments, and camels….

And so a polar bear has adapted to the environment?

Yeah.

So how has a polar bear adapted to the environment?

Erm, things like it has white fur for camouflage so the prey don't see it coming up. Large feet to spread out its weight when it's going over like ice. Yeah, thick fur to keep the body heat insulated.

Bert gave a number of other examples, including dogs that were bred with particular characteristics, although he explained this in terms of inheritance of acquired characteristics: suggesting that dogs that have been taught over and over to retrieve have puppies that automatically have already got that sense. Bert realised that his example was due to the work of human breeders, and took the polar bear as an example of a creature that had adapted to its environment.

Yeah, so how does adaption take place then? …

I don't know. It may have something to do with negative feedback. Like you have like, you always get like, you always get feedback, like in the body to release less insulin and stuff like that. So in time … organisms, learn to adapt to that. Because if it happens a lot that makes a feedback then it comes, yeah then they just learn to do that.

Okay. Give me an example of that. I'm trying to link it up in my head.

Okay, like the polar bear, like I don't know. It may have started off just like every other bear, but because it was put in that environment, like all the time the body was telling it to grow more fur and things like that, because it was so cold. So after a while it just adapted to, you know, always having fur instead of, you know, like dogs shed hair in the summer and stuff. But like if it was always then they'd just learn to keep shedding that hair.

So if it was an ordinary bear, not a polar bear, and you stuck it in the Arctic, it would get cold?

Yeah.

But you say the body tells it to grow more fur?

Erm, yeah.

How does that work?

I'm not sure, it just … I don't know. Like, erm, like the body senses that it's cold, it goes to the brain, and the brain thinks, well how is it going to go against that, you know, make the body warmer. And so it kind of, you know, it gives these things.

So Bert seemed to have a notion of (if not the term) homoeostasis, that allowed control of such things as levels of insulin. He recognised this was based on negative feedback – when some problematic condition was recognised (e.g., being too cold) this would trigger a response (e.g., more insulation) to bring about a countering change.

However, in Bert's model, the mechanism was not initially automatic. Bert thought that this process, which was at first based on deliberation, became automatic over many generations…

I see. So the bear has already got a mechanism which would enable it to have more fur, but it's turned on to some extent by being put into the cold?

Yeah.

And then over a period of time, what happens then?

Erm I guess it just it doesn't really need that impulse of being cold, it's just naturally there now, to tell it to do it more.

So how does that happen? Is this the same bear or is this many generations later?

I would probably think many generations later.

Right, so if it was just one particular bear, it wouldn't eventually just produce more hair automatically itself, but its offspring eventually might?

Yeah.

So how does that happen then?

I don't know. Probably from DNA or something. We haven't gone over that yet.

So for Bert, the individual bear could change its characteristics through activating a potential mechanism (in this case for keeping year-round thick fur) through a process of sensing and responding to environmental conditions. Over many generations this changed characteristic could become an automatic response by eventually being coded into the genetic material. As with his explanation of selective breeding, Bert invoked a model of evolution through the inheritance of acquired characteristics, rather than the operation of natural selection on the natural range of characteristics within a breeding population.

Like many students learning about evolution, Bert found Darwin's model of variation offering the basis for natural selection less intuitively appealing than the more Lamarckian idea that individuals managed to change their characteristics during their lives and pass on the changes to their offspring. This is an example of where student thinking reflects a historical scientific theory that has been discarded, rather than the currently canonical scientific ideas taught in schools.

The brain thinks: grow more fur

The body senses that it's cold, and the brain thinks how is it going to make the body warmer?

Keith S. Taber

Image by Couleur from Pixabay 

Bert was a participant in the Understanding Science Project. In Y11 he reported that he had been studying the environment in biology, and had done some work on adaptation. He gave a number of examples of how animals were adapted to their environment. One of these examples was the polar bear.

our homework we did about adapting, like how polar bears adapt to their environments, and camels….

And so a polar bear has adapted to the environment?

Yeah.

So how has a polar bear adapted to the environment?

Erm, things like it has white fur for camouflage so the prey don't see it coming up. Large feet to spread out its weight when it's going over like ice. Yeah, thick fur to keep the body heat insulated.

Bert gave a number of other examples, including dogs that were bred with particular characteristics, although he explained this in terms of inheritance of acquired characteristics: suggesting that dogs that have been taught over and over to retrieve have puppies that automatically have already got that sense. Bert realised that this example was due to the work of human breeders, and took the polar bear as an example of a creature that had adapted to its environment.

Yeah, so how does adaption take place then? You've got a number of examples there, bears and dogs and camels and people. So how does adaption take place?

I don't know. It may have something to do with negative feedback.

That's impressive.

Like you have like, you always get like, you always get feedback, like in the body to release less insulin and stuff like that. So in time people like or whatever, organisms, learn to adapt to that. Because if it happens a lot that makes a feedback then it comes, yeah then they just learn to do that.

Okay. Give me an example of that. I'm trying to link it up in my head.

Okay, like the polar bear, like I don't know. It may have started off just like every other bear, but because it was put in that environment, like all the time the body was telling it to grow more fur and things like that, because it was so cold. So after a while it just adapted to, you know, always having fur instead of, you know, like dogs shed hair in the summer and stuff. But like if it was always then they'd just learn to keep shedding that hair.

So if it was an ordinary bear, not a polar bear, and you stuck it in the Arctic, it would get cold?

Yeah.

But you say the body tells it to grow more fur?

Erm, yeah.

How does that work?

I'm not sure, it just … I don't know. Like, erm, like the body senses that it's cold, it goes to the brain, and the brain thinks, well how is it going to go against that, you know, make the body warmer. And so it kind of, you know, it gives these things.

Is that an example of feedback?

Yes.

So Bert seemed to have a notion of (if not the term) homoeostasis, that allowed control of such things as levels of insulin. He recognised this was based on negative feedback – when some problematic condition was recognised (e.g., being too cold) this would trigger a response (e.g., more insulation) to bring about a countering change.

However, in Bert's model, the mechanism was not automatic. Rather, it depended upon conscious deliberation: "the brain thinks, well how is it going to … make the body warmer". Bert thought that this process, which was initially based on deliberation, then became automatic over many generations.

This seems to assume that bears think in similar terms to humans, that they identify a problem and reason a way through. This might be considered an example of anthropomorphism, something that is very common in student (indeed human) thinking. To what extent it may be reasonable to assign this kind of conscious reasoning to bears is an open question.

However, there was a flaw in the process described by Bert that he might have spotted himself. This model suggested that once the bear had become aware of the issue, and the need to address it, it would be able to grow its fur accordingly. That is, as a matter of will. Bert would have been aware that he is able to control some aspects of his body voluntarily (e.g., to raise his arm), but he cannot will his hair to grow at a different rate.

Of course, it may be countered that I am guilty of a kind of anthropomorphism-in-reverse: Bert is not a bear, but rather a human who does not need to control hair growth according to environment. So, just because Bert cannot consciously control his own hair growth, this need not imply the same is true for a bear. However, Bert also used the example of insulin levels, very relevant to humans, and he would presumably be aware that insulin release is controlled in his own body without his conscious intervention.

As often happens in interviewing students (or human conversations more generally), time to reflect on the exchange raises ideas one did not consider at the time, that one would like to be able to test out by asking further questions. If things that were once deliberate become instinctive over time, then it is not unreasonable in principle to suggest things that are automatic now (adjusting insulin levels to control blood glucose levels) may have once been deliberate.

After all, people can control insulin levels to some extent by choosing to eat a different diet. And indeed people can learn biofeedback relaxation techniques that can have an effect on such variables as blood pressure, and some diabetics have used such techniques to reduce their need for medical insulin. So, did Bert think that people had once consciously controlled insulin levels, but over generations this has become automatic?

In some ways this does not seem a very likely or promising idea – but that is a judgement made from a reasonably high level of science knowledge. It is important to encourage students to use their imaginations and suggest ideas as that is an important aspect of how science works. Most scientific conjectures are ultimately wrong, but they may still be useful tools for moving science on. In the same way, learners' flawed ideas, if explored carefully, may often be useful tools for learning. At the time of the interview, I felt Bert had not really thought his scheme through. That may well have been so, but there may have been more coherence and reflection behind his comments than I realised at the time.

A wooden table is solid…or is it?

Keith S. Taber

Wood (cork coaster captured with Veho Discovery USB microscope)

Bill was a participant in the Understanding Science Project. Bill (Y7) was explaining that he had been learning about the states of matter, and introduced the notion of there being particles:

So how do you know if something is a solid, a liquid or a gas?

Well, solids they stay same shape and their particles only move a tiny bit

So what are these particles then?

Erm, they're the bits that make it what it is, I think.

Ah. So are there any solids round here?

Yeah, this table. [The wooden table Bill was sitting at.]

That's a solid, is it?

Yeah

Technically the terms solid, liquid and gas refer to samples of substances and not objects. From a chemical perspective a table is not solid. A wooden table (such as those in the school laboratory where I talked to Bill) is made of a complex composite material that includes various different substances such as lignin and cellulose in its structure.

Wood contains some water, and has air pockets, so technically wood is not a solid to a chemist. However, in everyday life we do think of objects such as tables as being solid.

Yet if wood is heated, water can be driven off. Timber can be mostly water by weight, and is 'seasoned' to remove much of the water content before being used as a construction material. Under the microscope the complex structure of wood can be seen, including spaces containing air.

Bill also suggested that a living plant should be considered a solid.

I think teaching may be a problem here, as when the states of matter are taught it is often not made clear that these distinctions only apply clearly to fairly pure samples of substances. In effect the teaching model is that materials occur as solids, liquids and gases – when a good many materials (emulsions, gels, aerosols, etc.) do not fit this model at all well.

Particles in a solid can be seen with a microscope

Keith S. Taber

Image by 2427999 from Pixabay 

Bill was a participant in the Understanding Science Project. Bill was explaining that he had been learning about the states of matter, and introduced the notion of there being particles:

So how do you know if something is a solid, a liquid or a gas?

Well, solids they stay same shape and their particles only move a tiny bit

So what are these particles then?

Erm, they're the bits that make it what it is, I think.

Ah. So are there any solids round here?

Yeah, this table.

That's a solid, is it?

Yeah

Technically the terms solid, liquid and gas refer to samples of substances and not objects. From a chemical perspective a table is not solid. However, I continued, accepting Bill's suggestion of a table being solid as a reasonable example.

Okay. So is that made of particles?

Yeah. You can't see them.

No I can't!

'cause they're very, very tiny.

So if I got a magnifying glass?

No.

No?

No.

What about a microscope?

Yeah.

Yeah?

Probably

Possibly?

Yeah, I haven't tried it.

You haven't tried that yet?

No.

But they are very, very tiny are they?

Yeah.

Bill knew that the particles in a solid were very tiny. He seemed to be convinced of their existence, despite not being able to see them. He considered they were too small to be seen with a magnifying glass, but large enough to probably be seen with a microscope.

Bill, like a good scientist, qualified this answer as he had not actually undertaken the necessary observation to confirm this: but his intuition seemed to be that these particles could not be so small that they would not be visible through a microscope.

Later in the interview, Bill used the term microscopic to describe the particles in a solid, where a scientist would describe them as 'submicroscopic' (or 'nanoscopic'):

Tell me the bit about the solids again? Tell me what you said about the particles in the solids?

They move a very tiny amount, but we can't see that … because they are microscopic.

The term 'particle' used in introductory science classes is often used generically to cover atoms, molecules and ions. These entities are usually much too small to be seen with an optical (light) microscope (although other instruments such as scanning tunnelling 'microscopes' provide images showing electric potential profiles that can be interpreted as indicating individual atoms).

Students have no real basis on which to understand the scale of atoms and molecules, and often assume they are particles much like the specks and grains that can just be seen. Bill did not make this error, as later in the interview he told me that "the kind of specks of dust, has lots of particles in it, to make up the shape of it".

This becomes important later because much of chemistry supposes that many of the characteristics of substances as observed in the lab are emergent properties that result from enormous numbers of molecule-scale 'particles' (or 'quanticles') that themselves have quite different behaviour individually.

Learners, however, may assume that the properties of the bulk materials are due to the particles having those properties – so students may suggest, for example, that some particles are softer than others or that in a sponge, the particles are spread out more, so it can absorb more water.

Particles are further apart in water than ice

Keith S. Taber

Image from Pixabay 

Bill was a participant in the Understanding Science Project. Bill, a Y7 student, explained what he had learnt about particles in solids, liquids and gases. Bill introduced the idea of particles when talking about what he had learnt about the states of matter.

Well there's three groups, solids, liquids and gases.

So how do you know if something is a solid, a liquid or a gas?

Well, solids they stay same shape and their particles only move a tiny bit.

This point was followed up later in the interview.

So, you said that solids contain particles,

Yeah.

They don't move very much?

No.

And you've told me that ice is a solid?

Yeah.

So if I put those two things together, that tells me that ice should contain particles?

Yeah.

Yeah, and you said that liquids contain particles? Did you say they move, what did you say about the particles in liquids?

Er, they're quite, they're further apart, than the ones in erm solids, so they erm, they try and take the shape, they move away, but the volume of the water doesn't change. It just moves.

Bill reports that the particles in liquids are "further apart, than the ones in … solids". This is generally true* when comparing the same substance, but this is something that tends to be exaggerated in the basic teaching model often used in school science. Often figures in basic school texts show much greater spacing between the particles in a liquid than in the solid phase of the same material. This misrepresents the modest difference generally found in practice – as can be appreciated from the observation that volume increases on melting are usually modest.

Although generally the particles in a liquid are considered further apart than in the corresponding solid*, this need not always be so.

Indeed it is not so for water – so ice floats in cold water. The partial disruption of the hydrogen bonds in the solid that occurs on melting allows water molecules to be, on average, closer* in the liquid phase at 0˚C.

As ice floats in water, it must have a lower density. If the density of some water is higher than that of the ice from which it was produced on melting, then (as the mass will not have changed) the volume must have decreased. As the number of water molecules has not changed, the smaller volume means the particles are on average taking up less space: that is, they seem to be closer together in the water, not further apart*.
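A short worked example may help here. This is a sketch using rounded, textbook-style values for the densities (my own illustrative figures, not anything from the interview): for a fixed mass of water, the volume occupied as ice is larger than the volume occupied as liquid, so the same molecules must, on average, be taking up less space in the liquid.

# Sketch using approximate, rounded densities (in g/cm3) – illustrative only.
mass = 100.0                    # grams of H2O; unchanged on melting
density_ice = 0.92              # approx. density of ice
density_water = 1.00            # approx. density of liquid water near 0 degrees C

volume_ice = mass / density_ice         # about 108.7 cm3
volume_water = mass / density_water     # 100.0 cm3

print(f"Volume as ice:   {volume_ice:.1f} cm3")
print(f"Volume as water: {volume_water:.1f} cm3")
# Same mass and same number of molecules, but a smaller volume as liquid,
# so on average the molecules occupy less space ('closer together') in water.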

Bill was no doubt aware that ice floats in water, but if Bill appreciated the relationship of mass and volume (i.e., density) and how relative density determines whether floatation occurs, he does not seem to have related this to his account here.

That is, he may have had the necessary elements of knowledge to appreciate that as ice floats, the particles in ice were not closer together than they were in water, but had not coordinated these discrete components to form an integrated conceptual framework.

Perhaps this is not surprising when we consider what the explanation would involve:

Coordinating concepts to understand the implication of ice floating

Not only do a range of ideas have to be coordinated, but these involve directly observable phenomena (floating), and abstract concepts (such as density), and conjectured nonobservable submicroscopic/nanoscopic level entities.

* A difficulty for teachers is that the entities being discussed as 'particles', often molecules, are not like familiar particles that have a definite volume, and a clear surface. Rather these 'particles' (or quanticles) are fuzzy blobs of fields where the field intensity drops off gradually, and there is no surface as such.

Therefore, these quanticles do not actually have definite volumes, in the way a marble or snooker ball has a clear surface and a definite volume. These fields interact with the fields of other quanticles around them (that is, they form 'bonds' – such as dipole-dipole interactions), so in condensed phases (solids and liquids) there are really not any discrete particles with gaps between them. So, it is questionable whether we should describe the particles as being further apart in a liquid, rather than just taking up a little more space.

Particles in ice and water have different characteristics

Making a link between particle identity and change of state

Keith S. Taber

Image by Colin Behrens from Pixabay 

Bill was a participant in the Understanding Science Project. Interviews allow learners to talk about their understanding of science topics, and so to some extent allow the researcher to gauge how well integrated or fragmented a learner's ideas are.

Occasionally there is a sense of 'seeing the cogs turn', where it appears that the interview is not just an opportunity for reporting knowledge, but a genuine site for knowledge construction (on the part of the student, as well as the researcher) as the learner's ideas seem to change and develop in the interview itself.

One example of this occurred when Bill, a Y7 student, explained what he had learnt about particles in solids, liquids and gases. Bill seemed unsure if the particles in different states of matter were different, or just had different properties. However, when asked about a change of state Bill related heating to changes in the way particles were arranged, and seemed to realise this implied the particles themselves were the same when a substance changes state. Bill seemed to be making a link between particle identity and change of state through the process of answering the researcher's questions.

Bill introduced the idea of particles when talking about what he had learnt about the states of matter.

Well there's three groups, solids, liquids and gases.

So how do you know if something is a solid, a liquid or a gas?

Well, solids they stay same shape and their particles only move a tiny bit.

This point was followed up later in the interview.

So, you said that solids contain particles,

Yeah.

They don't move very much?

No.

And you've told me that ice is a solid?

Yeah.

So if I put those two things together, that tells me that ice should contain particles?

Yeah.

Yeah, and you said that liquids contain particles? Did you say they move, what did you say about the particles in liquids?

Er, they're quite, they're further apart, than the ones in erm solids, so they erm, they try and take the shape, they move away, but the volume of the water doesn't change. It just moves.

Okay. So the particles in the liquid, they seem to be doing something a bit different to particles in a solid?

Yeah.

What about the particles in the gas?

The gas, they, they're really, they're far apart and they try and expand.

Does that include steam, because you said steam was a gas?

Yeah.

Yeah?

I think.

So, we've got particles in ice?

Yeah.

And they have certain characteristics?

Yeah.

And there are particles in water?

Yeah.

That have different characteristics?

Yeah.

And particles in gas, which have different characteristics again?

Yeah.

Okay. So, are they different particles, then?

N-, I'm not sure.

There are several interesting points here. Bill reports that the particles in liquids are "further apart, than the ones in … solids". This is generally true when comparing the same substance, but not always – so ice floats in water for example. Bill uses anthropomorphic language, reporting that particles try to do things.

Of particular interest here is that at this point in the interview Bill did not seem to have a clear idea about whether particles kept their identity across changes of state. However, the next interview question seemed to trigger a response which clarified this issue for him:

So have the solid particles, sort of gone away, when we make the liquid, and we've got liquid particles instead?

No {said firmly}, when a solid goes to a liquid, the heat gives the particles energy to spread about, and then when its a liquid, it's got even more energy to spread out into a gas.

So we're talking about the same particles, but behaving differently, in a solid to a liquid to a gas?

Yeah.

That's very clear.

It appears Bill had learnt a model of what happened to the particles when a solid melted, but had not previously appreciated the consequences of this idea for the identity of particles across the different states of matter. Being cued to bring to mind his model of the effect of heating on the particles during melting seemed to make it obvious to him that there were not different particles in the different states (for the same substance), where he had seemed quite uncertain about this a few moments earlier.

Whilst this has to remain something of a speculation, the series of questions used in research interviews can be quite similar in nature to the sequences of questions used in the method of instruction known as Socratic dialogue – a method that Plato reported being used by Socrates to lead someone towards an insight.

So, a 'eureka' moment, perhaps?

K-plus represents a potassium atom that has an extra electron

Keith S. Taber

Annie was a participant in the Understanding Chemical Bonding project. She was interviewed near the start of her college 'A level' course (equivalent to Y12 of the English school system). Annie was shown, and asked about, a sequence of images representing atoms, molecules and other sub-microscopic structures of the kinds commonly used in chemistry teaching.

Earlier in her interview, when discussing the Na-plus (Na+) and Cl-minus (Cl−) symbols, she had suggested that the plus and minus signs applied to neutral atoms (with one electron more or fewer than a full outer shell), suggesting an alternative conception of electrical charge in relation to atoms, ions and molecules. She gave similar interpretations in the case of K-plus (K+) and F-minus (F−):

Right, okay, so this one here where it's got a K and a plus, what does that represent?

Potassium…An atom that has an extra electron.

Potassium atom, and it's got one extra electron over a full shell

Yeah.

and that's what the plus means, one more electron than it wants?

Yeah.

And what about the F minus?

Represents fluorine which has one, it has an outer shell of seven which has one less electron.

So, for Annie:

K+ referred to the potassium atom (2.8.8.1), not the cation (2.8.8)

and

F− referred to the fluorine atom (2.7), not the fluoride anion (2.8)

Students often present incorrect responses in class (or in interviews with researchers) and sometimes these are simply slips of the tongue or memory, or 'romanced' answers guessed to provide some kind of response.

When a student repeats the same answer at different times it suggests the response reflects a stable aspect of their underlying 'cognitive structure'. In Annie's case she not only provided repeated answers with the same examples, but was consistent in the way she interpreted plus and minus symbols across a range of different examples, suggesting this was a stable aspect of her thinking.

Lies, damned lies, and COVID-19 statistics?

A few days ago WHO reported that the UK had had over 300 000 confirmed cases of COVID-19, but now WHO is reporting the cumulative total is many fewer. How come?

Keith S. Taber

I have been keeping an eye on the way the current pandemic has been developing around the world by looking at the World Health Organisation website (at https://covid19.who.int) which offers regularly updated statistics, globally, regionally, and in those countries with the most cases.

An example of the stats. reported by the WHO (June 23rd 2020)
Note: on this day the UK Prime Minister reported: "In total, 306,210 people have now tested positive for coronavirus" which almost matches the figure shown by WHO (306 214) the next day.

Whilst the information is very interesting (and in view of what it represents, very saddening) there are some strange patterns in the graphs presented – reminding one that measurements can never just be assumed to be precise, accurate and reliable. Some of the data looks unlikely to be accurate, and in at least one case what is presented is downright impossible.

Questionable stats.

One type of anomaly that stands out is how some countries where the pandemic is active suddenly have a day with no new cases – before the level returns to trend.

This appeared to be the case in both Spain and Italy on 22nd March, and then two months later the same thing happened in Iran. One assumes this has more to do with reporting procedures than blessed days when no one was found to have the infection – although if that was the case should there not be some compensation in the following days (perhaps so in Spain above, but apparently not in Italy, and certainly not in Iran)?

Less easy to explain away is a peak found in the graph for Chile.

Suddenly for one day, 18th June, a much larger number of cases is reported: but then there is an immediate return to the baseline:

How is it possible that suddenly on one day there are seven times as many cases reported – as a blip superimposed on an otherwise fairly flat trend-line? Perhaps there is a rational explanation – but unfortunately the WHO site is rich in stats, yet does not seem to offer interpretation or explanations*. Without a rationale, one wonders just how trustworthy the stats actually are.

Obviously false information

Even if there are explanations for some of these odd patterns due to the practicalities of reporting, and the ongoing development of systems of testing and reporting, in different jurisdictions, there is one anomaly that cannot be feasibly explained – where the data is surely, and clearly, wrong.

An example of the stats reported by the WHO (July 6th 2020)

So the graph above shows the nations with the most reported cases as of the last few days. This is a more recent update than the similar image at the top of this page. Yet, the cumulative total of confirmed cases for the United Kingdom in this figure is something like 20 000 cases LESS than the figure quoted in the EARLIER set of graphs. (Note that this has allowed the UK to have lower cumulative totals than either Chile or Peru – which would not have been the case without this reduction in cumulative total.)

The total number of confirmed cases in the UK is now (7th July) LESS than it was a week ago (see above). How come? Well, a close look at the graph below explains this. The drop in cumulative numbers is due to the number of new cases that WHO gives as reported on 3rd July, when there were -29 726 new cases. Yes, that's right, minus 29 thousand odd cases.

The WHO data show negative cases (-525 new cases) for the UK on May 21st as well, but on the 3rd July the magnitude of the negative number of confirmed cases is over three times as many as the highest daily number of positive new cases on any single day (April 12th, i.e., 8719 new cases).

I can imagine that if it was identified that a previous miscalculation had occurred it might be necessary to revise previous data. But surely an adjustment would be made to the earlier data: not the cumulative total corrected by interjecting a large negative number of cases on some arbitrary date in order to put the total right. [Note: the most recent data I can find on the UK government site cites 309,360 confirmed cases as of 26th June (2020-06-26 COVID-19 Press Conference Slides) so as of yet the UK data does not show the reduction in cumulative total being published by WHO.**]
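For anyone puzzled by how a negative daily figure 'works', the arithmetic is simply that the cumulative total is a running sum of the daily counts. A minimal sketch with invented daily figures (not the WHO or UK data) shows how one large negative entry pulls the cumulative line down in a single step without any earlier day being revised:

# Sketch with invented daily counts (not real figures).
from itertools import accumulate

daily_new_cases = [800, 750, 700, -1500, 650, 600]   # the fourth entry is a one-day 'correction'
cumulative_total = list(accumulate(daily_new_cases))

print(cumulative_total)   # [800, 1550, 2250, 750, 1400, 2000]
# The running total drops sharply on the day of the negative entry,
# while all the earlier daily figures are left untouched.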

Yet surely someone at WHO must have spotted that the anomaly is bizarre and brings their reports into question. The negative cases claimed for the UK on that one day are so great that the UK line has since burrowed into the graphics for completely different countries. (See below. On the day the UK graph was located above the graphic for Mexico, the UK line went down so far it actually crossed below the line for Mexico.)

Of course, each unit in these figures represents someone, a fellow human somewhere in the world, who has been found to be infected with a very serious, and sometimes fatal, virus. Fixating on the stats can distract from the real human drama that many of these cases represent. Yet, when the data reflect something so important, and when data are so valuable in understanding and responding to the global pandemic, such an obvious flaw in the data is disappointing and worrying.

*I could not find a link to send an email; a tweet did not get a response from @WHO; and an invitation to type my question on the website was met by the site bot with a suggestion to return to the data I was asking about.

** If I subsequently learn of the reason for the report of negative numbers of cases in these statistics, I will post an update here.

Update at 2020-07-12: duplicate testing

As of Saturday 11 July 2020 at 6:20pm, the UK government reports:

Total number of lab-confirmed UK cases (total number of people who have had a positive test result): 288,953

So this is less than they were reporting a week earlier, despite their graph (for England, where most cases are because it is the most populous country of the UK) not showing any dip:

However, I did find this explanation:

"The data published on this website are constantly being reviewed and corrected. Cumulative counts can occasionally go down from one day to the next, and on some occasions there have been major revisions that have a significant effect on local, regional, National or UK totals. Data are provided daily from several different electronic data collection systems and these can experience technical issues which can affect daily figures, usually resulting in lower daily counts. The missing data are normally included in the data published the following day.

From 2 July 2020, Pillar 2 data [from "swab testing for the wider population" i.e., than just "for those with a clinical need, and health and care workers"] has been reported separately by all 4 Nations. Pillar 2 data for England has had duplicate tests for the same person removed by PHE [Public Health England] from 2 July 2020. This means that the cumulative total number of UK lab-confirmed cases is now around 30,000 lower than reported on 1 July 2020."

https://coronavirus.data.gov.uk/about

So that explains the mystery – but duplicate reporting at that level seems extraordinary! It does not support confidence in official statistics. An error of c.10% suggests a systemic flaw in the methodology being used. It also makes one wonder about the accuracy of some of the figures being quoted elsewhere in the world.

This can only mean…it's the core of a giant planet

Keith S. Taber

Image by LoganArt from Pixabay 

My interest was piqued by an item in the BBC Radio 4 news bulletin broadcast at 17.01 today (Wed 1st July 2020). Science reporter, Paul Rincon, reported:

Giant planets like Jupiter and Saturn have a solid planetary core beneath a thick atmosphere of gas, but no one has ever been able to see what these solid cores are like.
Now a team of astronomers has found an object that fits the description, orbiting a star like the sun that's 730 light years away.
The team knew the object must be the core of a giant planet,
because it is unusually dense for its size. It's about three and a half times bigger than earth, but thirty nine times heavier.
Objects with these properties are thought to draw gas in to form giant planets. Astronomers are planning more observations which could test ideas about how planets form.

Item on BBC Radio 4 PM News on 1st July 2020

Some years back I wrote a Reflection on Teaching and Learning Physics column called 'Documentaries can only mean one thing', as a response to my frustration about how science in the media was presented. Even quality science programmes that I really enjoyed, such as Horizon documentaries, would commonly include the phrase "this could only mean".

This was seldom, if ever, justified. What was meant was that the scientist concerned considered one particular interpretation to be convincing. The scientists themselves were unlikely to ever claim that there was only one possible interpretation of the data – but the writers and journalists clearly thought that viewers needed certainty.

This is important, of course, because we try to teach young people about the nature of science – and that means that all theories are underdetermined by evidence, and that scientific knowledge is technically provisional – open to being revisited in the light of new evidence or a new perspective that makes better sense of the data. Science involves argumentation to seek to persuade the community, and even when consensus is reached, the community can later revisit the issue.

So any suggestion that "this can only mean [whatever]" belongs in a different domain to the natural sciences – perhaps logic or mathematics. Any journalists who insist on presenting science in this way are undermining the work of school science teachers charged with teaching young people about the nature of science. So my question here is:

[did] the team know the object [that is, the object believed to be 730 light years away] must be the core of a giant planet?

No, of course not. This might be their current preferred interpretation, but it is not something they know for certain.

Because, they are not ideologues, or prophets, but scientists.

And the one thing a scientist can know for certain, is that scientific knowledge is never certain.

Nothing random about a proper scientific evaluation?

Keith S. Taber

Image by annca from Pixabay 

I heard about an experiment comparing home-based working with office-based working on the radio today (BBC Radio 4 – Positive Thinking: Curing Our Productivity Problem, https://www.bbc.co.uk/sounds/play/m000kgsb). This was a randomised controlled trial (RCT). An RCT is, it was explained, "a proper scientific evaluation". The RCT is indeed considered to be the rigorous way of testing an idea in the social sciences (see Experimental research into teaching innovations: responding to methodological and ethical challenges).

Randomisation in RCTs

As the name suggests, a key element of a RCT is randomisation. This can occur at two levels. Firstly, research often involves selecting a sample from a larger population, and ideally one selects the sample at random from the population (so every member of the wider population has exactly the same chance of being selected for the sample), so that it can be assumed that what is found with the sample is likely to reflect what would have occurred had the entire population been participating in the experiment. This can be very difficult to organise.

More critically though, it is most important that the people in the sample each have an equal chance of being assigned to each of the conditions. So, in the simplest case there will be two conditions (e.g., here working at home most workdays vs. working in the office each workday) and each person will be assigned in such a way that they have just as much chance of being in one condition as anyone else. We do not put the cleverest, the most industrious, the tallest, the fastest, the funniest, etcetera, in one group – rather, we randomise.

If we are randomising, there should be no systematic difference between the people in each condition. That is, we should not be able to use any kind of algorithm to predict who will be in each condition because assignments are made randomly – in effect, according to 'chance'. So, if we examine the composition of the two groups, there is unlikely to be any systematic pattern that distinguishes the two groups.

Two groups – with elements not selected at random (Image by hjrivas from Pixabay)

Now some scientists might suspect that nothing happens by chance – that if we could know the precise position and momentum of every particle in the universe (contra Heisenberg) … perhaps even that probabilistic effects found in quantum mechanics follow patterns due to hidden variables we have not yet uncovered…

How can we randomise?

Even if that is not so, it is clear that many ways we use to randomise may be deterministic at some level (when we throw a die, how it lands depends upon physical factors that could in principle, even if not easily in practice, be controlled) but that does not matter if that level is far enough from human comprehension or manipulation. We seek, at least, a quasi-randomisation (we throw dice; we mix up numbered balls in a bag, and then remove them one at a time 'blind'; we flip a coin for each name as we go down a list, until we have a full group for one condition; we consult a table of 'random' numbers; whatever), that is in effect random in the sense that the researchers could never know in advance who would end up in each condition.

When I was a journal editor it became clear to me that claims of randomisation reported in submitted research reports are often actually false, even if inadvertently so (see: Non-random thoughts about research). A common 'give away' here is when you ask the authors of a report how they carried out the randomisation. If they are completely at a loss to answer, beyond repeating 'we chose randomly', then it is quite likely not truly random.

To randomise, one needs to adopt a technique: if one has not adopted a randomisation technique, then one used a non-random method of assignment. Asking the more confident, more willing, more experienced, less conservative, etc., teacher to teach the innovation condition is not random. For that matter, asking the first teacher one meets in the staffroom is arbitrary and not really random, even if it may feel as if it is.
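For contrast with the assignment method described below, here is a minimal sketch of an actual randomisation technique (the participant labels and the two conditions are invented for illustration): shuffle the list of volunteers and then split it, so that nobody, researcher included, could predict in advance who ends up in which condition.

# Sketch of randomised assignment to two conditions (illustrative labels).
import random

volunteers = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(volunteers)              # the randomisation step
half = len(volunteers) // 2
home_working = volunteers[:half]        # e.g., 'work at home' condition
office_working = volunteers[half:]      # e.g., 'work in the office' condition

print("Home working:  ", home_working)
print("Office working:", office_working)
# By contrast, assigning by odd/even birth date is deterministic: anyone's
# condition could be predicted in advance from their date of birth.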

…they were randomised, by even and odd birthdates…

The study I was hearing about on the radio was the work of Stanford Professor Nick Bloom, who explained how the 'randomisation' occurred:

"…for those volunteers, they were randomised, by even and odd birth dates, so anyone with an even birth date, if you were born on like the 2nd, the 4th, the 6th, the 8th, etcetera,of the month, you get to work at home for four out of five days a week, for the next nine months, and if you are odd like, I'm the 5th May, you had to stay in the office for the next nine months…"

Professor Nick Bloom interviewed on Positive Thinking: Curing Our Productivity Problem
Image by Jeevan Singla from Pixabay 

So, by my definition, that is not randomisation at all – it is totally deterministic. I would necessarily have been in the working at home condition, with zero possibility of being in the office working condition. If this had been random there would have been a 50:50 chance of Prof. Bloom and myself being assigned to the same condition – but with the non-random, systematic assignment used it was certain that we would have ended up in different conditions. So, this was a RCT without randomisation, but rather a completely systematic assignment to conditions.

This raises some questions.

  • Is it likely that a professor of economics does not understand randomisation?
  • Does it really matter?

Interestingly, I see from Prof. Bloom's website that one "area of [his] research is on the causes and consequences of uncertainty", so I suspect he actually understands randomisation very well. Presumably, Prof. Bloom knows that strictly there was no randomisation in this experiment, but is confident that it does not matter here.

Had Prof. Bloom assigned the volunteers to conditions depending on whether they were born before or after midnight on the 31st December 1989, this clearly would have introduced a major confounding variable. Had he assigned the volunteers according to those born in March to August to one condition and those born in September to February to the other, say, this might have been considered to undermine the research as it is quite conceivable that the time of year people were gestated, and born, and had to survive the first months of life, might well be a factor that makes a difference to work effectiveness later.

Even if we had no strong evidence to believe this would be so, any systematic difference where we might conjecture some mechanism that could have an effect has to be considered a potential confound that undermines confidence in the results of an RCT. Any difference found could be due to something other (e.g., greater thriving of Summer babies) than the intended difference in conditions; any failure to find an effect might mean that a real effect (e.g., home working being more efficient than office working) is being masked by the confounding variable (e.g., season of birth).

It does not seem conceivable that even and odd birth dates could have any effect (and this assignment is much easier to organise than actually going through the process of randomisation when dealing with a large number of study participants). So, in practice, it probably does not matter here. It seems very unlikely this could undermine Prof. Bloom's conclusions. Yet, in principle, we randomise in part because we are not sure which variables will, or will not, be relevant, and so we seek to avoid any systematic basis for assigning participants to conditions. And given the liberties I've seen some other researchers take when they think they are making random choices, my instinct is to want to see an RCT where there is actual randomisation.