A topic in 'Science & Ethics'
Invitation to an electrocution
Consider this abbreviated account of 'an experiment on learning' carried out in the 1960s by the researcher Stanley Milgram:
"In the basic experimental design two people come to a psychology laboratory to take part in a study of memory and learning.
One of them is designated a 'teacher' and the other a 'learner'.
The experimenter explains that the study is concerned with the effects of punishment on learning."
Milgram, 1973
The work took place in a university psychology department (at Yale). Volunteers had responded to an advertisement offering a small fee ($4 – this was a long time ago!) to those prepared to help in an experiment.
On arrival, a volunteer finds that another person has been asked to come at the same time, as the investigator needs two people, in different roles, to take part in the experiment. The volunteer draws lots with the other arrival to see who will take on each role. Let's say our volunteer is assigned the role of 'teacher'.

The researcher shows the two guests around the lab. At one desk there is a large control panel with rows of switches, set up for one of the guests. They are then shown another seat in a small ante-room, provided with some electrical equipment. It is explained that the 'learner' will sit in this chair, connected to the electricity supply, while the 'teacher' works the control panel next door. The researcher oversees the process and makes notes.
Illustration of the setup of a Milgram experiment. (Source: Wikimedia Commons, under the Creative Commons Attribution-Share Alike 4.0 International license.)
Sitting at the control panel, the 'teacher' sees the switches are labelled:
"The following designations are clearly indicated for groups of four switches. Going from left to right:
Slight Shock
Moderate Shock
Strong Shock
Very Strong Shock
Intense Shock
Extreme Intensity Shock
Danger: Severe Shock
Two switches after this last designation are simply marked XXX."
Milgram, 1973
It is explained that the point of the experiment is to see if punishment will support learning. The learner has to learn a list of word associations on which they will be tested. The teacher's job is to present the stimulus word, judge whether the correct response has been given (no response counts as an incorrect answer), and issue an electrical shock each time the learner fails. The first shock will be only 15V, but each incorrect response means moving to the next switch and a higher potential difference – in principle up to 450V if the learner keeps failing. This will be uncomfortable, but should not be dangerous…apparently.
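A quick arithmetic check ties the quoted switch labels to these voltages (this count is derived from the excerpt above, not stated there explicitly): seven named designations of four switches each, plus the two marked XXX, gives thirty switches, which in 15 V steps spans 15 V to 450 V:

$$(7 \times 4 + 2) \times 15\,\mathrm{V} = 30 \times 15\,\mathrm{V} = 450\,\mathrm{V}$$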
The learner may, however, show signs of increasing discomfort. For example:
- "At 75 volts, he grunts;
- at 120 volts, he complains loudly;
- at 150, he demands to be released from the experiment.
- As the voltage increases, his protests become more vehement and emotional.
- At 285 volts, his response can be described only as an agonised scream.
- Soon thereafter, he makes no sound at all."
Milgram, 1973
Now this experiment was carried out over 50 years ago, before modern procedures were introduced to require ethical approval for such studies.
Do you think this experiment is ethical?
If not, why not?
Participant response
You may wonder how we know what the 'learner' would be doing when given 285V shocks – surely the 'teachers' would have abandoned the experiment before getting to that point? Some did. But when a 'teacher' raised objections, the researcher calmly told them they needed to continue to the end of the experiment (no shouting, no anger, no threats – just a firm instruction). Most of the volunteers carried on.
The 'learners' who made mistakes gave signs of discomfort, pain, and fear, and expressed concern about medical conditions. They screamed, and eventually seemed to be unconscious (if indeed alive). 'Teachers' generally gave signs of concern, and made rational statements about the need to check up on their 'learner'. But most still carried on giving the shocks (perhaps now to a corpse?) until they had completed the experiment. After all, the 'learners' had volunteered, and the researcher seemed confident nothing was amiss.
Was harm done?
It is generally considered unethical to harm others.
A deontological approach to ethics might have a principle/rule that it is wrong to act to harm others.
A consequentialist approach to ethics might suggest it is only acceptable to act to harm others when this brings about a greater good or avoids a greater wrong. (So, from this perspective, it might be argued it can be acceptable to torture a suspected terrorist in custody to find out where they have planted a bomb that might kill many innocent people. Others might view torture as ALWAYS unethical.)
Most people would think that giving others painful shocks is harmful. A person following a rule-based approach to ethics might then see giving painful shocks as inherently unethical, because it goes against the rule not to do harm.
A consequentialist might think there are situations where it would be appropriate to administer such shocks, but only where there was good reason to believe this would lead to outcomes that could justify the act.
- To save the life of a child?
- To find a bomb planted in a city centre?
- To complete a run of a university psychology department experiment?
Of course, there are many possible complications. A person may feel they should never normally, of their own volition, harm others, but that in a situation where the 'victim' has themselves agreed to the mistreatment this might be acceptable: e.g., a volunteer in an experiment who has consented to receive electrical shocks because they see this as an acceptable price for achieving some other goal (to contribute to scientific knowledge, perhaps, or to earn $4?).
Nowadays, bona fide academic researchers are expected to follow specific procedures to ensure participants in such studies have given voluntary, informed consent: that they have agreed to participate in full knowledge of what is involved, and without undue pressure (e.g., "you do not have to volunteer to help in my research, but just remember I will be marking your exams and writing your references"!). In today's money, the $4 would be about $40 or £30 – hardly a fortune, but perhaps enough to persuade someone who was very poor to volunteer as an extreme measure?
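As a very rough check on that conversion (these are my own ballpark assumptions, not figures from the original account: cumulative US consumer-price inflation of roughly a factor of ten between the early 1960s and the 2020s, and an exchange rate of about $1.30 to the pound):

$$\$4 \times 10 \approx \$40, \qquad \$40 \div 1.30 \approx \pounds 30$$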
Would you volunteer for such an experiment, knowing you might be assigned as the 'learner', to earn £30? Might a person in poverty with young children to feed feel differently?
Although small inducements are usually considered acceptable as an appreciation of participants' time and engagement, these are expected to be modest enough not to act as a substantive incentive to participate.
Read about voluntary informed consent in research
Misleading participants
What counts as sufficient information for informed consent? In many fields the volunteers will have neither the background knowledge nor the interest to seek out or understand a detailed and technical description of the background to the research. There may be complications with some groups, such as young children, where considerable simplification may be needed to ensure understanding. (Normally, with children or other vulnerable people, consent is also needed from parents or guardians.)
However, sometimes researchers may feel they cannot offer much information, as this might bias the research. In the chapter, I point out:
"There is also sometimes a need to limit what is shared to avoid undermining research quality. Consider a researcher investigating whether teachers show different patterns of interaction with boys and with girls in their classes. Observed teachers who had been told the researcher wished to see if they treated boys and girls differently during lessons cannot be assumed to be behaving as they might have done without knowing that was the focus."
Clearly, ethical research needs to be valid research (no researcher can justify the time and other resources needed for research if they already believe their research design is invalid and cannot provide reliable outcomes). So, sometimes researchers feel the need to mislead volunteers to some degree, to ensure their behaviour is not altered by being briefed at the start of the experiment. This should only be done:
- when necessary;
- when the potential value of the research can be considered to justify deception;
- when a full debriefing will report and explain any such deception.
It is often said that study participants should be able to withdraw from a study at any point during the process – so a participant could, on being told after the experiment that they had been deceived, ask to have their data withdrawn from the study. This gives researchers a strong incentive to avoid deception unless it really is necessary and participants are likely to accept it.
A study of obedience
Milgram misled his study participants (as you may already know, or will likely have guessed).
He was not studying the effect of punishment on learning at all. He was studying obedience to authority.
The 'learner' was not really another volunteer, but a confederate of the researcher – someone working for him. (So that was a deception in itself.) The drawing of lots was fixed so the real volunteer was always assigned the 'teacher' role. The 'learner' was reading from a script, and only pretending to get electric shocks.
So, no one was (electrically, at least) shocked.
Does this mean no one was harmed?
There has been considerable focus on the potential harm to the volunteers. No one was physically harmed, but the 'teachers' were asked to give, and believed they were giving, electric shocks to total strangers just because those strangers could not remember that, say, the word 'roof' was meant to elicit the response 'floor' rather than 'attic'. What about the mental stress this must have provoked? What about the later reflection on their own behaviour, when recalling how they continued to shock the 'learner' despite all the signs of pain, distress, and apparent loss of consciousness?
(It seems few, if any, volunteers guessed that the shocks were not real. Some follow-up work suggested there was little long-term harm to volunteers, but could this have been confidently predicted?*)
Was it ethical to mislead (i.e., lie to) study participants in the way Milgram needed to do in order to design a valid study? Does what was learnt justify this level of deception?
What was learnt?
Milgram's studies became widely known, not because of any ethical qualms, but because of their findings, which were initially rejected by many other commentators.
Milgram found that although some participants did drop out of the experiment as it proceeded to the higher-voltage shocks, MOST of his volunteers continued to give shocks to the very end. In his original study, 63% of the participants completed the experiment – giving (apparently) the highest-rated shocks. (The musician Peter Gabriel wrote a song that appears on his 'So' album called 'We Do What We're Told (Milgram's 37)', referring to the 37% who refused to continue at some point.)
It was widely suggested this was, at best, an artefact of the particular population studied. Milgram carried out a whole programme of related studies, making small changes to the conditions, but found his original results were corroborated. Milgram's book, Obedience to Authority, details this work, and offers research students a very good example of what is meant by a research programme, in which each individual study develops and builds on what has been done before.
Read about research programmes
Replication studies were carried out in many other parts of the world – often finding that even higher proportions of volunteers continued to the very end of the experiment.
It seems Milgram had revealed an aspect of human nature – a level of obedience to authority that most people found very surprising.
Why was this research important?
Milgram found that most people are prepared to follow instructions to harm others when those instructions are delivered by an authority figure – here, a university researcher with a white coat and a clipboard. There were no threats to force the 'teachers' to comply (not even the threat of non-payment: the $4 was for their time in attending). There was no suggestion of any bad consequences for the volunteers if they declined to follow instructions. They were simply asked, firmly but politely, to continue, and told it was necessary for the experiment.
The authority figure had no permanent relationship with the participants: he was a stranger they met at the laboratory, and whom they would likely never meet again. The only hold he had over them was that they had agreed to help in the experiment. There was no negative repercussion for refusing to continue with the experiment, except disappointing the scientist. But for most people, the situation was enough to ensure obedience.
It can be very difficult to evaluate a historical study in its own terms. Perhaps it does not surprise you that most people obeyed the authority figure in these circumstances – but, if so, that might be because Milgram's work has become so well known and influential.
Milgram’s motivation
Experimental studies are usually motivated by hypotheses. Milgram suspected that common thinking about obedience to authority was in error. The background was the Second World War, and in particular the phenomenon of Nazism. Under Hitler, Germany invaded a number of its neighbours (and took over Austria) and began eliminating subgroups of the population on ideological grounds.
Most famously, the Nazis introduced 'racial purity' (scientifically, an oxymoron!) laws that identified people as ethnically Jewish (even where the individuals concerned had long ago converted to Christianity – the Nazis had a primitive notion of ethnic identity aligned with 'blood'). These laws meant such people were (initially) ineligible to work in state roles, for example, and were considered guilty of rape if they had (or were suspected of) sexual relationships outside their ethnic group. The 'final solution' went beyond this, seeking to move all Jews to 'work' camps (where they would be slave labour, working under conditions in which they were not expected to live long) or to extermination camps designed for efficient mass killing.
The Nazis also targeted other groups, such as Romany ('gypsy') peoples, communists, trade union leaders, and disabled people – but it is thought that something like six million Jews were killed by the Nazis, just for being (according to German law, regardless of their own self-identity) Jews.
Besides the concentration camps, many atrocities were carried out as part of the war. The German nuclear physicist Karl Wirtz was overheard (as one of a group of scientists held in England for some months after Germany had surrendered**) telling colleagues about an incident during the occupation of Poland, when the German elite 'SS' (Schutzstaffel) troops had visited a girls' high school, marched out the academically top class, and shot all the girls.
There was a widespread view that, although the Nazi regime had led to many such atrocities, this could surely be put down to the ideological fervour of a relatively small group of fanatics – such as committed Nazis who joined the SS and the Gestapo (the secret police) – whereas the average German would never have been complicit in such actions.
What Milgram found in his study was the worrying result that most people will obey an authority figure very readily, feeling that they are 'just obeying orders' (and, of course, many ordinary soldiers were involved in the mass detentions and mass murders carried out by the Nazis).
Sadly, we cannot put atrocities down to a few mad ideologues, as many ordinary people will become complicit if they feel they are given orders by a valid authority. (And Milgram showed it did not take much for someone to be seen as a genuine authority figure.)
Although the Nazis are in the past, more recent history reminds us of just how easily ordinary people may become subject to such authority. Most recently (at the time of writing), the Israeli Defence Force, charged with responding to horrendous and evil acts of terrorism carried out against innocent civilians in Israel, has shown that ordinary soldiers will readily carry out orders to massacre innocent civilians and children, as well as surrendered prisoners, sometimes targeting (and at best, having no concern for collateral damage to) journalists, hospitals, ambulances, schools, refugee camps, etc. These war crimes have not only been carried out by extremists, but often by (and sometimes celebrated by) ordinary citizens drafted to serve in the armed forces.
We might like to think that we personally would never have taken part in the holocaust (or in other such atrocities, such as the genocidal campaign in Gaza) – but the evidence suggests that in most cases (i.e., for most of us) we would just have followed orders when they were given by someone we perceived as having due authority. Milgram's work gives us some insight into how ordinary people come to be complicit in the most horrible crimes while telling themselves they are just following orders and doing their duty.
Taking into account the motivation of the research, have your views shifted at all?
Was it ethical to mislead (i.e., lie to) study participants in the way Milgram needed to do in order to design a valid study? Does what was learnt justify this level of deception?
* The Milgram obedience experiments have been subject to detailed and extensive commentary. The account here is intended to support reflection on the ethical aspects of the research. If you are interested in learning more about this research, you should seek out a range of perspectives on the work.
** A group of German scientists (Bagge et al., 1993) who had been working on nuclear research during the war were held for debriefing at a farm just outside Cambridge for some months, to find out how close Germany had been to developing an atomic bomb. Secret microphones were used to eavesdrop on their 'private' conversations. It seems the detainees believed the British would never be clever (or sneaky) enough to use such underhand methods.
Works cited:
- Bagge, E., Diebner, K., Gerlach, W., Hahn, O., Harteck, P., Heisenberg, W. C., . . . Wirtz, K. (1993). Operation Epsilon: The Farm Hall transcripts [translations of conversations in German] (unknown, Trans.). Institute of Physics.
- Milgram, S. (1973). The perils of obedience. Harper's Magazine (December), 66-77.
- Milgram, S. (2010). Obedience to Authority. Pinter & Martin. (Original work published 1974.)