Classic human science issues


The nature versus nurture debate

One of the classic human sciences debates, one which draws on many other aspects of TOK, is the ‘nature versus nurture’ debate. You are probably familiar with it: the central question asks whether our behaviour and ability to interact with other people depend on the genes we have received from our parents, or on the upbringing they have given us.
The implications of the answer are vast. If it is nature that determines our behaviour, then new research into our genetic codes may well allow us to predict reliably, from the moment a person is born, how he or she will turn out. If it is nurture, then we can soon expect demands for legislation to ensure that people bring up their children in the ‘right’ way. Either reality could lead to a society with two distinct classes. In the former scenario, there would be those genetically programmed to be successful and productive, and those who are not; in the latter, the law could distinguish those who follow the rules of child-rearing from those who do not.
The reason why this is a great debate for TOK is that it can be approached from a variety of angles – based on our ways of knowing and areas of knowledge.

An approach based on natural sciences

Natural sciences can help to some extent in evaluating the genetic makeup of a person, and in tracking that makeup through different generations. They are most useful when it comes to assessing genetic traits that are physical (for example, blood type and eye colour), less useful when it comes to behavioural traits and traits shaped by both environment and genes (our abilities and skills in interacting with other people), and least useful of all in assessing traits shaped exclusively by environment (our religion, language, tastes, etc.).
Natural scientists investigating this issue are very careful about their use of language. The term ‘nurture’ for them is usually abandoned for something more tangible and measurable, for example, ‘shared family traits’ (ie conditions that apply to the whole family) and ‘non-shared family traits’. In addition, they tend to use labels such as ‘heritable factors’ rather than the more wide-ranging ‘nature’.
The areas that they measure are also termed with a great deal of care. In order to assess personality, they divide our most important characteristics into the ‘Big Five’ personality traits: openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism. In addition, they also assess IQ, giving an assessment of intelligence. Based on studies carried out in the US, and on common sense, these aspects of our characters seem to be largely the result of nature – ie, inherited qualities. To give just one example, The New York Times in 2006 estimated that 75% of our IQ is inherited.
But how does one properly study this? If one is looking at inherited traits in animals, it is considered ethical to undertake breeding programmes to observe how the offspring turn out. But this is obviously out of the question for humans. Much of the research is therefore based on siblings – particularly twins – and on the results of their being brought up separately or in the same household. These studies reveal a sliding scale in the similarities between siblings: identical twins show far more similarities in traits than randomly selected pairs of people; biological siblings also show significantly more similarities, though not to the extent of identical twins; and for adopted siblings, some studies have found no more similarities than are found in pairs of strangers. This seems to show that, for some traits at least, environment plays little or no part.
But there have been criticisms of these studies, arguing that they can never be truly scientific and therefore definitive. One vocal critic is the psychologist Jay Joseph, whose book The Gene Illusion outlines the case against them.

An approach based on human sciences

In order to assess how important the role of a person’s environment is in shaping their personality, one must turn to the human sciences. One study carried out by the psychologist Judith Harris, which led to her book ‘The Nurture Assumption’, questions the importance of parents in forming a person’s character. Harris’s hypothesis is that if a trait such as aggressiveness is shared by a child and his or her parents, this trait is probably congenital rather than environmental. Environment does play an important part in determining someone’s personality, but Harris argues that the effect of a child’s peers is much stronger than the effect of his or her parents.
Harris never went as far as saying that the parental role was irrelevant: obviously parents have an important job to play in the selection of a child’s peers, and in helping to strengthen interpersonal relationships in the home.
Some critics of the book said that Harris did not consider other environmental effects, such as television and video games; others pointed out that she used selective data, not taken from a wide enough survey to properly prove her hypothesis.
Having said that, various ‘big name’ psychologists such as Steven Pinker supported the findings of the book.

Combining the two

Clearly, this is no black and white issue. We are not created wholly by our genes, nor is environment solely responsible for our character. The likelihood is that the two work in parallel, and also in a mutually dependent way.
For Massimo Pigliucci, the prominent philosopher and biologist, this is where the answer to the debate lies. He cites the ‘reaction norm’ as the key to our personalities. In his essay ‘Beyond Nature versus Nurture’, he explains this thus:

Simply put, a reaction norm is the set of all possible morphologies and behaviours that a living organism with certain genes can exhibit whenever exposed to a variety of environmental conditions. Biologists have quickly come to realise that if one changes either the genes or the environment, the resulting behaviour can be dramatically different. The trick then, is not in partitioning causes between nature and nurture, but in what is technically known as ‘genotype-environment interactions’, the way genes and environments interact dialectically to generate an organism’s appearance and behaviour.

Put more simply, the effect of genes is variable, and may differ according to the environment. Pigliucci cited an experiment by Cooper and Zubek in the late 1950s as an example of how rats with very different genetic conditioning can produce similar performances in maze-running when the environments are similar.
However, he also pointed out that this conclusion was based on testing rats – and doing the same with human beings simply isn’t possible. Also, some scientists believe that because many genetic traits are so dependent on environment, the whole debate is fallacious. Some congenital diseases, for example, can be treated – one that is often cited is phenylketonuria (PKU). This disease can be treated by eliminating phenylalanine from the diet – thus neutralising the effects of a genetic trait by environmental means.

An approach based on reason

Following on from that last point, perhaps we can argue that the whole debate rests on a fallacy – a false dichotomy between two factors that cannot, in practice, be separated?

An approach based on ethics and philosophy

Partly, this whole question depends on your own standpoint. If you are someone who buys into the Platonic idea of ideal forms (ie, we are born with an unconscious knowledge of what is truly real, and only through continual questioning and thinking do we access that knowledge), then clearly inheritance plays a big part in your view of humanity. If, instead, you are an Aristotelian ‘tabula rasa’ person (ie, we are born with a blank slate, and our experiences gradually build up on this slate to shape the person we are), then there is no need to incorporate any genetic information into your ideas of who we are.
The argument was advanced further by John Locke and Thomas Hobbes in the 17th century. Locke, an empiricist, was a firm believer in the tabula rasa, although he thought that our basic natures were essentially good, and that our society should reflect this in terms of individual rights and equal voting. Hobbes, in contrast, felt that we are under the control of our natures, which are inherently violent and selfish; society should therefore be built around controlling our urges – a view that in some ways foreshadowed the more authoritarian models of government.

An approach based on history – how have our ideas on this debate developed over time?

It’s useful, also, to consider how our ideas have shifted over time. This indicates how conclusions that we believe are based on solid evidence at the time can prove to be less than certain.
The traditional (ie pre-18th century) view of human nature was that it was divinely ordained. Not only was our behaviour orchestrated by God, the hierarchy of our different societies and races was also set in stone – in other words, the supposed superiority of Western Europe, and the supposed savagery of the tribes of the New World. In addition, there were held to be inherent differences in the make-up of individuals in society, in terms of gender, class and other factors.
An ever-expanding range of political and philosophical movements sought to explain our behaviour in terms of their own take on nature versus nurture: Nazism was based on racial theories; Communism on the idea that we are shaped almost entirely by our society, leading to the set-up of their very different states.
These days, you will very possibly find yourself on unpopular ground if you suggest that our characters and traits are determined solely by our natures. This is partly because of the weight of evidence against it, but also because of Nazi connotations and associations with the thankfully discarded pseudo-science of eugenics. ‘Scientific’ justifications for discriminating against supposedly unfit members of society are rightly viewed with dismay and suspicion.

Are humans inherently good or evil? The Milgram Experiment

The human sciences also differ from the natural sciences in that there is almost always a purpose to their research. Whilst natural scientists investigate in order to explain – the behaviour of social insects, for example, or the composition of the earth’s crust – human scientists investigate in order to arrive at a course of action that will reform or improve an aspect of society. So the natural scientist will have reached his or her goal after explaining why ants move in the formation they do, but a human scientist will take the results of research into how people commute to work and use them to suggest how public transport might be made more efficient.
So there is an extra ethical edge to the work of human scientists that is often (though not always) absent from the work done by natural scientists. This has led some human scientists to argue that what their field lacks in objectivity, it makes up for in significance.
How human scientists carry out their research is another issue that sets their work apart from natural science. It is true that natural scientists also use ethically controversial methods – biologists carrying out live animal experimentation, for example. But for human scientists, the fact that they are researching other human beings means they must constantly question the moral boundaries of their methods. One example of this is the famous Milgram experiment, which took place in the early 1960s.

The background to the Milgram Experiment

Our need to understand a particular human phenomenon is always shaped by the social climate of the time, and the Milgram experiment was no exception. The experiment began three months after the trial of Adolf Eichmann, one of the chief architects of the Final Solution, had begun in Jerusalem. As the world was hearing about the extent of his crimes against humanity, people were asking once again: how was it possible for such things to occur, and could they happen again? Stanley Milgram, a Yale University psychologist, devised his famous experiment to answer these questions directly. He wanted to discover whether it was possible for human beings to perpetrate unimaginable crimes simply by ‘following orders’.

Devising the experiment

Milgram’s experiment was fairly straightforward. In one room sat a ‘learner’ (or victim). In the other room sat the ‘experimenter’ and the ‘teacher’ (the volunteer). Only the volunteer was an actual participant – the others had been trained to act their parts, and were aware of the true nature of the experiment. Milgram advertised for volunteers, and eventually signed up 40 people for the first round of experiments.
The volunteers were told that they were involved in an experiment to investigate learning and memory. They drew lots before the experiment started to assign their roles, but the draw was rigged so that every volunteer was given the role of teacher. The teachers were told that they would read out a word, which the learner had to match with another word chosen from a list of four possibilities. If the learner got the word right, the teacher would proceed to the next question. If he got it wrong, the teacher had to administer an electric shock – and the teachers were given a sample shock themselves to demonstrate what this would feel like. A second wrong answer would receive a shock increased by 15 volts, a third wrong answer a shock increased by another 15 volts, and so on, up to a maximum of 450 volts.
The teacher could not see the learner, but could hear his supposed reactions – which were of course simulated, with a tape-recorder playing a pre-recorded sound for each level of shock and the learner faking increasingly violent screams. The learner had also made sure that the teacher was aware he had a heart condition. The screams were accompanied by bangs on the wall, but after a certain point, all sounds from the learner ceased.
The volunteers could stop the experiment at any point. But the experimenter had four standard responses, which he gave in the following order whenever a volunteer asked to stop:

The first was, ‘Please continue.’
The second was, ‘The experiment requires that you continue.’
The third was, ‘It is absolutely essential that you continue.’
The fourth was, ‘You have no other choice, you must continue.’

If this fourth prod still failed to persuade the volunteer to continue, the experiment was halted.

Results and conclusions

Milgram asked various members of the university – both students and professors – what they thought the results would be. Very few believed that the teachers would continue administering electric shocks after the voltage became painful. They were wrong. In the first round of experiments, 65% (26 out of 40) of the volunteers proceeded all the way to the maximum 450-volt charge, although many were obviously uncomfortable doing so. Every volunteer went to at least 300 volts before refusing to continue. Milgram came to a chilling conclusion, all the more resonant in the social climate of the time:

Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process. Moreover, even when the destructive effects of their work become patently clear, and they are asked to carry out actions incompatible with fundamental standards of morality, relatively few people have the resources needed to resist authority.

The experiment has been repeated on numerous occasions, both by Milgram and by others, in different cultures and with many different adaptations. One, carried out by Sheridan and King, addressed the suspicion that the volunteers in Milgram’s original experiment knew the set-up was fake: they used a puppy hooked up to electric wires, and administered real shocks. 20 out of 26 of their ‘teachers’ obeyed the commands to apply the maximum shock to the animal.

The ethical dimension

There are many ethical questions connected to this experiment. They include:

  1. Was it ethically correct to ‘fake’ an experiment, and mislead volunteers as to the nature of what was being investigated? Or given the nature of human beings studying human beings, is this the only way to properly carry out such research?
  2. Was it ethically correct to put the volunteers under so much stress? (Many of them were visibly disturbed during the experiment, though a poll conducted later found that 84% of them professed to be ‘glad’ to have taken part.)
  3. Can the subject matter be ethically justified – ie, the capacity of human beings to participate in something immoral – or should some things remain untouched by human scientists?
  4. What are the ethical implications of the results, and how should we act on them?


Other ethical questions in human sciences

The Milgram experiments give us one very good example of a controversial issue being investigated in a controversial way. There are many more general ethical questions that come up across the board in the human sciences, all of which are worthy of further investigation. We have considered how the human sciences can be used in a more cynical way – for example by advertising agencies and political campaigners. Is there a line we can draw when it comes to influencing the way people think, in terms of persuading them to part with their money or their votes? In terms of research, what are the implications of human beings studying human beings? In other words, how might the subjects of a study be influenced by knowing that they are being watched – will their behaviour change, for example? And what about the use of the data collected by human science studies? Was the Victorian politician and Prime Minister Benjamin Disraeli right when he said:

There are three kinds of lies: lies, damned lies, and statistics.

A quick glance at the methods of some statisticians – whose work is often carried out not to investigate an issue, but to confirm a particular ‘truth’ held by an interest group – would seem to confirm this. Methods of arriving at ‘false’ statistical evidence include:

  1. Discarding unfavourable evidence
  2. Using loaded questions to guide volunteers
  3. Relying on biased samples to gather data
  4. Deliberately confusing correlation with causation
  5. Use of selective language to present evidence
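To make the third of these pitfalls concrete, here is a minimal Python sketch of how relying on a biased sample distorts a result. All the figures are invented for illustration: a hypothetical ‘income survey’ that only polls the richest quarter of a population will report a very different average from the true one.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population: 10,000 household incomes drawn from a
# skewed (log-normal) distribution. Every number here is invented.
population = [random.lognormvariate(10, 0.5) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Biased sample: surveying only the richest 25% of households
# (eg polling shoppers outside an expensive store).
biased_sample = sorted(population)[-2_500:]
biased_mean = sum(biased_sample) / len(biased_sample)

print(f"Population mean income:    {true_mean:,.0f}")
print(f"Biased sample mean income: {biased_mean:,.0f}")
# The biased sample reports a far higher 'average income' than
# the population actually has - yet every data point is genuine.
```

The point the sketch makes is the one the list makes: nothing in the biased figure is fabricated, which is precisely what makes such statistics persuasive and misleading at the same time.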

There are many other ways in which statistics can give us a skewed picture. Having said that, the use of statistics can be of vital importance – and a look at the work of Hans Rosling will admirably demonstrate this.

Cite this page as: Dunn, Michael. Classic human science issues (10th May 2013). Last accessed: 20th March 2018

