Ethical principles for psychologists have always been guidelines. There has never been a fixed set of rules stating exactly what researchers can and can’t do. This has to come down to peer reviewers considering the specific needs of a particular piece of research. For example, most studies require a little bit of deception; if you told the participants the entire aim of the study beforehand, you would get biased results. The guidelines are simply in place to stop people taking it to extremes.
But what happens when research is conducted online? Should the rules be changed? Our digital age means that the internet is the biggest source of information in the world, so it was inevitable that it would be used to conduct research at some point. The advantages are undeniable. Quick, easy access to data from thousands of people from across the world… A great new way to abolish cultural biases in data, you may think.
But can the basic ethical principles really be upheld via the internet? The BPS and APA basic ethical guidelines are: informed consent, right to withdraw, protection from harm, professionalism in the role of responsibility with clients, integrity, justice and fair practice, and respect for people’s rights and dignity. More details of these guidelines here (http://www.apa.org/ethics/code/index.aspx?item=3).
Waskul (1996) uses the terms ‘publicly-private’ and ‘privately-public’ to demonstrate how the Internet does not fit easily into existing spatial metaphors. Should we assume that because people are willing to post this information online, they are aware it could be used by anyone? I would argue not. Sharf and Jones (1999) reported that when a woman from a mailing list was contacted to gain her informed consent, she was “somewhat hostile, assuming that [the researcher] had behaved voyeuristically, taking advantages of people in distress” and that “the idea of using the conversations as data had not occurred to many members.”
Often, when data is obtained from internet sources such as chatrooms, it is already anonymous. But that makes it nearly impossible to track down the people who wrote the comments and ask for their consent. In addition, the right to withdraw is never an option. It may seem OK to use this information since there’s no way it could be traced back to the individual who wrote it, but imagine seeing a comment you had written in what you assumed was a private forum appear in a research article or magazine. It can be pretty personal stuff, too. For example, some online chatrooms are set up to support people with terminal illnesses. Many internet users who turn to these kinds of support networks and chatrooms will not even have considered the possibility that what they write online may be used for research (Eysenbach & Till, 2001; http://www.bmj.com/content/323/7321/1103.full).
“[T]he very possibility of research depends on the good will of the populations studied. To needlessly alienate them by exploiting their online activities where consent and involvement might have been negotiated is not only disrespectful, but also destructive of the research enterprise” (Bakardjieva & Feenberg, 2001; http://www.sussex.ac.uk/cspt/documents/issue9-4.pdf)
Obviously, not all research done online involves taking previously posted information and using it without consent. There are ways of gathering data online that fit more comfortably with ethical standards. For example, you could set up an online forum or questionnaire specifically for what you are trying to investigate, which people could participate in by choice. They would then be aware of their participation and, by entering into discussions there, would be giving a form of informed consent.
However, data from the internet may not be as convenient for researchers as it seems, either. There is no guarantee that information posted online is true or accurate, so online data sources should be treated with care. An extreme example of contrived online information was the case of two bloggers who were supposedly American lesbian women but were actually British heterosexual men (http://www.guardian.co.uk/world/2011/jun/14/weird-world-lesbian-hoaxers). This shows just how different supposedly personal information found online can be from what it seems… Not exactly a reliable source for publishable research.
OK, so last week I talked about someone whose gender was a little unclear, but for the vast majority of us, it’s a pretty definite ‘either/or’ situation. There are some undeniable differences between the males and the females of the species, and that’s a good thing.
So the question I’m asking this week is how we should deal with those differences in psychological research. Is it better to study men and women completely separately, or to run a mixed study?
The BPS Guidelines (http://www.bps.org.uk/sites/default/files/documents/code_of_ethics_and_conduct.pdf) say that we should aim to make our research valuable for all, but is that ever really possible? To get a fully representative sample of just the whole of Britain, we would have to recruit participants from every city, town and village. Every age range, from every city, town and village. Every ethnicity, from every city, town and village. That’s without accounting for variations in gender, disabilities, learning histories… PHEW! It’s easy to see why researchers stick to small samples and then generalise from them!
Many famous studies have been criticised over generalisation issues. For example, Milgram’s famous obedience study used an unrepresentative sample of only American men, and of those, only the ones who replied to his advertisement. Despite this, Milgram concluded that “obedience to authority is not a feature of German culture but a seemingly universal feature of human behaviour”. He may well have been right, and many follow-up studies in various countries have found similar results, but I don’t think he had enough evidence from his study alone to start talking about ‘universal’ features of behaviour. (http://www.holah.karoo.net/milgramstudy.htm)
All participants in Asch’s study of conformity were college-age males. Were they representative? (http://www.psycontent.com/content/h128830k244141t6/)
George Miller (1962) estimated that 90% of experiments done in America used college students, mostly psychology students who were required to participate for course credit (a lot like SONA) and yet the results are still generalised to the entire American population, and often as far as Britain, Western Europe, etc. (http://www.holah.karoo.net/experimental_method.htm)
So on the one hand, these studies possibly aren’t very representative. They only represent American men, and American men are clearly going to differ in major ways from, say, European women. On the other hand, is there such a thing as a fully representative study? Is it even possible? These studies did find some pretty significant results, and they sparked off many follow-up studies that addressed some of the generalisation issues (repetitions in different countries and cultures, with different age groups, genders, etc.). So maybe starting with fairly limited samples isn’t such a bad thing. It’s certainly more reliable than trying to cover everybody all at once!
As long as the research is reported accurately and doesn’t claim to represent more than it does, I believe small, specific samples are the way to go.
I’m not even going to get started on animal studies being generalised to humans… I’ll leave that for you to think about.