Participant Effects and Popular Science

As you’ve gathered from the last few posts, I’ve been spending the majority of my non-lecture time in uni, hiding out in my semi-underground lab and testing people. I’ve found the process of researching interesting, but it has also worried me a bit: doing my dissertation research has shown me there are many more things to take into account than I expected.

While organisation isn’t my strong point, it can be resolved fairly easily in normal lecture and seminar environments. During data collection, on the other hand, keeping track of many different variables and responsibilities becomes incredibly important, and my difficulty with it has almost got me into trouble already.

Example 1: I accidentally gave a participant only the first page of my two-page information sheet. Even though it cut off mid-answer, they didn’t indicate anything was wrong until a later measure asked for their participant ID code, the instructions for which were on the second (missing) page of the information sheet. While I apologised and sorted out the missing sheet and their ID code, it still felt weird. Ethically, it also meant they hadn’t given fully informed consent.

Similarly, I forgot to print off enough copies of the consent form one day, so I had to improvise and email the consent form to a participant after the experiment. Again, that was not the proper protocol.

On a separate occasion, I accidentally gave a participant an extra copy of one of the test sheets. While the sheet was obviously exactly the same as the one they had just completed, they did it again anyway, thinking the duplicate was a trick or part of the experiment.

In a real-life situation, the majority of people would have asked for clarification, or at least asked whether the extra sheet was needed. Because this was known to be an experiment, however, all bets at common sense were off.

This idea scares me, for various reasons.
Firstly, it means that if I screw up when interacting with my participants, it may well go unnoticed.

More importantly, think of the big picture: how many experiments could be unknowingly flawed, while the participants, because they’re expecting an experiment, don’t think to question anything? Participants in psychology experiments are usually psychology students, who are taught how to critically evaluate research. So the problem isn’t who we’re testing, in that sense; it’s a symptom of something bigger.

What if people don’t question illogical parts of science and experiments only because they aren’t used to naturalistic expressions of science? When it comes to science, people only see the end results, the successes and the figureheads.

Also, we’re generally only taught about the biggest or weirdest findings and methods, not what 90% of research actually involves.

Students, and especially the general public, aren’t taught the process of doing research: how often things have to be changed and debated at every step of the way, or are piloted and found to be obviously flawed. So what if, as soon as people know they’re in an experiment, they expect something unnatural, something that doesn’t match what they’ve learnt about, and are therefore unlikely to question it when they’re given something that goes against common sense?

If the process and the mistakes of science were taught as clearly as the successes, would people then be more strict with their researchers? In current thinking, that’s not going to be seen as a good thing: after all, a term we were taught even at GCSE was the “screw-you effect”, for disobedient or over-perceptive participants working against the experimenter. However, if it’s right, then it has to be done, even if it makes researchers’ jobs more complicated.

There are some places and people that have started to make this change. In popular science, Ben Goldacre’s books (Bad Science and its sequel, Bad Pharma) have done a lot to publicise how theories and knowledge aren’t as clear and unambiguous as research often makes them seem. lolmythesis.com (where graduates sum up their dissertations in one pretension-busting sentence) and the Twitter hashtag #overlyhonestmethods (where researchers show what they really mean by their methods) both give an unusual insight into the behind-the-scenes, human parts of research. I’ve already racked up a few #overlyhonestmethods posts myself, thanks to experiments at uni.

Both those sites have the potential to be useful in showing people both what not to do and how easily things can go wrong. Personally, I would really like to see a Research Design class use material like this to work out which parts of research are likely to go wrong, as well as taking part in more studies to learn about the research environment.

Let me know: would you have liked to learn about what goes wrong in science, and how researchers can mess up? And do you think tags like #overlyhonestmethods will help or hinder learning about research?
