And by this, I am of course referring to the controversial experimental results published this week that show Facebook’s ability to induce emotions in people.
Although the study was actually carried out two years earlier (January 11th-18th, 2012), the paper was approved in March 2014 and first published at the end of June 2014. Since the first news stories about it broke last week, bodies such as the UK Information Commissioner's Office and the US National Academy of Sciences have been investigating whether the experiment should have been approved.
However, the National Academy of Sciences is the body that published the paper in the first place; this goes beyond locking the stable door after the horse has bolted, and more resembles investigating the door for possible structural problems after the horse has already caused property damage.
One thing that won't be apparent to people who haven't read the original paper (which is free to access, link here) is the small scale of the results. In the positive condition, where 10% of posts containing positive content were hidden, people later used 0.1% fewer positive words and 0.04% more negative words. In the negative condition, where 10% of posts containing negative content were hidden, people later used 0.07% fewer negative words and 0.06% more positive words.
Taking just that week into account, the effect is tiny: we're talking about one or two words in every thousand being different across the participants' next week of status updates. So the actual results appear less noteworthy than the fact that the experiment was allowed in the first place.
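To make the scale concrete, here is a rough back-of-the-envelope sketch. The 1,000-words-per-week figure is my own illustrative assumption, not a number from the paper:

```python
# Rough scale of the reported effects, using the percentages quoted above.
# Assumption (for illustration only): a user writes about 1,000 words of
# status updates in a week.
words_per_week = 1000

# Positive-reduction condition: 0.1% fewer positive words,
# 0.04% more negative words.
fewer_positive = words_per_week * 0.001   # positive words "lost" per week
more_negative = words_per_week * 0.0004   # negative words "gained" per week

print(round(fewer_positive, 2))  # roughly 1 word fewer
print(round(more_negative, 2))   # roughly 0.4 of a word more
```

In other words, even under a generous assumption about how much people post, the shift amounts to about one word per person per week.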
The main criticisms of the Facebook experiment are:
1) Participants were randomly selected by User ID number, so no demographic information was collected and we have no way of knowing who was involved. As the minimum age to use Facebook is 13, minors could conceivably have been included in the study, which adds a further ethical concern.
2) Facebook justified including people by arguing that consent was implicit in the terms and conditions of using Facebook. In a Guardian article about the experiment, a Facebook spokesperson claimed that whether or not the word "research" appears makes no difference to how the information is used. This essentially amounts to Facebook saying "all companies do this anyway; at least we admitted to it". Even if that is true, which it might be, it is still a weak defence. This wasn't an amateur project; it was possibly the largest-scale psychological research project ever. If the combination of the world's largest social network, the US Government, and an Ivy League university didn't expect any problem with the research being published, they were being incredibly naive.
2b) Further to this, even the institutions mentioned above, the university, and Facebook's own spokespeople gave conflicting answers on whether the study breached any ethical guidelines. If the people who should know what they are doing don't, how can they expect people without expertise in human-subjects research, and without much awareness of how scientific experiments work, to be completely comfortable with being included in one? This is an impressively large communication breakdown… ironic, for a social network.
3) Given its size and immediacy, Facebook could easily have used other consent methods to run this study more fairly. (I'll post more details, and some potential ethical screening questions websites could use.) It is understandable not to want to fully inform people, as the experiment depended on participants not knowing their News Feeds were being manipulated. However, that doesn't mean the only option was deception, and a company the size of Facebook should probably have learnt that by now.
The main problem with this experiment wasn't really the face-value results, but the mere fact that it happened without us knowing. As selection was by User ID, we have literally no way of knowing who took part, or even whether some of the people writing news stories about it were among the 689,000 participants. I think it's the uncertainty of not knowing whether we were manipulated that is driving most of the controversy around this story.
The study found that negative emotions were slightly more contagious than positive ones; bearing this in mind, everyone spreading the most negative stories about the experiment is perhaps not the best way of dealing with it. A better option would be to publicise the actual article as widely as possible so people can read the raw facts, and to take note of what the investigating bodies say. Putting pressure on Facebook to be more transparent about their terms and conditions, and about what they mean by "using our data", also seems a sensible response, so that future research can be carried out more ethically.