Imagine a research proposal involving deliberate manipulation of participants’ emotions. The investigator proposes to subtly alter the environment to see whether these changes make people feel happier or sadder, and whether participants change their behavior as a result. Both the environment and the behavior are arguably public, and when participants first entered this “public” space, they had to acknowledge agreement to its terms, which include use of their data (although the likelihood that they actually read those terms is close to nil).
This kind of project gives IRB reviewers and administrators fits. Would it qualify for exempt category 2 review, in that it could be called an observation of public behavior? Do participants have the right to be informed about the project and given an opportunity to opt out? Did they give up any such rights when they electronically accepted the initial terms of use? Would telling subjects about the study in advance change their behavior during participation? Is it ethical to intentionally make people feel less happy?
The above scenario may sound familiar. The study was conducted by Facebook and has been widely discussed in the popular media. As a private company, Facebook is not obligated to follow the requirements of the Common Rule, meaning the project could proceed without any sort of informed consent process or opt-out opportunity. The Common Rule did not apply even though two co-authors were affiliated with Cornell University at the time. The study results were published in the Proceedings of the National Academy of Sciences (PNAS) along with an Expression of Concern from the journal editor specifically addressing the issues of consent and voluntary participation.
We encourage our IRB reviewers and administrators to take a look at the PNAS publication, particularly the Expression of Concern, at the link above. A good (and fairly brief) summary of the ethical issues involved was also published in the Los Angeles Times. The LA Times article notes that academicians not involved in the project may be more concerned about the ethical implications than social media users, who have “long accepted that targeted ads and extensive data collection are permanent features of life online.”
People involved in human research protection programs, such as those of us at the UAMS IRB, recognize that social media platforms can be valuable research tools. As their use in research becomes more widespread, we will need to be mindful of the resulting ethical and regulatory concerns. Under the law, Facebook had every right to carry out this project. Whether it was entirely ethical to do so, however, is not so easily decided.
Please contact Edith Paal in the IRB office if you have any trouble accessing the documents through the links above.