Business/Finance

Facebook tramples human research ethics and gets published by PNAS for the effort

Facebook may have experimented with controlling your emotions without telling you

I start out an angry bastard on most days, but that’s just before coffee. After that, I actually lighten up and quite enjoy life and laughter. I’m really not the bitter old curmudgeon I tend to unleash when I write. Even much of my political ranting is spent more tongue-in-cheek and facepalming than actually risking a real aneurysm.

But this pisses me right off.

If you’re not familiar with human research, I urge you to brush up on something called an Institutional Review Board. Sure, it’s Wikipedia, but for our purposes it’s sufficient to get you up to speed here. At hospitals that engage in animal testing, tons of paperwork outlining methods, protocols, etc. must be filed with the appropriate review board before a mouse so much as gets injected with saline as a control. In academic settings, psychologists must do much the same when testing any of their theories before they can proceed. Hell, I know a grad student in philosophy who had to jump through hoops before she could even pose thought experiments to human subjects.

To many, this might seem a bit absurd. How could asking someone questions hurt them? Or expose the institution to risk and/or litigation? It’s plausible that a question could pull the rug out from under someone, leading to what, an existential crisis? A crisis of faith? These might change the subject’s behaviors going forward, and those behaviors, in retrospect, might appear in a poor light and be construed as damage. Hell, in this day and age, there are plenty of litigious souls who would consider having a sad “damage.”

From an institutional point of view, IRBs have many functions, but at the end of the day it’s largely about mitigation of risk and liability.

More importantly, these boards are about ethics. There is a right way and a wrong way to conduct research, especially when it involves humans. What those right and wrong ways are is the subject of a great deal of research and debate, but that is exactly because it is so very important. Ethicists and the professionals who rely on them have a vested interest in doing what’s right. Sometimes that is because doing the right thing is simply the right thing to do. Sometimes it’s risk management. Sometimes it’s about building and protecting a brand. What company today really wants to have its brand associated with unethical human studies?

We have an answer to that question now. Facebook.

Here’s the kicker, as I see it:

None of the users who were part of the experiment have been notified. Anyone who uses the platform consents to be part of these types of studies when they check “yes” on the Data Use Policy that is necessary to use the service.

Facebook users consent to have their private information used “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.” The company said no one’s privacy has been violated because researchers were never exposed to the content of the messages, which were rated in terms of positivity and negativity by a software algorithm programmed to read word choices and tone.

Seriously? This might pass muster for some legal beagle whose answer to the question, “what does this law mean?” is “it depends on who is paying me.” This does not pass my sniff test even remotely. Truly, when you signed up for Facebook, did you even bother to read this policy before you consented? For most of you, the answer is, “of course not. Who the hell reads these things? I just want to see pictures of kittehs.” For the rest of you, in your wildest dreams did you imagine that “research” as mentioned in the agreement meant you’d conceivably be used in…not just marketing research or computer systems testing of some kind, but actual psychological or sociological research?

Did you know that you were consenting to have your emotional state manipulated?

693,003 people in particular probably did not.

How many wives got black eyes after this experiment?

How many road rage episodes were triggered?

How many razor blades went from bad idea to suicide attempt?

We’ll never know. The risk of even one, especially in the garish context of corporate research for profit, is too great. Whether or not you think I’m being silly is of no importance. What matters is that Facebook made that decision for you, back when you probably didn’t bother reading the terms, or, like me, naively thought those terms meant things other than this.

Worse, Proceedings of the National Academy of Sciences legitimized this travesty of human research ethics by publishing this paper. Granted, Facebook is no Mengele. Hell, like it or not, Mengele’s unethical, nay, barbaric methods have provided valuable medical data that we benefit from to this day, data that could never have been gained in any other manner. As a global society of civilized humans, we were supposed to have learned something from that and applied it.

Apparently we didn’t. I can only hope there is sufficient and legitimate outcry from tried-and-true ethicists to keep, if not the likes of Facebook from doing this again, then at least august journals like PNAS from aiding and abetting the abrogation of ethics so basic that even first-year sociology students learn a thing or two about them by way of Tearoom Trade.

—-

Image credit: Tolbasiaigerim @ Wikimedia Commons. Licensed under Creative Commons.

 

17 replies »

  1. Wow. Yeah, this is out of bounds. Thing is, they could have done the same study ethically. They have the data on everyone, so they could simply have scored N users’ organically occurring (ha haha – organic, because FB doesn’t do any manipulation or targeting already) feed content for emotional charge, then measured their responses. It would have been a little harder, maybe, but the results would have been every bit as valid – maybe more so, in fact.

    • Indeed. Add to that the following study, which they published months ago and which reached similar conclusions with nary a whiff of unethical research practices. There was zero need for their methodology, especially to achieve such insignificant results, and then they add insult to injury with “but it’s in the ToS.” Every time something like this happens with FB I get that much closer to opting out altogether. I’m just not ready to make the switch to Twitter in hopes of better behavior from a different corporation. The price we pay for exposure, social networking, and kittehs!

      http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0090315

  2. When I was in college, back in the Pleistocene, I had to participate in a couple experiments to complete my required course in psychology (for a degree in business, BTW). As I recall, we were all informed that we were participating in an experiment, and everyone had the option to leave any time s/he felt uncomfortable. Maybe Mark Zuckerberg shouldn’t have dropped out of college. He might have learned about these protocols.

    • You and me both, and thanks to the “anonymous” aspect of “participant” selection we’ll never know. I know I tend to exaggerate a bit here and there in my overall writing to make a point, but I really don’t think this is much of a stretch when the sample size was so huge.

      A proper statistician would have my head on a pole for the following sloppy napkin math, but I think somewhere in these loose numbers there’s something horribly suggestive about the likely outcome here.

      In 2010, there were 38,364 suicides in the US. http://www.nytimes.com/2013/05/03/health/suicide-rate-rises-sharply-in-us.html?_r=0 Rates of suicide vary significantly among age groups, so I’m entirely unclear how one would easily pair the overall suicide rate with the Facebook study’s N of 693,003. That, and it’s believed that, for a host of reasons, suicide is under-reported. Dirty napkin math leaves me starting here.

      Population of the US in 2010 was 308,745,538. http://www.nytimes.com/2013/05/03/health/suicide-rate-rises-sharply-in-us.html?_r=0

      Overall suicides as a percentage of population: ~0.012%

      Facebook study’s N: 693,003

      0.012% of 693,003: ~86
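
      Here’s the same napkin math written out as a tiny Python sketch, just to make the arithmetic explicit. The figures are the ones cited above, and the assumption that the national rate simply scales down to the study’s N is exactly as naive as the napkin version:

          suicides_2010 = 38_364              # NYT figure cited above
          us_population_2010 = 308_745_538    # 2010 census count
          study_n = 693_003                   # Facebook study's N

          rate = suicides_2010 / us_population_2010   # ~0.000124, i.e. ~0.012% per year
          naive_expectation = rate * study_n          # ~86 per calendar year, if the rate scaled naively

          print(f"overall rate: {rate:.4%}")                                        # 0.0124%
          print(f"naive expectation over N={study_n:,}: {naive_expectation:.0f}")   # ~86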

      Clearly I can’t legitimately generalize from these numbers to an actual rate of suicide among Facebook users. But in searching for info on that front I just encountered another alarming and disturbing feature of this study. From the study:

      “People who viewed Facebook in English were qualified for selection into the experiment.”

      And,

      “Participants were randomly selected based on their User ID, resulting in a total of ∼155,000 participants per condition who posted at least one status update during the experimental period.”

      There is no mention of eliminating participants based on age! To be sure, I searched for the terms child, teen, youth, minor, and age. Nothing. From what I can see, there is nothing to suggest that this study didn’t include minors, a group at significant enough risk of suicide that *this* is news: Social media raises fear of teen suicide contagion, May 3, 2014, USA Today. http://www.usatoday.com/story/news/nation/2014/05/02/social-media-raises-fear-of-teen-suicide-contagion/8641457/ That, and Facebook is concerned enough about the overlap of suicide and Facebook usage that it partnered with the National Suicide Prevention Lifeline in 2011. https://www.facebook.com/notes/facebook-safety/new-partnership-between-facebook-and-the-national-suicide-prevention-lifeline/310287485658707

      86: the number of suicides that, if the percentage actually scales down like that, would occur inside a calendar year among an N of 693,003. What are the odds that this number holds? That at least one “participant” did or will commit suicide? Even worse, that such a suicide might have been a minor?

      Not knowing truly can be worse than knowing.

  3. My major problem with this is that anyone over 13 can have a Facebook page. In the publication, the authors asserted that by clicking the Facebook user agreement people were somehow giving informed consent for this … [bad word] nonsense. How about this: below the age of 18 a minor CANNOT CONSENT – of course they can assent (and that should be sought if they’re capable of understanding what they’re participating in). There are particular ethical rules for special populations; I would know, I work with pregnancy and pediatrics all the time. How in the everlasting hell did these people think it was okay? And have they heard about the teen suicide rate lately? Look, there aren’t any words for this.

    And let’s have a conversation about peer review. Where was the peer reviewer who raised this issue? Or did these guys get a soft review because of their provenance (Facebook, Cornell)? Or were all the reviewers similarly in IT/computer science … because a responsible editor would, presumably, have assigned at least ONE psychologist or psychiatrist to review this paper due to the emotional manipulation aspect (there are also ethical rules about IRB requirements when deception is used).

    Finally, where is the IRB statement in the paper? And which IRB, precisely, reviewed this study? And if none, how in the blue [another bad word] did PNAS let this pass through?

    I’ve been fuming about this for a few days now, and am completely outraged. I’m very pleased to hear that there are others concerned about this. Several of us at my institution are writing a Letter to the Editor at PNAS. We’re both dumbfounded and disgusted.

  4. My son took his life 12/4/2012 and was constantly on Facebook.
    I do believe social media contributed to him taking his life.
    Why is the week in January being brought up? And he started messaging a friend in early February 2012 that he had suicidal thoughts.
    I just don’t understand; he was a good kid, but Facebook brought the worst of him out.
    I’m still very depressed and deal with many kids that write me and are suicidal and POST it on Facebook, yet people just scroll past most of the time.
    I don’t.
    I try and help, but it’s very hard.

  5. You’re right, Facebook probably greatly overstepped.
    I’ve just been wondering: has anyone done a serious legal analysis of whether FB’s terms & conditions really do not stipulate consent to the user eventually being experimented upon? Especially when we have to take into account that some search engines and social media sites alter search results or your dashboard on a daily basis, and that it’s in fact their business model, so the T&C probably were approved by competent lawyers.

    But on the level of ethics or morals, they now seriously crossed the line.