Business/Finance

Open letter to Mark Zuckerberg: you owe us one hell of an explanation

Did Facebook’s scientific study contribute to user suicides? We’ll never know, but statistics demand that we ask the question.

Dear Mr. Zuckerberg:

As the title of this post indicates, you owe us one hell of an explanation. Indulge me, if you will.

As you are undoubtedly aware, your company, Facebook, recently had a scientific study published online in the Proceedings of the National Academy of Sciences of the United States of America (PNAS). I would naturally assume, social media being your element, that you are aware of a degree of outcry about the ethical lapses that appear evident in your study’s methodology. I doubt you registered my own outrage, so ICYMI, here it is.

A key element of my expressed outrage is this:

Did you know that you were consenting to have your emotional state manipulated?

693,003 people in particular probably did not.

How many wives got black eyes after this experiment?

How many road rage episodes were triggered?

How many razor blades went from bad idea to suicide attempt?

As I noted to a commenter on my post, “I know I tend to exaggerate a bit here and there in my overall writing to make a point, but I really don’t think this is much of a stretch when the sample size was so huge.”

Allow me to explain.

A proper statistician would have my head on a pike for the following sloppy napkin math, but I think somewhere in these loose numbers there’s something horribly suggestive about the likely outcome here.

In 2010, there were 38,364 suicides in the US. Suicide rates vary significantly among age groups, so I’m entirely unclear how one would map the overall suicide rate onto the Facebook study’s N of 693,003. That, and it’s believed that, for a host of reasons, suicide is under-reported. Dirty napkin math leaves me starting here.

Population of the US in 2010 was 308,745,538.

Overall suicides as a percentage of population: roughly .01% (more precisely, about .012%)

Facebook study’s N: 693,003

.01% of 693,003 = 69.3 (~69, hereinafter, 69)
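For anyone who wants to poke at the arithmetic, here is the same napkin math as a few lines of Python. It is only a sketch of the scaling above, using the figures quoted in this post and the same rounding shortcut I took (the unrounded rate runs a bit higher).

    # Napkin math only: scale the 2010 US suicide rate down to the study's N.
    suicides_2010 = 38_364            # reported US suicides, 2010
    population_2010 = 308_745_538     # US population, 2010 census
    study_n = 693_003                 # participants in the Facebook study

    rate = suicides_2010 / population_2010   # ~0.000124, i.e. roughly 0.012%
    rounded_rate = 0.0001                    # the ~.01% used in this post

    print(round(rounded_rate * study_n, 1))  # 69.3 -- the "69" above
    print(round(rate * study_n, 1))          # ~86.1 if you skip the rounding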

Clearly I can’t legitimately generalize from these numbers to an actual rate of suicide among Facebook users. But in searching for info on that front I just encountered an added alarm-raising and disturbing feature of this study. From the study:

“People who viewed Facebook in English were qualified for selection into the experiment.”

And,

“Participants were randomly selected based on their User ID, resulting in a total of ∼155,000 participants per condition who posted at least one status update during the experimental period.”

There is no mention of excluding participants based on age! To be sure, I searched for the following terms: child, teen, youth, minor, and age. Nothing. From what I can see, there is nothing to suggest that this study didn’t include minors, a group at high enough risk of suicide that this is news: “Social media raises fear of teen suicide contagion,” May 3, 2014, USA Today. That, and Facebook is concerned enough about the overlap of suicide and Facebook usage that it partnered with the National Suicide Prevention Lifeline in 2011.

69. The number of suicides that, if the percentages actually scale down as my napkin math suggests, would occur inside a calendar year among an N of 693,003. What are the odds that this number holds? That at least one “participant” did or will commit suicide? Even worse, that such suicide might have been a minor? Is that a chance you or anyone should have taken?

If I’m off by an order of magnitude, we’re only talking about 7 suicides.

If even that is off by an order of magnitude, we’re still only talking about maybe one.

Only.
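To put a rough number on “what are the odds”: if you treat each of the 693,003 participants as an independent draw at the napkin rates above (a crude assumption, I know, and purely illustrative), the chance of at least one suicide in a group that size is easy to sketch:

    # Rough odds of at least one suicide in the sample, assuming independence.
    # The three rates correspond to roughly 69, 7, and 0.7 expected cases.
    study_n = 693_003

    for yearly_rate in (0.0001, 0.00001, 0.000001):
        p_at_least_one = 1 - (1 - yearly_rate) ** study_n
        print(f"rate {yearly_rate}: expected {yearly_rate * study_n:.1f}, "
              f"P(at least one) = {p_at_least_one:.1%}")

Even if my numbers are off by two orders of magnitude, that crude model still leaves roughly a coin flip’s chance that at least one participant did, or will, die by suicide within the year.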

Not knowing truly can be worse than knowing.

On that note, I have a special request for any parents who read this. Heaven forbid you are currently dealing with the unimaginable grief of losing a beloved child, or ever will be, but if your child committed suicide and was a user of Facebook during the week of January 11–18, 2012, please let us know and consult with your attorney. Mr. Zuckerberg might especially owe you an explanation.

UPDATE:

Thanks to an astute reader, I’ve learned that my dirty napkin math was dirtier than anticipated. In my haste I zigged a decimal point when it should have zagged. Corrected numbers are now reflected in the post.

—-

Image credit: Jason Kuffer @ flickr.com. Licensed under Creative Commons.

 

42 replies »

  1. Facebook is so bad I hardly know what to say. I second what Mr. Balsinger writes above. I recently heard that they rake in over one million U.S. dollars per hour, or maybe it was per minute. By the way, I am a free market capitalist and no whiner on financial inequality; but I never bought a share of Facebook, nor would I at any price, because I like to think I own a couple of scruples. I could not forget about the one suicide I knew of via media that was related to Facebook. It is possible that they sell advertising space to pharmaceutical companies that sell anti-depressants. I do not know. If they do, I, for one, would want no part of them.

  2. 0.01% of 693,003 is 69.3. Yes, 69 suicides are 69 too many, but come on, check your math before you publish!

    • In Mr. Balsinger’s defense, it’s not possible to have 3/10 of a suicide, so I suppose that’s why he didn’t say that.

      • Thanks for the defense. As it turns out, I’d misplaced a decimal and in my rhetorical haste didn’t catch it before posting. 69 is still a tragically high number about which to speculate. 693, far more horrifying, was just entirely in error. On updating the “orders of magnitude” bit at the end, it still allows for the unseemly possibility of even one. If that happened, and we’ll never know, I wouldn’t want to consider that one person a rounding error.

        Facebook is apparently trying to spokesperson the issue away, so I have little hope of them actually owning any potential culpability. But with the firestorm of negative press the story is getting, even a tiny dip in the stock price at the opening bell tomorrow might give Zuckerberg and company something to ponder before doing something like this again…at least in the public eye. However wrong their methodology, or culpable the researchers and/or other parties between Facebook and PNAS, whoever made the decision to publish this rather than keep it confidential and proprietary should, by all rights, be dusting off their resume.

    • Quite correct. Thank you for calling that to my attention. I have updated the post accordingly.

  3. What exactly does this demonstrate, though? All you’re saying is that if the Facebook study population followed the same trends as the general US population, 69.3 people in that study may have killed themselves. And so? You’d find 69.3 suicides in any 693,000 people you pick. What is Facebook’s role? Is there any evidence of excess suicides or attempts in the Facebook population? Are you trying to suggest that the 69.3 suicides in the Facebook study killed themselves because they saw more negative posts in their newsfeed?

    I don’t see how that could possibly be ascertained, or why Facebook choosing to show people more negative feeds is associated with suicides and their showing of more positive feeds isn’t associated with prevention. Or why virtually every news source on the planet concentrating more on bad news than good isn’t also being asked for an explanation over hypothetical deaths and wife beatings.

    • Mainly, what I think this demonstrates is that when a corporation crosses the line from “marketing” research into “academic” research (which is why publication in PNAS is an issue), and the stated methodology is the intentional manipulation of human subjects without explicit informed consent (ToS be damned, that’s why academics in particular are having a fit), they have simply gone too far. No amount of speculation will ever convince a multi-billion dollar enterprise to actually concede even a tiny bit that they did anything wrong. That way lies even more openings for litigation and costly settlements (like they’d ever want this to hit a courtroom). So my hope, and I can’t speak for anyone else’s, is that this is just enough of a pain in the collective Facebook ass that they’ll change tactics going forward.

      As for finding that number of suicides in any population, your reasoning appears correct to me. I’ll leave that one for the pro academics in the event there’s an issue there. The difference I see is that any other population of 693k didn’t necessarily have unethical researchers who damned well knew better from their education and training intentionally helping to tip the balance against them.

      I hope you personally have never had to wrestle with the kind of depression and despair that leads to “permanent solutions to temporary problems,” i.e., suicide. I’d be curious to hear what depression sufferers would have to say about someone breaching professional ethics to specifically go out of the way to make things appear just that much worse on any given day. With all the news we see about people who finally snap after dealing with _____ for just too long, however sympathetic or asinine such reasons may be, I wouldn’t hesitate for a moment to think that just one. More. Bad. Day. is the difference between someone losing their shit in traffic, balling up a fist for a swipe at someone, or thinking “fuck it” and finally reaching for the pills. It’s not for me to guess where their breaking point is. It sure as hell isn’t the place of hack researchers to take that chance, either.

      • Unfortunately I have had to deal with depression for a very long time, and frankly not all problems are temporary. Yes, people can snap and everyone has their breaking point, but it is an enormous leap to blame Facebook and claim that they somehow should have known that feeding people bad news would make them kill themselves. While it is possible, it is just as possible that anything anybody does could make someone snap. This wasn’t an experiment aimed at making people feel miserable, and you are acting like that was either the aim or a highly likely outcome, yet you only offer hypothetical proportions that demonstrate literally no difference between the Facebook experiment and everybody else living their lives.

        Facebook does a lot of bad things, but I do not see the connection you’re trying to build here at all. There’s no data and no grounds for it. You’re claiming Facebook killed children based on the theory that maybe seeing a negative news item made someone kill themselves. It doesn’t follow unless you are going to blame everything else ever.

    • John asks a very good question. Here’s my take.

      The margin between the decision to take one’s own life and the decision to carry on can be razor thin. While it would be ludicrous to assert that someone committed suicide because of a negative Facebook item, it’s fair to wonder about a manipulation that adds one more straw to an already overburdened camel’s back. Human subjects research guidelines can be an unholy pain, but they exist for a reason.

      • I still don’t understand why we are jumping to the assumption that seeing more negative Facebook items than one otherwise would means Facebook share some hypothetical responsibility in hypothetical suicides that might have occurred. Any straw can be the final straw, but those who are suicidal are hit with straws from every quarter every day. Facebook is being singled out here and there is still no clear rationale as to why, and no answer to the point that they could be just as culpable for hypothetically cheering someone up. There’s also no explanation of why Facebook’s experiment makes them partly responsible for battered spouses and CNN’s or Budweiser’s business model doesn’t.

        • You’re right about a lot of this, John. You never know which straw will be the one that breaks the proverbial camel’s back. And no, there isn’t data to make a case that FB’s study had an effect. Given some of the parameters, I doubt such data is even possible to present within anything like a reasonable degree of confidence.

          I also know better than to buy the big lie sweeping America these days that if it can’t be measured it isn’t real and it doesn’t matter. The bottom line here is that there are a lot of overburdened camels out there and companies have no business flinging straws around for no good reason, ESPECIALLY when they’re doing so in a fashion that’s black letter unethical.

          FB needs its feet held to the fire here for behaving in an unconscionable fashion. We can reasonably pose the question that this article poses, acknowledging that a definitive answer isn’t within our reach. Those two things are not mutually exclusive – if they were we’d need to go ahead and shut down about half of what science is working on these days.

          Beyond this, I don’t really expect much to change with Facebook. They are what they are and this is likely to blow over. However, PNAS is about to enter a period of SEVERE scrutiny and if major heads don’t roll their credibility in the academic community is going to be shot.

          Whether it had any effect or not, this study was needless, unethical and irresponsible.

        • I’m afraid this still makes zero sense to me. Again, you’re blaming Facebook for throwing straws that everybody throws every single day, and treating it like it is tantamount to goading children into suicide, all on the premise that it is a lie that something that cannot be measured isn’t real?

          Frankly the methodology seems both sloppy and unnecessary (this data could have been found without deliberate manipulation), so some pushback against PNAS makes sense, but painting Facebook as monsters over this issue means you have to blame pretty much everyone else for everything they do because it could theoretically cause someone to kill themselves. I simply disagree that the question posed is reasonable, at all. It is a giant leap unless you ask the exact question about everyone else who provides anything considered negative.

        • John, forgive me, if you will, a moment of reductio ad absurdum, which I employ in the hope of illustrating something about the rhetorical and logical structure of your argument. People murder other people every day, so there’s no reason to criticize Mark Zuckerberg if he murders someone.

        • That is absurd, yes, and not remotely the structure of my argument. You haven’t demonstrated that anybody was killed because of Facebook’s experiment, and yet you have accused Facebook of being potentially complicit in the deaths of children. My point is not “don’t criticise Facebook”, it is “don’t make up completely outrageous, emotionally loaded claims and then do some irrelevant math to make it look like there’s a connection to something that could be connected to anything else on the planet”.

        • John, let me try and answer your question. I think what it comes down to is that “scientific research” has a particularly loaded history. From the Tuskegee syphilis experiments to LSD research by the CIA, organizations have gotten away with some pretty unethical studies “in the name of research”. That’s why we have agreed-upon regulations to ensure that this kind of research is consensual and doesn’t affect unwilling participants.

          We don’t (yet) apply the exact same standards to other industries, like advertising, but we do have regulations on what kinds of ads can be shown in what places (no cigarette ads, no junk food in schools, etc). We also expect ads to be clearly distinguishable, giving us a sense of choice in the matter, as opposed to subliminal imagery and “brainwashing.”

          There’s also the fact that advertising companies aren’t entering into an ongoing contract with individuals by asking them to accept their Terms of Service. Ads are one-off experiences, while being a Facebook user is an ongoing agreement.

          It’s similar to the whole terrorism issue. Yes, people are far more likely to get killed in a car accident than a terrorist attack, but because of the context (accident vs. malicious intent), we place far more effort on preventing the latter. Similarly, a death “in the name of research” would seem to carry more weight than something that’s just an ordinary day’s business.

  4. I don’t believe that Facebook customers are a typical or random group of people. Sure, many are, but what personality types like to send and receive judgements in regard to their personal lives? I think their use of Facebook in the first place begs the question. And I would guess that this was well analyzed in the business model.

  5. People who buy beer know they are buying beer and if they drink enough they will get drunk. The Subjects of the Facebook Experiment did not know that the people at Facebook were trying to make them sad. They had no clue. Beer drinkers are trying to be happy on purpose. Something about “volition.”

    • Facebook wasn’t trying to make people sad, Facebook was tweaking the balance of positive and negative news items to see what difference, if any, that would make on subsequent behaviour. You’re blaming them for an enormous and unforeseeable reaction to negative news items that they didn’t make up, and not acknowledging that they also offered positive news items which have just as much likelihood of having saved lives. People who go on Facebook know that they have a chance of seeing something positive or negative, whether it’s placed there by Facebook’s tweaked algorithm or by one of their own contacts posting something. Are we going to blame regular Facebook users who are overly negative for the potential suicides they might have caused? And what about CNN and other news organisations who actually do exist to make people sad and anxious on a daily basis?

  6. I’d sure like to read the critiques offered by PNAS’s peer reviewers of this “study.”

    • Indeed. My exposure to the peer review process in the past failed me in that I didn’t remember that aspect. What I’m wondering now is if there’s any way in hell someone with standing could sue to ascertain whether or not minors were subjected to human experimentation without parental consent and drag those critiques in by subpoena.

  7. What Facebook did was almost certainly professionally unethical (I’m not an expert on the ethics of social science research, so I’ll defer on this one to people who are), and I personally consider it unethical as well. But Frank’s math doesn’t hold up, for the reasons that JohnMWhite points out.

    Any population of ~690,000 people with an average suicide rate of 0.01% will have about 69 suicides. This is true not just of Facebook users, but of readers of the New York Times, Fox News watchers, anime otaku, Disneyland visitors, and so on. Should we criticize the New York Times or Fox News for writing depressing headlines that push someone over the edge? Or call Disneyland’s policy of rotating characters through autograph lines for safety reasons “unethical” if some poor kid gets so depressed that he kills himself after failing to get an autograph?

    Facebook’s research into emotional manipulation is bad enough as it is. There’s no need to cheapen the reality of suicide in order to make Facebook look even worse.

    • I’ve been silent while thinking that out more, and I’m inclining more to agreeing with the point both JohnMWhite and you point out, but just inclining. As usual, I’m aware my non-pro dirty napkin math won’t hold up, but my gut (by no means my best organ of reasoning) still says something stinks in the numbers.

      If, as you and JMW point out, any population of N with an average suicide rate of 0.01% will turn up ~69 incidents, then Facebook, and by extension all the other examples stated, may either turn up the occasional extreme of 1 to 70 (since we’re dealing with approximations), or FB, by itself, without manipulation, would logically be a somewhat smaller factor in that rate than the rate driven in part by all the other culprits. The difference as I see it is that the “research” and methods conducted by media aren’t held to the same standards, scrutiny, and regulation as the kind of research conducted by Facebook in this case. As others have noted, there were more solidly designed studies they could have run, and certainly one far more solid and ethical study they *did* run previously. If we consider the stats as some kind of trending baseline, Facebook’s unethical study (which, incidentally, as has been pointed out here and elsewhere, doesn’t appear to have controlled at all for minors included in the data without parental consent) would only serve to exacerbate the already existing risk.

      Media might be legally permitted to hide behind a facade of “no regulations to cover this,” but Facebook, by engaging in military-funded, unethical research with faulty methodology and what appears to be some really shady quasi-IRB “review” (if one could call it that), traipsed needlessly into regulated research territory to, as I see it, no public good. Was that worth the risk of even one suicide? I appreciate that we differ, and respect your superior professional education and training. I just wish I had more than instinct to suggest that there’s something distinctly more nefarious, or at least wantonly negligent, in this case.

      The statistical reasoning may serve to prove me wrong on the number side. I accept that, and even concede that point. But I just can’t concede that Facebook wasn’t taking a risk it had neither business nor right taking. Pre-experiment, Facebook’s influence on suicide rate was presumably x. During experiment, I suggest that influence became x+y. Post experiment, it went back to x. +y bugs the ever living crap out of me. My math after all these years is rustier than hell (evident in my originally misplaced decimal), but if Suicide Rate is a function of x, S(x), I just can’t help suppose that S(x+y) represents an increase, not a decrease, in S. If infinitesimal, no biggie, but worth the risk? What if, truly, what if it resulted in S increasing to S+1?
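      To make that +y worry concrete, here is a toy calculation. The deltas are entirely made up, purely hypothetical nudges to each participant’s risk during the experimental week; the point is only that any positive y can move the expected count in one direction.

      # Toy model only: delta is a hypothetical added weekly risk per participant.
      study_n = 693_003
      baseline_weekly_risk = 0.0001 / 52       # napkin yearly rate spread over one week

      for delta in (0.0, 1e-8, 1e-7, 1e-6):    # the hypothetical "+y"
          expected = study_n * (baseline_weekly_risk + delta)
          print(f"delta {delta}: expected suicides that week ~ {expected:.2f}")

      If the real delta was zero, nothing changed; if it was anything above zero, S went up. That, however small the number turns out to be, is the whole of my +y complaint.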

      I’m sorry it appears I’m cheapening the argument. I’m actually seeing my approach as taking the sanctity of life side. Odd for me, I know.

      • Of course Facebook was taking a risk. But is it any different from the risk that you or I or Sam or anyone takes when we title a blog to be as impactful as possible? We as writers do everything in our power to influence others, to invoke or suppress emotion and logic in order to convince others a) to listen to us at all and b) to, if not agree, at least to understand where we’re coming from. Manipulating other people is what writers do.

        There’s a chance that something I’ve written about climate was enough to throw someone so deeply into despondence that he or she took his or her own life as a result. Gods know that I’ve made myself depressed enough times over the years. And if I ever found out that I was someone’s last straw, I’d be devastated. But does that mean I shouldn’t ever write about climate for fear of causing someone to commit suicide?

        Put another way, Cat’s Cradle by Vonnegut and 1984 by Orwell are bleak, bleak, bleak. I’d be amazed if each hadn’t been someone’s last straw – does that mean Vonnegut or Orwell should never have written them? Should the movies Brazil or The Wall never have been filmed?

        I don’t think this issue is simple enough that it can be reduced to black and white, and that’s what it seems like you’re trying to do.

        Facebook fucked up and should be smacked as a result of this one hell of a lot harder than they actually will be. Ethics rules exist for a bunch of damn good reasons, and Facebook seems to have simply ignored them completely. But I really think you’re too far out on a limb here.

        • But is it any different from the risk that you or I or Sam or anyone takes when we title a blog to be as impactful as possible?

          Yes. Absolutely. Our purpose is to inform or persuade, not to intentionally and actively manipulate. If I walk into a room of 100 people and I know one of them is depressed, it’s one thing to tell everybody something frustrating about our political system in hopes that I might spur greater awareness and motivate action for change and another entirely to see if I can depress everyone further just to see how they react.

          Further, and more to the point, everyone reading S&R understands (or reasonably should understand) our agenda. When you enter this room, attempts will be made to inform you, to persuade you, to make you think, perhaps. There is an implied contract and a set of ethics that attends the forum. But what if I violate those ethics secretly? What if I post a piece promoting the works of ABC, Inc., and unbeknownst to my readers I have taken a cash payment for doing so?

          What FB did is akin to that – they secretly manipulated us and did so in ways that violated the implied contract of the forum. Only they did so in a way that was potentially harmful to their most vulnerable users.

          So yes, what they did and what we do is VERY different.

        • What we’re fundamentally disagreeing over here is whether or not outcomes, especially unintended outcomes, can make something more or less unethical. Intent and execution are the dominant factors in determining whether or not something is ethical. Outcomes are secondary, unless the intended outcome is something unethical. Had Facebook been trying to manipulate people into committing suicide, we wouldn’t be having this conversation and Facebook would have had their servers seized by the FBI by now.

          Frank’s argument is that bad unintended outcomes can make ethical research unethical, or make unethical research even MORE unethical. By that logic, what we’re doing at S&R is no different than what Facebook does, because our implied contract and set of ethics doesn’t matter. Only the outcome matters, because the outcome has the potential to overwhelm the ethics and the intent. And if an unintended outcome is terrible, then logically, that means the implied contract and ethics were actually unethical.

          You’re essentially saying that Facebook is responsible if someone kills him or herself as an unintended consequence of something they do, but that S&R is not responsible if someone does the same as an unintended consequence of what we write. Either unintended consequences can corrupt good ethics, or they can’t. Choose one – you can’t have it both ways.

        • No. Frank is arguing that in this case unethical research may have enabled bad outcomes. It’s unethical regardless.

          It doesn’t matter whether or not FB was TRYING to kill people. It wasn’t behaving with malice aforethought, but it was acting negligently.

          In other words, everyone is trying to establish linkage between ethics and outcomes. No. They are independent issues that in some cases – perhaps this one – overlap.

        • Let me clarify a bit.

          NOT: FB was unethical because people might have died.

          NOT: People might have died because FBs behavior was unethical.

          YES: FBs behavior was unethical, and in addition it may have contributed to people harming themselves.

          Am I making sense?

        • To Sam’s point with Not, Not, Yes…as usual, when someone is clear and pithy they make my ham-fisted and prolix argument far more accessible. That’s just what I meant and why I’m having a hard time budging from the point, lame math skills aside 🙂

        • You’re both making sense, but I’m not convinced that the original post and the subsequent discussion actually make that point.

          Here’s the nutshell version of the argument that I see: FB’s behavior was unethical, but a hypothetical and undemonstrated unintended consequence makes it even more unethical.

          To use an example, let’s say someone tests a self-driving car on a city street before it’s truly ready for such a test. This would be unethical, even if the car performs perfectly. Then that self-driving car hits and kills a drunk who wanders into oncoming traffic.

          It seems to me that you’re both saying that the fact a drunk was killed makes the initial unethical decision worse, even though there’s a really good chance that the drunk would have been killed anyway, just by a car driven by a person instead of a computer.

          You both say that wasn’t the intent, so I’ll take you both at your word and drop it at this point. Perhaps I’m simply too brain fried from a month of traveling to read coherently.

        • In short, no. Ethicality is zero percent correlated with outcomes. The reader may tend to conflate the two in a case like this, and I get how that happens. I hope I didn’t contribute to confusion there.

    • I always expect Sam to make sense. I often wonder about myself 😉 If sense is being made, it’s also possible that, however sensible, the argument might just be unpersuasive. I can’t speak for Sam, but I’m personally okay with that. I’m trying to get better at the persuasive aspect of my writing, but it needs work. Sam, on the other hand? If I weren’t wearing my tinfoil hat, I’d probably feel compelled to mail him presents and money for some unknown and compelling reason.

  8. Re: limb, at least we can agree it wouldn’t be the first time for me.

  9. Don’t forget that there’s a high % of Goth, Dark, Emo Blah people on Facebook who actually LIKE to read depressing news on their feed. As strange as it sounds. Just sayin’

    • Fonzie, as one of that number, I can only approve the perverse/subversive observation you make 😉 “WTH, what’s up with all these kittehs and happy posts. That tears it!” Weird, but conceivable.