Was Facebook’s “Mood Experiment” Partly About User Retention?

If you’ve been online in the past 72 hours, you’re probably aware of the controversy surrounding a piece of 2012 research conducted on Facebook by researchers from Facebook, the University of California, San Francisco, and Cornell. The research sought to investigate the phenomenon of “emotional contagion” on a massive scale.

Since the publication of the research, a debate has been raging about whether it was ethical (the consensus is no). Users were not informed about the experiment, which showed slightly different versions of the News Feed (more positive vs. more negative) to a limited number of people to see how the altered feed affected their own posts. The study took place over approximately one week.

The findings showed that “Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness . . . When positive expressions were reduced [in the News Feed], people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.”

Amid the controversy, the institutions and researchers involved are scrambling to explain themselves. Adam Kramer, a Facebook researcher and one of the study’s three authors, has defended the study in several public statements, one of which was published on the Forbes site. Here’s an excerpt of Kramer’s statement:

The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook. We didn’t clearly state our motivations in the paper.

Gigaom’s Matthew Ingram has an excellent backgrounder and roundup of reaction, which ranges from sarcasm to something like moral outrage. There are also a few defenders of the study in the media.

In my opinion, there’s no question the study was unethical. This wasn’t an analysis of pre-existing data: the researchers actively manipulated what users saw, and users weren’t asked to opt in or given any way to opt out of the research. According to Forbes, Facebook added “research” to its Data Use Policy only after the experiment had been conducted:

Critics and defenders alike pointed out that Facebook’s “permission” came from its Data Use Policy which, among its thousands of words informs people that their information might be used for “internal operations,” including “research.” However . . . in January 2012, the policy did not say anything about users potentially being guinea pigs made to have a crappy day for science, nor that “research” is something that might happen on the platform.

Another thing that struck me, however, is that Facebook’s interest in the data might not have been entirely academic. Kramer’s statement above suggests as much: “At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.”

What if Facebook was partly trying to determine how to boost user engagement by manipulating posts in the News Feed? If so, does that seem more or less creepy than manipulating the News Feed for purely academic reasons?




About the author

Greg Sterling
Contributor
Greg Sterling is a Contributing Editor to Search Engine Land, a member of the programming team for SMX events and the VP, Market Insights at Uberall.
