Facebook’s news-feed emotional manipulation was totally ethical: Chris MacDonald

The news feed is manipulated all the time

Facebook CEO Mark Zuckerberg (Photo: Paul Sakuma/AP)

It came to light recently that Facebook, in collaboration with some researchers at Cornell University, had conducted a research study on some of its users, manipulating what users saw in their news feeds in order to see if there was an appreciable impact on what those users themselves then posted. Would people who saw happy news then post happy stuff themselves? Or what? Outrage ensued. After all, Facebook had intentionally made (some) people feel (a little) sadder. And they did so without users’ express consent. The study had, in other words, violated two basic rules of ethics.

But I’m not so sure there was anything wrong with Facebook’s little experiment.

Two separate questions arise, here. One has to do with the ethics of the Cornell researchers, and whether Cornell’s ethics board should have been asked to approve the study and whether, in turn, they should have approved it. The other has to do with the ethics of Facebook as a company. But this is a blog about business ethics, so I’ll stick primarily to the question about Facebook. Was it wrong for Facebook to conduct this study?

With regard to Facebook’s conducting this study, two substantive ethical questions must be dealt with. One has to do with risk of harm. The other has to do with consent.

Let’s begin with the question of harm. The amount of harm done per person in this study was clearly trivial, perhaps literally negligible. Under most human-subjects research rules, studies that involve “minimal” risk (roughly: risks comparable to the risks of everyday life) are subject to only minimal review. Some scholars, however, have suggested a category of risk even lower than “minimal,” namely “de minimis” risk, which includes risks that are literally negligible and that hence don’t even require informed consent. This is a controversial proposal, and not all scholars will agree with it. Some will suggest that, even if the risk of harm is truly tiny, respect for human dignity requires that people be offered the opportunity to consent — or to decline to consent — to be part of the study.

So, what about the question of consent? It is a fundamental principle of research ethics that participants (“human subjects”) must consent to participate or to decline to participate, and their decision must be free and well-informed. But that norm was established to protect the interests of human volunteers (as well as paid research subjects). People in both of those categories are, by signing up for a study, engaging in an activity they would otherwise have no reason to take part in.

Having someone shove a needle in your arm to test a cancer drug (or even having someone interview you about your sexual habits) is not something people normally do. We don’t normally have needles stuck in our arms unless we see some benefit for us (e.g., to prevent or cure some illness in ourselves). Research subjects are doing something out of the ordinary — subjecting themselves to some level of risk, just so that others may benefit from the knowledge generated — and so the idea is that they have a strong right to know what they’re getting themselves into.

But the users of commercial products — such as Facebook — are in a different situation. They want to experience Facebook (with all its ups and downs), because they see it as bringing them benefits, benefits that outweigh whatever downsides come with the experience. Facebook, all jokes aside, is precisely unlike having an experimental drug injected into your arm.

Now think back, if you will, to the last time Facebook engaged in action that it knew, with a high level of certainty, would make some of its users sad. When was that? It was the last time Facebook engaged in one of its infamous rejiggings of its layout and/or news feed. As any Facebook user knows, these changes happen alarmingly often, and almost never seem to do anything positive in terms of user experience. Every time one of those changes is made (and made, it is worth noting, for reasons entirely opaque to users), the internet lights up with the bitter comments of millions of Facebook users who wish the company would just leave well enough alone. (This point was also made by a group of bioethicists who pointed out that if Facebook has messed with people’s minds, here, they have done so no more than usual.)

The more general point is this: it is perfectly acceptable for a company to change its services in ways that might make people unhappy, or even in ways that are bound to make at least some of its users unhappy. And in fact Facebook would never have suffered criticism for doing so if it had simply never published the results. But the point here is not just that they could have gotten away with it if they had kept quiet. The point is that if they hadn’t published, there literally would have been no objection to make. Why, you ask?

If Facebook had simply manipulated users’ news feeds and kept the results to itself, this process would likely have fallen under the heading of what is known, in research ethics circles, as “program evaluation.” Program evaluation is, roughly speaking, anything an organization does to gather data on its own activities, with an eye to understanding how well it is doing and how to improve its own workings. If, for example, a university professor like me alters some minor aspect of his course in order to determine whether it affected student happiness (perhaps as reflected in standard course evaluations), that would be just fine. It would be considered program evaluation and hence utterly exempt from the rules governing research ethics.

But if that professor were to collect the data and analyze it for publication in a peer-reviewed journal, it would then be called “research” and hence subject to those stricter rules, including review by an independent ethics board. But that’s because publication is the coin of the realm in the publish-or-perish world of academia. In academia, the drive to publish is so strong that — so the worry goes, and it is not an unsubstantiated worry — professors will expose unwitting research subjects to unreasonable risks, in pursuit of the all-important publication. That’s why the standard is higher for academic work that counts as research.

None of this — the fact that Facebook isn’t an academic entity, and that it was arguably conducting something like program evaluation — none of this implies that ethical standards don’t apply. No company has the right to subject people to serious unanticipated risks. But Facebook wasn’t doing that. The risks were small, and well within the range of ‘risks’ (can you even call them that?) experienced by Facebook’s users on a regular basis. This example illustrates nicely why there is a field called “business ethics” (and “research ethics” and “medical ethics,” and so on). While ethics is essential to the conduct of business, there’s no particular reason to think that ethics in business must be exactly the same as ethics in other realms. And the behaviour of Facebook in this case was entirely consistent with the demands of business ethics.

7 comments on “Facebook’s news-feed emotional manipulation was totally ethical: Chris MacDonald”

  1. It is no risk until some emotionally unbalanced Facebook user commits suicide

    • Mike, the exact same thing can be said about any of the times Facebook adjusted its algorithm or layout.

      • This article is like deciding between eating an apple vs a cake. One is crispy while the other is delicious. One is a company that has employees jumping off of their roofs, while the other is like a philosopher – an ethicist if you will – an ethicizer if you won’t.

  2. If Chris is of the opinion that Facebook’s emotional manipulation was ethical from the point of view of _business_, then the title of his piece should reflect this.

  3. There is a difference between doing things that may make people sad as a consequence of an action and committing an action specifically to make people sad, or to attempt to. This article is poorly reasoned.

  4. What the hell is an ethicist?

  5. Chris, I find a lot of your points compelling. My main hesitation is about the manipulation of the news feeds. I don’t know what that means. If it merely means that the researchers tracked what was happening in the news feeds based on their normal settings and algorithms, and then tracked subsequent posting behaviour of recipients in the contrasting cases of having seen ‘sad’ versus ‘happy’ stories, then fine. Watching what people do, and birds, and plants, is how we advance knowledge about humans and birds and plants. But if the researchers ‘planted’ contrived items in order to sharpen the independent variable in their research, then the Facebook news-feed readers would have been spending their time reading fake content. That isn’t the sort of thing I want happening to me — if someone wants some of my time to do what I would not normally be doing anyway, they need to ask my permission.
