
Should You Kill Your Child To Save The World?


A deranged alien overlord descends his invisible craft and enters your living room. He presents you with a choice. Either you kill your child or he, being the sadistic and strangely powerful space traveler that he is, will cause a million others on Earth to slip into a coma and never wake up. No one will know he exists either way.

Let’s assume you can trust him to keep his word — if you choose Option A, he will hop back on his spaceship and fly off into the starry void. Let’s also sidestep the obvious question — “What would you do?” — and ask a more philosophically penetrating one: In this scenario, are you morally obligated to kill your child?

Why didn’t I ask the obvious question? Simple: the way we humans actually behave is not necessarily how we should behave; there’s an unbridgeable conceptual gap between what we in fact do and what we should do. We’ve even assigned a name to this particular error in reasoning: the is-ought fallacy.

So what’s your answer? In the scenario outlined above, are you morally obligated to kill your child?

If you said yes, then Millian utilitarianism, the view that you should always choose the action that maximizes pleasure and minimizes pain, is vindicated. If you said no, then this classical version of utilitarianism — sometimes called “act utilitarianism” — is falsified by the sheer strength of your intuition.

Intuitions Affect Morality in Addition to Reason

Wait a second here. What’s just happened? Apparently, we have just evaluated a moral theory by judging it against our moral intuitions. The theory — you should always choose the action that maximizes pleasure and minimizes pain — is either upheld or shown to be deficient based on the results of our thinking.

But how is doing so intellectually legitimate? Shouldn’t our theories correct our intuitions, not the other way around? Phenomenologically, we take the sun to move; theory tells us it doesn’t. Theory has it right. So using intuition to correct theory really does seem like a rather strange evaluative method.

But really all we’ve done is construct a scenario in which the theory’s guiding principle can be tested, and then we’ve tested it. Not like with test tubes or in a laboratory, obviously. But in the laboratory of the mind. We have used a thought experiment to get us to our answer.

The results of our experiment are: if you said yes (that is: if you said it is morally obligatory to kill your child) then utilitarianism survives; if you said no (that is: if you said it is morally permissible to refrain from killing your child) then utilitarianism fails.

For many, this approach to moral reasoning will be confounding. Here’s their underlying concern: How can it be that responding to an imagined scenario goes any distance toward supporting or disconfirming an ethical theory? How can our moral intuitions supply philosophically legitimate support for a position? Or supply a philosophically legitimate argument against a position?

Where Moral Intuitions Come From

To begin to answer this, we need to get clearer on what moral intuitions are. Yale University philosopher Shelly Kagan defines them as “strong, immediate reactions to the description of real or imaginary examples.”

In our sadistic alien scenario, if your moral intuition tells you it is ethically permissible to let him carry out Option B, consigning a million random people to comas they will never wake from, then you would “discover” that utilitarianism does not specify the course of action you immediately and strongly think is right. The discovery is that the theory may be leading you astray.

Discovery is one of the hallmarks of science. It is a crucial component of proper investigative procedure. A thought experiment, when considered and answered, is supposed to deliver an informed judgment. The result is a kind of discovery.

How so? Because our moral sense, which is presumably built from prior moral instruction and from countless instances of moral reflection over the years, comes alive when confronted with a scenario to which we must respond. When that moral sense is activated and finally feeds us an output, the response is the product of a system we have no particular reason to believe gets things wrong.

So investigating our own thoughts on the matter is a bit like investigating the world. The scientist observes the world and discovers things; we observe our responses to thought experiments and discover things. The same scientific legitimacy underwrites both ventures — or does it?

What’s funny about moral intuitions is that I’ve just described them as though they’re empirical phenomena. But if this is right then theirs is a curious type of empiricism: they are empirical truths you can discover by sitting in your armchair rather than going out into the world to investigate. They are empirical truths you can access by using only your mind and not your senses.

When I put it that way, our intuitions start to seem like they’re the very opposite of empirical. What gives? Which is it?

Some Claims Rest on Experience, Some Don’t

A distinction can help us here. Consider two types of claim: a priori and a posteriori.

An a priori claim is one that is independent of experience. For example, to figure out what the concept bachelor is, you don’t need to go interview anyone, and you don’t need to “collect” a bachelor and study one. You can just sit there in your chair and think about what goes into the idea: (a) being male; (b) being unmarried; and (c) being of a certain age. (This last one’s important: my five-year-old isn’t a bachelor.)

An a posteriori claim is based in experience. These are the empirical claims; the ones the scientific method is supposed to deliver for us.

Which camp do moral intuitions fall into? Both, actually. They arrive from the armchair, the way a priori judgments do, yet the moral sense that produces them was built up through experience. So they stem from a process that has both a priori and a posteriori aspects.

How to Know If Your Intuitions Are Valid

Let’s return to the example. If you say it is permissible to save your child and condemn the million randos to death, you get that result from a process that should be very reliable. If the intuition is taken seriously, it calls into question utilitarianism’s ability to adequately guide your ethical decision-making.

But what if there’s something faulty with the way the thought experiment is built? What if there’s something running interference, preventing the correct intuition from forming and surfacing?

Here is where thought experiments really shine. If you have any reason to suspect that your intuitions about a moral matter are skewed or amiss, you can run a parallel thought experiment and see how you’d respond. The idea here is to construct the second thought experiment in such a way that it eliminates the potential distortion.

What is it about our original thought experiment that might be leading us astray? How about the fact that one of the targets is a close family member? It might be that we’re answering a certain way not because of the moral merits of the position but because of an attachment to a family member that overrides moral considerations.

Perhaps it is perfectly legitimate, morally speaking, to prioritize one’s own family. I’m not discounting that. In fact, that’s a position I hold. But utilitarianism’s self-conception is diametrically opposed to viewing our ethical commitments this way. One of its tenets is that we ought to weigh everyone’s well-being equally.

So our parallel thought experiment could remove the family member from the picture. Or it could keep the family member but place them in the opposite target group. Here’s what I mean.

The first parallel option is to construct a scenario in which you happen to come across someone’s child — someone you don’t know; a child you’ve never met, let’s say — and the alien puts the same question to you as above. Either kill that child or he, the alien, will kill a million others.

The second parallel option is to fit your child into the second target group. The alien declares: “Kill that random person or I’ll kill a million others, including your child.”

Why do these parallel thought experiments help us? Because they remove a potentially disruptive variable that might have been leading us astray in our original scenario. If your response to the original scenario is that it’s morally permissible to let the alien kill a million fellow human beings, but your response to these new scenarios is that it’s not permissible to let the million die, that tells us something valuable: your original intuitions might be misleading you.

Why? Because your moral intuitions might not be systematic. If an intuition is going to provide support for a moral principle or theory, it must be systematic. That is, we must be able to apply it to all the cases in which it is relevant.

It seems the intuition that it is permissible to allow the alien to kill a million so you can spare the one is not systematic, given that in parallel cases — that is, cases that are conceptually similar — you would judge that same choice to be wrong.

Reasoning from Part to Whole

This is why asking the obvious question — “What would you do?” — isn’t philosophically significant. We’re asking a normative question, not a descriptive one. We’re trying to figure out what the correct view of morality is; we’re not trying to predict what you would in fact do in that circumstance.

If your original intuition is guided by what you would in fact do — because you have a totalizing commitment toward protecting your children — rather than what it is morally right to do, then the intuition has not risen to the level of being a genuine ethical reason. It’s just an instinct that may or may not be ethically legitimate.

On the other hand, it could be that we are morally justified in protecting our loved ones from harm even in cases in which doing so could lead to considerably worse social outcomes. If this is the case, the original thought experiment wasn’t infected with any distortions. On this understanding, it does make a genuine ethical difference whether the target is someone we care about deeply or someone we don’t know at all.

What would this mean? That the parallel thought experiments weren’t so parallel after all.

Is one of your intuitions telling you now that you wish the alien would come and bring a thousand of his friends, if it would mean the end of this long and winding essay? Ethics is hard, I know. But in the end it’s one of the most important areas of life we can look into.

This article is reprinted, with permission, from Arc.