The following moral dilemmas come from Bernard Williams’ essay “A Critique of Utilitarianism”, in the book “Utilitarianism: For and Against”. They are presented as part of Williams’ specific objections to Utilitarianism. However, I want to use them here to talk more generally about hypothetical moral dilemmas as a tool of thought (an ‘intuition pump’, to borrow Dan Dennett’s term) in philosophy. Here are the dilemmas, as stated in Williams’ essay:
Jim finds himself in the central square of a small South American town. Tied up against the wall are a row of twenty Indians, most terrified, a few defiant, in front of them several armed men in uniform. A heavy man in a sweat-stained khaki shirt turns out to be the captain in charge and, after a good deal of questioning of Jim which establishes that he got there by accident while on a botanical expedition, explains that the Indians are a random group of the inhabitants who, after recent acts of protest against the government, are just about to be killed to remind other possible protestors of the advantages of not protesting. However, since Jim is an honored visitor from another land, the captain is happy to offer him a guest’s privilege of killing one of the Indians himself. If Jim accepts, then as a special mark of the occasion, the other Indians will be let off. Of course, if Jim refuses, then there is no special occasion, and Pedro here will do what he was about to do when Jim arrived, and kill them all. Jim, with some desperate recollection of schoolboy fiction, wonders whether if he got hold of a gun, he could hold the captain, Pedro and the rest of the soldiers to threat, but it is quite clear from the set-up that nothing of that kind is going to work: any attempt at that sort of thing will mean that all the Indians will be killed, and himself. The men against the wall, and the other villagers, understand the situation, and are obviously begging him to accept. What should he do?1
And:
George, who has just taken his Ph.D. in chemistry, finds it extremely difficult to get a job. He is not very robust in health, which cuts down the number of jobs he might be able to do satisfactorily. His wife has to go out to work to keep them, which itself causes a great deal of strain, since they have small children and there are severe problems about looking after them. The results of all this, especially on the children, are damaging. An older chemist, who knows about this situation, says that he can get George a decently paid job in a certain laboratory, which pursues research into chemical and biological warfare. George says that he cannot accept this, since he is opposed to chemical and biological warfare. The older man replies that he is not too keen on it himself, come to that, but after all George’s refusal is not going to make the job or the laboratory go away; what is more, he happens to know that if George refuses the job, it will certainly go to a contemporary of George’s who is not inhibited by any such scruples and is likely if appointed to push along the research with greater zeal than George would. Indeed, it is not merely concern for George and his family, but (to speak frankly and in confidence) some alarm about this other man’s excess of zeal, which has led the older man to offer to use his influence to get George the job… George’s wife, to whom he is deeply attached, has views (the details of which need not concern us) from which it follows that at least there is nothing particularly wrong with research into CBW. What should he do?2
Initial thoughts
- These kinds of dilemmas (often pejoratively labeled “life-boat”, “trolley”, or “flag pole” scenarios, because of some very old and very well-worn traditional hypotheticals) make a subtle but very powerful appeal to intense feelings, like existential fear, or horror, or outrage. That, in my view, transforms this problem into a psychological one, not a philosophical one. We’re asking the person considering the situation to empathize with George and Jim, and then to consider whether we might be able to overrule our own emotions, in favor of some sort of technical or moral calculus that renders the decision more ‘justifiable’. But what is it, exactly, that makes a choice grounded in emotion less appropriate than a choice grounded in some form of moral calculus? In other words, we’re smuggling in at least one moral judgement already, in the formulation of these hypothetical scenarios. How do we decide which hypotheticals are the ones that will get us to the “true” moral answers? This seems to me to be the doorway to an infinite regress.
- There is something very odd about these new sorts of hypotheticals, that the traditional ones don’t seem to share: they absolve the non-player characters of any moral responsibility of their own, or even try to lever that responsibility entirely onto the player character by implication. In the George case, for example, the scenario doesn’t simply suggest that George may have the added incentive of the happy coincidence of impeding the progress of chemical weapons manufacturing (an oddly morally suspect thing to do, in its own right, but that’s a separate question); the story seems to suggest that George would himself bear responsibility for the actions of that energetic young weapons chemist also aiming for the job. But why? If we’re already stipulating to the notion of some “real” moral responsibility, then by what system of morals do this anonymous chemist’s responsibilities suddenly become George’s? I suppose, if you’re Christian, you could try to justify it by some sort of appeal to substitutionary atonement (if it was good enough for Jesus, then it’s good enough for me). But that seems rather weak.
- These scenarios attempt to put you into a situation in which morality couldn’t possibly apply anyway. Take the Jim scenario. The guns are already out. Someone is already going to get killed. By the time we get to bullets flying, morality is sort of moot. Morality only matters where there are real choices to be made. But in the Jim scenario, all the serious choices have already been made. The only thing left to do is to find a way to escape the situation yourself, with as little scarring as possible. And then seek psychological help. Action movies often depict this particular dilemma as one man holding a gun to your head and telling you he won’t kill you if you kill another man (for an excellent and horrifying example, rent the movie “Lord of War” with Nicolas Cage… surprisingly, a rare piece of good acting on Cage’s part).
- An important point implied by my Nicolas Cage reference is the fact that these scenarios exist “in a vacuum”. In the Jim scenario, Jim “finds himself” in a square. Like, he just happened to wake up there, or wandered through the jungle in a haze and suddenly realizes where he’s at. But this isn’t how people end up in incredibly violent situations. Those situations are, necessarily, the end of a long string of minor but very bad choices, which typically culminate in these kinds of impossible-situation horror stories. In the George case, George’s options are forcibly and very artificially limited. If George is so physically frail, how did he manage to make it through university in the first place? If working as a construction worker is too much for him, why not work as a water or soil purity tester, or a dispatcher, or an administrator, or something like that? Those sorts of jobs have very similar (and sometimes much larger) salaries compared to entry-level research assistant jobs.
In spite of these objections, I’m going to task myself with evaluating the two scenarios according to Utilitarianism, taking into account the criticisms from Williams in “Utilitarianism: For and Against”, and from E. J. Lemmon, in his essay “Moral Dilemmas”.
First, my Kobayashi Maru ground rules:
- No Deus Ex Machina. I’m not going to import anything into the scenario that isn’t already present (such as, say, George getting a call from an old friend offering him a better job, or Captain Pedro having a congenital heart condition that flares up just as he hands Jim the gun, or rebel forces coming out of the woods to overrun the captain and his men).
- Courtroom Discipline. Where Williams has taken license to speculate on future outcomes, I will take only those same liberties (the basic courtroom standard).
- No Free Will. I will grant the non-player characters no more moral agency than Williams has granted them (so no supposing, for example, that Captain Pedro has a change of heart, or that George’s wife promises never to leave George). They are basically preprogrammed obstacles, for the purposes of this analysis.
The point of these rules is to constrain myself to the first-person player character, his moral agency, and the choice he faces. If I took liberties violating these rules, I wouldn’t actually be engaging with the moral dilemma.
Analysing Jim
The easiest way to begin, for me, would be to simply brainstorm on all of Jim’s possible choices. What could he do? As Williams points out in the essay (echoing Kant), ought implies can. But what can Jim do? If we withhold consideration of the obligations, duties, rights, and outcomes for a moment, and just examine the imaginable choices, I think we can narrow the list down to this: Jim could:
- do as he’s commanded by Pedro.
- stay and refuse Pedro’s command.
- try to run away/escape.
- accept the gun from Pedro, but commit suicide.
Anything else beyond these four choices would really just roll up into one of them, as a particular instance of these categories.
I wonder if Williams would reject the last two options outright, as examples of what he described in the essay as “self-deception”. If Jim were to run off into the woods from whence he came, it would be tempting, I think, to convince himself that he was suffering from jungle delusions, or that maybe Pedro didn’t actually say what he thought he said. In the suicide scenario, the decision might be accompanied by a belief that he had no other choice, or that he would otherwise incur some responsibility that (on a less Utilitarian view) he actually wouldn’t.
But what if Jim wasn’t actually engaging in self-deception? What then? Being on a botanical expedition, surely Jim is traveling with fellow researchers, and has access to a radio back at his vehicle, with which he can summon international authorities to swoop in and overwhelm the locals? By a Utilitarian calculation, perhaps letting all the prisoners perish in order to capture Pedro and his gang is the better choice. Since Williams included this detail, I don’t see why I can’t.
However, let’s suppose, as any good impossible situation would demand, that Jim is literally lost, and must act in the moment, in some violent way. Then, why would it be beyond the pale to imagine a man like that being so panicked and distraught by the situation that he might, on an impulse, simply shoot himself to avoid the moral hazard, and the subsequent mental torture of the outcome that he imagines in that moment? The point here is not to avoid analysis of the actual choice, but to highlight a serious problem with Utilitarianism. Mill sarcastically dismisses critics who complain that there simply isn’t time to make the calculations Utilitarianism demands, or that human nature might be too frail to be as disinterested as Mill wants it to be. This scenario is sudden and violent enough that any sane person would have to agree there’s simply no telling what the outcome might be. Faced with such a situation myself, I could imagine at least contemplating exactly this option. Who would want to spend the rest of his life carrying around the psychological burden of having been forced into committing murder? Or worse, having been forced to passively watch the murder of others? Unless you’re some superhuman Elie Wiesel or Aleksandr Solzhenitsyn type, you’re probably, at a minimum, not going to be able to continue your career in South American botany. Though, I suppose if Jim were a highly-trained covert ops soldier, acting under the cover story of a research botanist, then the choice might be significantly more utilitarian in nature. But this is outside the scope of Williams’ scenario, so I can’t grant that to our player-character.
Taking this point a bit further, Williams spends a good deal of time in the essay debating whether Utilitarianism is a system for evaluating the imputed costs and benefits of a choice about to be made, or a system for evaluating the moral utility of the outcomes of choices already made. The Jim situation seems to imply the former, but when we think about scenarios like this, we’re always considering them as if the choice had already been made, and assuming (as John rightly pointed out) that we have certainty about both the outcomes and the utility of those outcomes. Let’s assume that taking Pedro’s offer is the proper choice, simply because, in the moment, it’s one life instead of 20. This obviously boils utility down to a mere numbers game. The loss of one life is better than the loss of 20. Fine, Jim must take the offer. But here’s a question for you: which one does he pick? The oldest man in the group? One of the women? The child soldier? The most aggressive rebel? The one who seems to be already injured? The one who seems to speak English? Which of these victims is the least likely to provide benefit to the village in the future? Less intuitively, which is more likely to be helpful to their rebel cause? What does utility tell us about this? How could Jim possibly have enough information, in the time allotted, to make anything like a reasonable cost/benefit evaluation? He would literally be faced with a “draw straws” choice.
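To make the problem concrete, here is a minimal sketch (in Python, with entirely invented probabilities, since Williams’ scenario supplies none) of the expected-value comparison that the “numbers game” reading implicitly asks Jim to perform:

```python
# A toy expected-utility model of Jim's choice, using invented numbers.
# None of this comes from Williams' text; the point is that even the simple
# "one death vs. twenty" comparison rests on a parameter Jim cannot possibly
# know: how likely Pedro is to keep his word.

def expected_deaths(accept: bool, p_pedro_keeps_word: float) -> float:
    """Expected number of deaths, given Jim's choice and Pedro's reliability."""
    if accept:
        # One certain death, plus nineteen more if Pedro reneges anyway.
        return 1 + (1 - p_pedro_keeps_word) * 19
    # If Jim refuses, all twenty prisoners are killed.
    return 20.0

for p in (1.0, 0.5, 0.1):
    print(f"P(Pedro keeps word) = {p:.1f}: "
          f"accept -> {expected_deaths(True, p):.1f} expected deaths, "
          f"refuse -> {expected_deaths(False, p):.1f}")

# On this single index (deaths), accepting "wins" at every probability shown.
# But the model says nothing about which prisoner to shoot, what each life is
# worth to the village, or the psychological cost to Jim himself. Each of
# those would need its own weight, and the scenario gives Jim no way to
# estimate any of them in the time allotted.
```

The arithmetic is trivial and always points the same way; everything of consequence is hidden in parameters (Pedro’s reliability, the value of each particular life) that Jim has no way of estimating in the moment.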
Getting back to my original list of objections (before I read the articles), I don’t see this situation as a moral dilemma at all. Jim has no moral responsibility for what is taking place. He came into it unawares, and Pedro and his men are in control. They are the ones with all the moral choices available to them. What Jim is doing is making a practical choice: how do I get out of this situation (1) alive, and (2) having done as little harm as possible? Note, he is not asking “why is one choice moral and another not”. He is asking a how question. Everything after that is a personal estimation of: how much psychological pain he can bear; whether he can trust that Pedro, as violent and sadistic as he is, won’t just kill the rest of the captives anyway as some sort of cruel joke; and whether Pedro won’t just kill him too, after he’s killed a captive. The locus of moral responsibility here is with Pedro, not Jim. The evil rests with Pedro, not Jim. Jim is a victim, just as much as those villagers.
Now, if Pedro had instead said, “I will let them go, if you volunteer to go to prison”, then we might actually have ourselves a genuine moral dilemma. How much of Jim’s own life is he willing to sacrifice for others? Particularly others who don’t really know him, and whom he doesn’t know. I think Mill would be more likely to take that sort of scenario as an exemplar of his concept of utilitarian choice:
“… there is this basis of powerful natural sentiment; and this it is which, when once the general happiness is recognized as the ethical standard, will constitute the strength of the utilitarian morality. This firm foundation is that of the social feelings of mankind; the desire to be in unity with our fellow creatures, which is already a powerful principle in human nature, and happily one of those which tend to become stronger, even without express inculcation, from the influences of advancing civilization. The social state is at once so natural, so necessary, and so habitual to man, that, except in some unusual circumstances or by an effort of voluntary abstraction, he never conceives himself otherwise than as a member of a body; and this association is riveted more and more, as mankind are further removed from the state of savage independence. Any condition, therefore, which is essential to a state of society, becomes more and more an inseparable part of every person’s conception of the state of things which he is born into, and which is the destiny of a human being. Now, society between human beings, except in the relation of master and slave, is manifestly impossible on any other footing than that the interests of all are to be consulted. Society between equals can only exist on the understanding that the interests of all are to be regarded equally… In this way people grow up unable to conceive as possible to them a state of total disregard of other people’s interests… They are under a necessity of conceiving themselves as at least abstaining from all the grosser injuries, and (if only for their own protection) living in a state of constant protest against them…”3
Even though the choice is a forced choice (if Pedro had not put him in the situation, he’d not have had to make it), such a choice would still be Jim’s to make, and not Pedro’s. This is because Pedro is putting the decision to murder or not murder in Jim’s hands. Even if he did not make the choice on Utilitarian grounds, it would still be a moral choice, because the locus of moral responsibility is with Jim. Williams’ original scenario puts us – as the gods-eye viewers – in the position of having to assign a quantitative value to human life, and then judge Jim based on how much value we estimate is lost. The new scenario, on the other hand, puts Jim back in the moral driver’s seat.
However, since Pedro is a non-player character, I can’t change his actions. I am only free to manipulate Jim. As such, I can’t help but come down on the side of my previous ruling. This is not a moral dilemma, but a practical one. Jim is not a proper moral agent in the original situation, only an extension of the inevitability that Pedro has already set into motion. As such, I see Jim as a casualty, not a participant in the horror.
Analysing George
As with the last case, taking the time up front to define what George could do may help to focus the problem. Williams’ retelling of the dilemma may leave us with the impression that it’s a straightforward choice, but in this case, at least at first glance, it’s not so clear, for reasons of historical accuracy. The book containing Williams’ essay, “Utilitarianism: For and Against”, was published by Cambridge in 1973. So, I am going to take the world of 1973 as the set of assumptions that Williams is unconsciously importing into this scenario, and I’m going to take the United States and the UK as the “world” into which Williams has put this man.
While it’s true that in the UK, Porton Down and Nancekuke were being “maintained” as late as 1975, it seems from credible reports that these facilities (the only facilities in the UK to produce chemical and biological weapons during WWII) were not being used to produce anything new. Nancekuke was essentially dormant, and Porton Down was being used only to study other governments’ uses of chemical weapons, and to develop defenses against biological threats like Ebola. Meanwhile, in the United States, the dreaded President Nixon unilaterally halted all government work on chemical and biological weapons in 1969. The military then converted the Rocky Mountain Arsenal chemical weapons base, using it as a disposal facility until the early 1980s. While it appears the US military and the CIA did engage in some highly secret testing programs as late as 1973, it is highly unlikely that George, even with the help of his older friend, could have obtained the security clearance needed to join the program – unless his older friend worked for the CIA. It is also true that the Soviet Union, and a number of East African and Middle Eastern states, were engaged in the manufacture and trade of chemical weapons as late as the early 1990s.
However, unless we’re willing to speculate that George’s friend was a Yemeni, Iranian, or Russian double-agent, and we’re willing to distend the moral dilemma beyond all recognition, then I think it’s safe to assume this isn’t what Williams had in mind. So, unless sickly George is prepared to take on the dangerous life of a secret agent, living a double-identity and doing clandestine work for a lab that is somehow secretly operating in violation of numerous national and international laws, then I honestly don’t understand how he could have this choice to make at all. Maybe George is the model for the Breaking Bad character?
Still, if I violate my own rules, we could put this dilemma back on solid ground. Let’s suppose that, instead of being a chemist who has difficulty finding work in chemistry (which is itself a bit odd, since chemical engineering was one of the most sought-after high-skill degrees in private industry in the 70’s and 80’s), George is a PhD nuclear physicist, and his friend is offering him a job in a government lab researching nuclear weapons technology. This, at least, is something that was still an active and well-trafficked pursuit by “legitimate” states in the 1970’s. Thus, it’s a plausible substitution: it doesn’t require us to change the dilemma in any fundamental way, and it doesn’t require us to accept an implicit assumption of Williams’ that violates well-worn facts of history, or to turn George into a spy-novel protagonist.
Given this situation, what could George now do? If I stay within the bounds of the dilemma, and my own rules, then it does seem George is left with two options: take the job, or refuse the job. At this point, it seems to me easier to understand George’s problem if we frame the dilemma in the form of Lemmon’s three main categories of Duty, Obligation, and Principle. What are George’s duties, obligations, and principles?
George’s central moral principle is stated explicitly in the dilemma: George is opposed to [nuclear] weapons, in principle. So, this would suggest, at least initially, that George should not take the job. But we don’t know anything about the job itself. Suppose he were tasked with developing technologies that neutralize warheads, or inventing ways of defending against various forms of radiation, or building defensive technologies like detectors or sensors or long-range scanners? Surely, in spite of his principles — indeed, perhaps because of them — he could take the job, if it entailed these tasks? But this is importing too much into the dilemma. Clearly, Williams’ implication is that he’d be building offensive weapons. If so, that would lead us back to recommending refusal.
In terms of obligations, the only one that seems clear is to the welfare of his family: his wife and two children. George’s wife is explicitly noncommittal on the question of weapons. So, there doesn’t seem to be any obligation present to honour her commitments. On impulse, one might think that the welfare obligation would suggest that George should take the job. But here, we’re left with a single-index, one-dimensional analysis of the situation. It’s true that the household income would increase with George’s taking the job. But – as we’ve mentioned elsewhere – at what additional cost to George’s health, both physical and mental? What if the strain of working a job that forces George to live in a state of constant mental anguish causes him to become more ill, forcing him to quit? What if his attempt to adopt toxically obstructionist habits at work (in service to his principles) causes his superiors to evaluate him poorly, getting him fired? What if the stress of all of this finally destroys the bond between George and his wife, ending in divorce? Given any of these equally possible outcomes, it seems clear that George’s obligation to his family’s welfare ought to lead him to refuse the job.
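To put the contrast plainly, here is another minimal sketch (again in Python, and again with entirely invented weights and probabilities, since Williams supplies none), comparing the income-only verdict with one that prices in the downside scenarios just listed:

```python
# A toy comparison of "take the job" vs. "refuse", with invented numbers.
# The single-index view looks only at income; the wider view adds
# probability-weighted costs for George's health, his marriage, and the
# chance of being fired for obstruction. With these particular (made-up)
# weights, the wider view reverses the income-only verdict.

scenarios = {
    "take_job": {
        "income_gain": 10.0,
        "weighted_costs": [
            (-8.0, 0.6),   # worsening health under constant mental anguish
            (-10.0, 0.4),  # strain on the marriage, possible divorce
            (-12.0, 0.3),  # fired for obstructing the research
        ],
    },
    "refuse_job": {
        "income_gain": 0.0,
        "weighted_costs": [
            (-2.0, 0.3),   # the family's current strain continues
            (-4.0, 0.2),   # marriage strain from continued hardship
        ],
    },
}

def single_index(option: str) -> float:
    """Income-only comparison."""
    return scenarios[option]["income_gain"]

def multi_index(option: str) -> float:
    """Income plus probability-weighted non-monetary costs."""
    s = scenarios[option]
    return s["income_gain"] + sum(cost * prob for cost, prob in s["weighted_costs"])

for option in scenarios:
    print(f"{option}: income-only = {single_index(option):+.1f}, "
          f"weighted = {multi_index(option):+.1f}")
```

Nothing hangs on those particular numbers; the point is rather that the conclusion is driven entirely by weights and probabilities George does not actually possess, which is exactly the trouble with treating the obligation as a calculation in the first place.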
Lastly, what are George’s duties? On the face of it, proceeding from Lemmon’s conception, one might say none, as George has not yet taken on any particular role which has encumbered him with duties (such as a politician or a policeman). However, he likely does bear some duty, in Mill’s view. If George were a well-trained utilitarian, then his “conscientious feelings” might well instruct him as to his duty:
“…The internal sanction of duty, whatever our standard of duty may be, is one and the same— a feeling in our own mind; a pain, more or less intense, attendant on violation of duty, which in properly cultivated moral natures rises, in the more serious cases, into shrinking from it as an impossibility. This feeling, when disinterested, and connecting itself with the pure idea of duty, and not with some particular form of it, or with any of the merely accessory circumstances, is the essence of Conscience…”4
And this would stem, of course, from a proper disinterested evaluation of the amount of utility to be lost and gained in the choice at hand. Bentham and Mill quantified utility (happiness) in terms of denominated currency (dollars, pounds, what have you). But as we’ve seen when looking at the two previous categories, it’s not clear why we should accept this single-index view of utility. There are lots of other factors that weigh into a “calculation” of happiness. Interestingly enough, Mill, in defending the “private” or “local” conception of utility, seems to implicitly acknowledge this problem:
“…to speak only of actions done from the motive of duty, and in direct obedience to principle: it is a misapprehension of the utilitarian mode of thought, to conceive it as implying that people should fix their minds upon so wide a generality as the world, or society at large. The great majority of good actions are intended, not for the benefit of the world, but for that of individuals, of which the good of the world is made up; and the thoughts of the most virtuous man need not on these occasions travel beyond the particular persons concerned, except so far as is necessary to assure himself that in benefiting them he is not violating the rights — that is, the legitimate and authorised expectations — of any one else…”
Worse yet, this passage seems to oppose the notion of a “social” calculation, in which George might consider the wider effect on society in a utilitarian way. But even if he were to try, he’d be in the same boat as Jim: how could he possibly estimate the overall happiness of the world, as against that of himself and his own family, in an attempt to discover his duty? Where would this information come from, and in what framework of analysis could it possibly make any sense? I am inclined, therefore, to think that Utilitarianism cannot actually help George in this regard. Not simply because there is no duty to be found in the dilemma, but because utilitarian calculus could not find it, even if it were there to be found. If it is to be found, it must be discovered elsewhere. Perhaps, contra Mill, in a Kantian categorical principle, or some other system of morals?
Final thoughts
So, in the dilemma of George, we have one vote against, one vote that could be interpreted as against, and one vote to abstain. Therefore, taking the democratic vote of Lemmon’s moral categories, George should refuse the job. Or, at least, get more information before he decides to take it. But the judgement against is hardly unequivocal. I am hard pressed to explain how any intellectual analysis of morality could be.
Williams identified what I think is a general problem with the analytical approach to moral dilemmas, in his critique of Utilitarianism specifically. In short: what framework of reasoning could we apply, which does not (at least) have the potential to bias our own evaluation of moral choices in favor of the outcomes amenable to that framework? Here’s the quote from Williams:
”…to exercise utilitarian methods on things which at least seem to respond to them is not merely to provide a benefit in some areas which one cannot provide in all. It is, at least very often, to provide those things with prestige, to give them an unjustifiably large role in the decision, and to dismiss to a greater distance those things which do not respond to the same methods. Just as in the natural sciences, scientific questions get asked in those areas where experimental techniques exist for answering them, so in the very different matter of political and social decision weight will be put on those considerations which respected intellectual techniques can seem, or at least promise, to handle. To regard this as a matter of half a loaf, is to presuppose both that the selective application of those techniques to some elements in the situation does not in itself bias the result, and also that to take in a wider set of considerations will necessarily, in the long run, be a matter of more of the same; and often both those presuppositions are false…”
If I am deontologically disposed, I am going to search for answers to the dilemmas that conform to hypothetical and categorical imperatives. If I am consequentialist in my disposition, I am going to search for answers that maximize utility. Who’s “correct”? How can I be sure that the hypothetical situation itself isn’t a product of my predisposition? For that, we’d have to appeal to some higher court — yet another tumble into an infinite regress…
Bibliography
- Smart, J. J. C., and Bernard Williams. Utilitarianism: For and Against. Cambridge University Press, 1973. Kindle Edition.
- Mill, John Stuart. On Liberty, Utilitarianism and Other Essays (Oxford World’s Classics). OUP Oxford. Kindle Edition.
- Lemmon, E. J. “Moral Dilemmas.” The Philosophical Review, Vol. 71, No. 2 (Apr. 1962), pp. 139–158. Duke University Press on behalf of Philosophical Review.