Thursday, November 17, 2016

On Moral Buck-Passing

I want to enlarge upon an issue briefly discussed in the last post. It is, I think, an incredibly thorny problem, and I’d like to workshop some possible solutions or constructive approaches.

I’ll be looking at the issue here through the lens of consequentialism. Other moral theories undoubtedly have unique takes on it, but my metaethics and metaphysics have led me pretty decisively to the view that any moral properties deserving of realist countenance must be, first and foremost, properties of states of the world, and properties of actions or persons only in a derivative sense. I’ll lay all this out in argument form in a future post.

For now, though, let me throw a few scenarios at you:

(1) A man at a zoo, ignoring all posted warnings, scales a fence and enters the lions’ enclosure. He is attacked and killed. Who between the man and the lion bears greater responsibility for the man’s death?

(2) A mentally ill man with tendencies toward violence is denied insurance coverage (for legally sound reasons) for medications he would otherwise be unable to afford, which are necessary to keep his symptoms under control. Unmedicated, he eventually kills someone. Who is to blame?

(3) A small cult is publicly satirized and shamed. Indignant, its members retaliate, and an innocent bystander to the whole affair is killed. On whose hands is the bystander’s blood?

(4) You learn that a company has been engaging in unethical (though by some technicality still legal) activity. You take to your social media platform of choice and spearhead a large-scale boycott. Profits fall precipitously and the company duly ceases the unethical behavior but also, in order to absorb the financial blow, lays off a third of its ground-level work force (most of whom had no participation in or even knowledge of the unethical activity). Who is to blame for the suffering of those now unemployed?

Now, in each of these examples, focus only on the proximate cause of the bad consequence: the lions, the mentally ill man, the aggrieved cult members, and the company (or its executives). However you ultimately came down on each question, I suspect you found a greater willingness to assign blame to the proximate cause as you progressed from (1) to (4), perhaps with a corresponding increase in reluctance to assign blame to the more remote causes (the man climbing the fence, the insurance company, the satirist, and the boycott leader) whose ultimately negative effects are brought about through the actions of the more proximate causes. In any case, I want you to think about how you reasoned in each case and see if you can pinpoint the factors on which any changes in your judgments about the more deserving focus of blame seemed to depend.

Consequentialists seek to act in the world in order to bring about outcomes that realize or facilitate some moral good—whatever that may be per the particular species of consequentialism in question. In order to do this reliably, they must try to predict the likely consequences of a considered action (as far downstream as feasible) and weigh the costs and benefits of each. Often, a very important element of this calculus consists of the responses other persons are likely to have to the action. In some cases, we might have good reason to think that a person is likely to respond to an action in an immoral way (i.e., with an action of his or her own that brings about some state that would be reckoned a moral harm by the consequentialist). Now, suppose we go ahead with the action and the other person duly brings about the bad consequence we predicted. Let us further stipulate that the harm brought about by the second action is greater in magnitude than the benefit brought about by the first. Is the other person wholly at fault for that consequence or do we share any of the blame? Generalizing the question: Can a consequentialist ever safely pass the moral buck?

It’s tempting to seek to answer this question by appealing to juridical notions of culpability. We want to know whether the proximate cause of a bad consequence was of sound mind at the time the action in question was taken, whether that person was capable of understanding that the action was wrong, and so forth. These are important considerations, but they don’t, I think, comprise all the relevant moving parts. Before we dig any deeper, though, we need to do a bit of terminological housekeeping.

In a consistently consequentialist framework, talk of moral responsibility is, I contend, ultimately a sort of shorthand for talk about where in a causal chain one might most effectively intervene in order to bring about good outcomes or prevent or ameliorate bad ones. We punish agents (i.e., persons of sound mind) who’ve done bad in order to dissuade them from doing bad again or to dissuade others from doing bad at all, and we praise and reward those who’ve done good in order to incentivize them and their observers to do good in the future. To determine the most appropriate locus of intervention, we must assess, as best we can, both the ease of bringing an intervention to that locus and the likelihood that such an intervention will have the desired effect. The relevant relationship may be crudely expressed as follows:

Degree of moral responsibility = P(I&S) = P(I) * P(S|I),

where “P(I)” is the probability of bringing the intervention and “P(S|I)” is the probability of success once the intervention is brought.

A more complete calculus would also have us factor in the net cost of the intervention, but this simplified version will do for the present discussion (one could argue that the above two probabilities jointly capture the cost—or at least a considerable amount of it—insofar as costlier interventions will be less likely to be attempted and will have higher thresholds for success). It is also the case that for a given node in a causal chain, there may be several feasible interventions with differing costs and probabilities of success. It’s an interesting question whether the moral responsibility of a node might depend in some way on the number of options available for influencing its behavior, but that’s a topic to be taken up another time. In what follows, then, we’ll only consider for each node the intervention with the highest value of P(I&S).
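
To make this bookkeeping concrete, here’s a minimal sketch in Python of the selection rule just described. Everything in it (the function name, the node names, the candidate interventions, the probabilities) is invented purely for illustration; it’s a scratchpad, not a serious moral calculus.

    # Toy model: each node in a causal chain admits one or more feasible
    # interventions, each represented as a pair (P(I), P(S|I)).
    def responsibility(interventions):
        # Degree of moral responsibility = the highest P(I&S) = P(I) * P(S|I)
        # among the node's feasible interventions.
        return max(p_i * p_s_given_i for p_i, p_s_given_i in interventions)

    # Illustrative numbers only.
    nodes = {
        "fence-scaler": [(0.9, 0.6), (0.5, 0.9)],  # e.g., starker warnings; outreach
        "lion":         [(0.9, 0.05)],             # lions resist moral suasion
    }
    for name, options in nodes.items():
        print(name, round(responsibility(options), 3))  # fence-scaler 0.54, lion 0.045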

Another issue we need to bring to the table is what we may call moral justification (I would call it moral rightness, in contrast to moral goodness, but “justification” brings out the essential feature more clearly). In epistemology, of course, the justification of a belief is widely distinguished from its truth value. One could be justified in believing something (e.g., in accordance with solid epistemic principles and all the best presently available evidence) which nevertheless turns out to be false. And one could, of course, believe something that’s in fact true without being justified in doing so, as when one believes something for irrelevant reasons. In a similar vein, one may have very good reasons to suspect that an action under consideration will bring about net good, when it in fact turns out to do the opposite. It seems that in cases like these, the individual is responsible for the bad outcome but still justified in doing what she did (i.e., she did not exhibit any marked failure of rationality in arriving at the conclusion that the action was the right thing to do).

Now, how we answer the justification question affects what sorts of interventions it is reasonable to bring to a responsible moral agent. If the agent had been well-intentioned in the first place, then simply knowing of the unforeseen bad consequence may be more than enough to prompt an appropriate change in her behavior (e.g., attempting to account for a new variable, brought to light by reflection on the present case, in relevantly similar future situations). It is, of course, also possible that the bad outcome was due entirely to lousy moral luck, and that nothing gleaned from the episode militates toward any change in the moral calculus used by the agent. In such cases we wouldn’t, I think, wish for her to alter her behavior at all, and so no intervention will be needed.

Now, the above scenario embeds only a very simple causal chain with two nodes: An actor and the (unforeseen) bad consequence she brings about. In cases like the four with which we began this post, there is another actor between the two nodes whose (foreseen) response to the original action serves as the proximate cause of the bad outcome. So, we’ve got something like the following causal structure:

(A) → (a1) → benefit
       ↓
(B) → (mb) → (a2) → harm,

where (a1) and (a2) are the two actions undertaken by (A) and (B), respectively, (mb) is some mental state of (B), and |harm| > |benefit|. 

In trying to decide what action to perform, (A) must take into account what (B) is most likely to do, as best she can ascertain, and what (B)’s degree of moral responsibility for that action would be. Now, in each of our four opening examples, the occupiers of the (B) node all had pretty high likelihoods of bringing about bad consequences. These outcomes were highly foreseeable from the (A) node occupiers’ vantage points.

What about responsibility? The problem, as I see it, is that simply in virtue of entertaining these sorts of moral questions with an open mind, (A) will have a higher value for P(I&S) than (B), at least as best she’ll be able to judge. That is to say, she knows a moral intervention will be more likely to succeed with her than with (B)—since she is already of the right sort of moral disposition—and will thus always be more justified in holding herself responsible for the bad consequence than in holding (B) responsible. At the point of contemplation, (A) is in a unique position vis-à-vis her moral epistemology, for she is her own surest intervener. And in the mere act of contemplating the decision in the first place, she has shown herself to be a more promising target of moral intervention than anyone to whose mental states she has no direct access (i.e., anyone else).

This isn’t to say that (B) could not be saddled with some degree of responsibility, perhaps modulo his soundness of mind or reflective capacity. The problem, though, is that whatever the actual values of P(I&S) for (A) and (B), (A) must ultimately make a dichotomous choice: Do the action under consideration or don’t. Now, if she knows that the action is likely to ultimately lead, via the counteraction of (B), to a state of net moral harm, and if she judges herself more responsible for that state than (B), then it would seem she must conclude that she shouldn’t perform the action.
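
Abstracting away from who bears how much responsibility and looking only at expected outcomes, the bind can be put in crude expected-value terms. The sketch below uses invented numbers, and the function and the p_counteract parameter are my own shorthand; only the threshold condition matters.

    def expected_outcome(benefit, harm, p_counteract):
        # (A)'s expected net outcome if she acts: the benefit of (a1) minus
        # the expected harm of (B)'s counteraction (a2). Abstaining yields 0.
        return benefit - p_counteract * harm

    # Stipulations from above: |harm| > |benefit|, and (B) is strongly
    # inclined to counteract.
    benefit, harm, p_counteract = 1.0, 3.0, 0.8
    act = expected_outcome(benefit, harm, p_counteract)  # 1.0 - 2.4 = -1.4
    print("act" if act > 0 else "abstain")               # -> abstain
    # Whenever p_counteract * harm > benefit, abstention dominates.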

But this seems absurd. If this reasoning were followed consistently, the moral would end up hamstrung and held hostage by the immoral. Bullies and terrorists everywhere would be rewarded and incentivized. Clearly, the ideal situation would be one in which (A) performs the action (for remember, it is beneficial on its own) and (B) abstains from or is prevented from performing the counteraction, but it seems it will always be more morally rational for (A) to refrain from initiating the causal chain in the first place than to initiate it and hope (B) doesn’t do what she knows he’s strongly inclined to do. What looks like individual rationality would breed collective catastrophe.

There’s another important dynamic here concerning the relative power (A) and (B) wield over each other. When (A) has the resources necessary to punish (B) and so disincentivize its counteraction, then (A) may feel much more justified in going through with the initial action, despite (B)’s threats. The central dilemma of this post becomes particularly acute, though, when the power ratio is reversed. To venture briefly out of the realm of the abstract: This is precisely the circumstance in which America’s (and indeed the world’s) marginalized find themselves. They face a situation in which any overt attempts to secure fairer treatment for themselves are likely to be met with retaliatory assaults by a more powerful social class on whatever measures of justice they had already managed to win. Now, the solution that leaps most readily to mind vis-à-vis cases like this involves (A) seeking sufficient coercive power over (B) to be able to hold it to moral account, but, of course, each move toward securing this power is apt to instantiate the same problematic causal structure.

To the surprise of no one who’s been even a casual browser of Vox or The Atlantic over the past couple years, centrist liberals have in large numbers already decided that the marginalized and their allies—the (A) node occupiers—bear the greatest share of responsibility for the Molotov cocktail (as Michael Moore put it) the alt-right and their allies have just lobbed into the Oval Office. But if everyone else shared this belief, whence would come the necessary justice for (A)—or do the centrists believe that no further justice is needed? If so, they ought to explicitly articulate and defend this claim. In preaching reconciliation, they are willing to risk an awful lot of protracted harm—harm which most of them would not in any direct way have to bear—on the hope that the aggrieved right can be wheedled toward greater acceptance of the people they view as line-cutters and cheats. For reasons canvassed in the previous post, I’m not terribly sanguine about this prospect.

So, what prima facie options are there for a consequentialist to justify passing the moral buck to (B) in situations like these?

A few:

One might, of course, simply take this as a reductio of consequentialism. However, I’m very reluctant to give my intuitions at the level of applied ethics authority over my intuitions at the metaethical level. To do so would seem to tacitly commit me to a view that I already had a coherent, broadly truth-tracking unconscious moral “theory” prior to any attempts to ground it metaphysically in the world, and I’m extremely skeptical of this being the case.

Of course, one might also seek to jettison or supplement the account of moral responsibility sketched above. I’m open to this move in the abstract, but care will need to be taken to ensure that any proposed alternatives aren’t ultimately founded on any irreducibly deontic or aretaic intuitions.

One might adopt a sort of proximate consequentialism which would doggedly restrict moral judgment to only the most proximate causes of (A)’s action. My first thought here is that such a move seems unprincipled and ad hoc, though there are (a small number of) consequentialists who hold this view due to deep skepticism about our ability to predict the future beyond the most immediate effects. It should be noted, however, that this stance will not likely sanction buck-passing in every scenario in which it is desired. If (B)’s counteraction precedes the intended effects of (A)’s action in time, then ought the proximate consequentialist regard (B)’s counteraction as the proximate effect of (A)’s action—a fortiori if proximate consequentialism was adopted on the basis of uncertainty vis-à-vis remote effects? This approach would seem to penalize any sort of long-term moral planning. I am not quite ready to accept that we can’t do better than this.

Perhaps the most obvious answer is a sort of rule consequentialism that would sanction moral buck-passing as a way of keeping us from getting immured in local minima in what we might call moral error space. Note that this approach, in attempting to see beyond the horizon of the local minimum, is rather straightforwardly antithetical to proximate consequentialism. Now, there are likely cases in which moral buck-passing really should not happen; a rule can be a very blunt instrument, and I think what’s really wanted here is a more nuanced decision procedure. So what conditions must be met in order to justify (or prohibit) an instance of buck-passing? Again, I’m skeptical that legal notions of culpability are the right ones to lean on here. It seems fairly obvious to me that an actor of sound mind could, due to personal convictions, be much more resistant to moral instruction and rehabilitation than an actor deemed unfit to stand trial for his actions. 

The juridical view would have us ask to what extent (B) could have responded to (A)’s actions differently. Alternatively, we might ask to what extent (B) could have realized the harm of its action even if (A) had done nothing. On this view, in order for (A) to be responsible for the harm realized by (B), (A)’s action would have to be a necessary (and not merely contributory or sufficient) cause of (B)’s action. This seems to give us acceptable answers to cases (1) and (4) discussed above. The lion cannot eat the man until the man makes himself available to the lion, so (A) gets the—ahem—lion’s share of responsibility in that case. The corporation, on the other hand, is in a position to lay off employees with or without some profit-harming boycott. On closer inspection, though, things get murky pretty quickly.

This notion of “could have” needs a lot more unpacking. Are we talking here only about something like a broad capacity? It seems it can’t just be that, since the lion had the capacity to kill the fence-scaler all along and had hitherto lacked only the opportunity. The mentally ill man, on the other hand (I am assuming most would not wish for the moral buck to be passed to him), has both capacity and opportunity for the murder he commits. What the lack of medication seemed to cause for him was a particular desire to kill. But do not the acts of (A) in scenarios (3) and (4) also provide the requisite desires for the harmful counteractions taken by (B)? Perhaps it matters whether the particular desire is character-consistent for (B). The corporation in (4) may have no particular predilection to fire those low-level employees prior to (A)’s boycott, but it has a more general willingness to downsize in order to offset profit loss. So (A)’s boycott only gave the corporation incentive to act on a pre-existing stable disposition. Sounds…kind of right, but look again at scenario (2) in light of this. Perhaps the man has no particularly strong general proclivity toward violence as long as his symptoms are controlled. But perhaps it is part of even his healthier character that he is willing to respond violently to sufficiently strong perceived threats. And suppose the murder he commits was due to his misperceiving the victim as just such a threat. Could not the desire on which he murderously acts be regarded as character-consistent in the same way as in scenario (4)? Does it matter that he acts on faulty perception while the corporation does not?
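
Purely as a scratchpad, here’s the shape such a decision procedure might take. Every predicate below (the necessity of (A)’s action, the character-consistency of (B)’s desire, the soundness of (B)’s perception) is a placeholder for analysis this post hasn’t supplied:

    def buck_passes_to_B(a_necessary_cause, desire_character_consistent,
                         perception_sound):
        # Toy formalization of the conditions floated above.
        if a_necessary_cause:
            return False  # (B) couldn't realize the harm without (A): blame stays with (A)
        if desire_character_consistent and perception_sound:
            return True   # (B) acted, undistorted, on a stable disposition: blame (B)
        return None       # murky: the conditions underdetermine a verdict

    print(buck_passes_to_B(True, True, True))    # (1) the lion: False
    print(buck_passes_to_B(False, True, True))   # (4) the corporation: True
    print(buck_passes_to_B(False, True, False))  # (2) the unmedicated man: None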

Ok, enough for now. Suffice it to say that a rule consequentialist justification for buck-passing—though it is, in my view, the most promising route—will take a lot of work to get off the ground.

Any other ideas?

Bonus task: Start noticing just how terrifyingly ubiquitous this problem is.

