Sanchez continues the debate over Narveson’s moral structure, arguing that a moral foundation must have both internal and external influences.

Julian Sanchez is a senior fellow at the Cato Institute and studies issues at the busy intersection of technology, privacy, and civil liberties, with a particular focus on national security and intelligence surveillance. Before joining Cato, Sanchez served as the Washington editor for the technology news site Ars Technica, where he covered surveillance, intellectual property, and telecom policy.

Recovering from my Thanksgiving food coma, I belatedly noticed that Miles had gamely decided to take up the defense of Jan Narveson’s position in response to my critique of theories that seek to reduce morality to prudence or self-interest. First, Miles suggests a distinction. I had written:

> If your account of why it is wrong to torture people does not include the fact that it feels horrible to the person who is tortured, then your account is not one I am prepared to describe as a “moral” one.

Miles reads this as a claim from “<em>within</em> morality”—a substantive moral judgment—as opposed to a claim <em>about</em> morality. There is, to be sure, a risk of begging the question here. It would be circular and tendentious if I tried to turn my substantive commitments into formal requirements by suggesting, for instance, that morality is <em>by definition</em> the discipline that concerns itself with maxims we could rationally will as universal laws, or with securing the greatest happiness of the greatest number, then rejected all competing theories as not really about “morality” as I conceive it. So the point is well taken—but I don’t ultimately think I’m making that sort of mistake here. Additionally, I suspect there’s something backwards about a general insistence that justifications for behaving morally be offered in nonmoral terms, but I’ll get back to that at the end of the post. “Morality,” as an abstract term encompassing many diverse moral theories, is obviously a somewhat fuzzy set, what Wittgenstein would have called a “family resemblance” concept. But if there’s a thread that links the varied ways “moral” is and has been used by human thinkers, at least in the modern era, I think it’s the fundamental idea that there’s something about other people (or sentient beings, or whatever) that gives us independent reasons to treat them certain ways, quite apart from whatever might be in it for us. 
A dizzying array of theoretical superstructures arises from that same basic foundation: the simple and radical proposition that other people are real. I tried to drive that home with an example that does involve a particular, substantive moral judgment: “Torturing other people just for fun is wrong”—and I do doubt that any theoretical framework you might construct to <em>explain</em> that judgment could ever be on surer epistemic footing than the judgment itself—but my point was not intended to turn on our specific revulsion or “outrage” at torture. It was just a particularly dramatic way of highlighting the absence, in Narveson’s and similar views, of a feature I take to be distinctively moral, on the ordinary colloquial use of that term. To adapt Miles’ illustration, suppose I ask you whether there is any moral reason that I ought not buy a particular company’s products. If you tell me that their merchandise is overpriced, or of shoddy quality, or will subject me to derisive looks from hipsters because they stopped wearing that in Brooklyn in, like, 2008… then I’m going to give you a bemused (but not outraged) look and wonder if you know what “morality” means. These might all be reasons <em>of some sort</em> to avoid the company’s products, but they’re ordinary prudential reasons, and I asked you for a <em>moral</em> one. If, instead, you tell me that I should avoid this company because their products are made by developing-​world workers whose wages seem unreasonably low to you, I may well disagree with your substantive conclusion, but I at least recognize that you are making the right <em>type</em> of claim. Narveson, of course, agrees that we should have a rule against torturing people just for fun—the defect is in the considerations he takes to be relevant, not any outrageously wrong result. The relevant “intuition,” in other words, is not the moral <em>conclusion</em> that it is wrong to torture people. 
It is the claim that <em>how torture feels to the person who is tortured</em> is among the considerations relevant to formulating moral principles about torture. Now, this might sound like a merely semantic squabble, a fruitless search for the “Essence of Morality,” as Narveson puts it. We could just agree to call what I have in mind “morality,” and Narveson’s idea “schmorality,” and get on with it. But many people have thought, and continue to think, that we have a certain class of reasons for action—normally distinguished from ordinary prudential reasons—provided <em>directly</em> by the effects those actions would have on others (or, slightly differently, by the way they involve treating others). Perhaps there are no such reasons but, as Narveson argues, prudence alone ends up taking us a lot further toward the same upshot than has traditionally been thought. While that would be an interesting result, we should still be clear that we’re giving up on this <em>other</em> sort of reason that many people have also thought to exist. (Narveson, on page 126, pretty clearly rejects this picture, denying that there’s any distinct class of “moral reasons.”) By analogy: It might turn out that the deities described by various world religions don’t exist, but many of the rules of behavior those religions have presented as divine commands are actually perfectly sound, and can be justified in other ways. It would not aid clarity, in this case, to say that divine command theory turns out to be true after all: We may still have reasons to act in conformity with certain rules, but the <em>source</em> of those rules, crucially, is not what the divine command theorist thought—it’s not even in the same ballpark. As Miles, Narveson, and other moral internalists point out, another characteristic feature of morality is that it’s supposed to provide reasons for everyone to act. 
In Narveson’s words, the fact that some act “would be… horrendously wrong is not something we can just ignore or be indifferent to—or if we somehow manage to do so, others will not do so.… [T]hose who agree entirely that certain particular acts are wrong but who profess indifference to <em>that</em> are engaging in puzzling verbal behavior.” Unfortunately, Narveson’s general account of practical reasoning is conspicuously thin here. But it will help (following Parfit) to distinguish clearly between <em>motivating reasons</em> and <em>normative reasons</em>, which Narveson often seems to conflate. It is true that moral claims, claims about what we ought (morally) to do, are necessarily claims about what <em>reasons for action</em> we have. It is not an objection to the validity of a moral claim, however, that some people who understand all the terms contained in that claim may fail to be properly motivated, or refuse to acknowledge the relevant reasons. If I agree that the fact that torture would cause another person terrible agony provides me with a decisive reason not to torture, but still deny that I see why I shouldn’t do it, this is indeed puzzling verbal behavior. If, on the other hand, I simply deny that the suffering of others provides any reasons for me—if I say I just don’t care how much they suffer unless it’s apt to come back and bite me—then I’m not engaged in any logical or linguistic confusion; I’m just a bad person. Similarly, if I fail to see why my own future happiness or suffering in any way provides even defeasible reasons for me <em>now</em> to act, it may not be possible to argue me into it. (Even if I <em>do</em> recognize those reasons in principle, of course, <em>akrasia</em> may strike, leading me to act contrary to what I know to be my own strongest net reasons.) 
This is not grounds to doubt that my own future suffering is a source of valid and powerful reasons for action; it’s just a reason for doubting that I meet the minimum requirements for anything we can recognize as prudential or self-interested reasoning. As an empirical matter, criminals are often defective on both counts: A young street thug doesn’t just fail to care enough about the harms he imposes on others, but fails to care enough about how his pursuit of immediate gratification will harm his own happiness in the further future. The disposition to take the longer view, which Narveson has to assume of his rational agents to get his argument off the ground, is at least as much a contingent matter of socialization and cultural conditioning as the disposition to recognize the interests of others as potential sources of reasons. Narveson acknowledges that his own argument won’t work for every imaginable rational agent, but depends on people having acquired aims and values within a certain range. He believes that, empirically, it will turn out that except for the “totally imprudent or the totally fanatical… a clear-headed appraisal will confirm the wisdom of signing the social contract instead of bashing ahead as one will,” given that most members of society will have acquired (through socialization) subjective values that can be advanced through social cooperation. (If it is enough for Narveson’s theory to count as morality that <em>most</em> of us contingently have aims that make this a rational bargain, why isn’t it enough that <em>most</em> of us have similarly biologically and culturally instilled propensities to respond to the interests of others?) This may look like an anthropological or sociological claim—a prediction about how people who happen to be wired in a certain way will, in fact, dispose themselves to behave—but the qualifier “clear-headed” here makes it a normative claim. 
Manifestly, we are not all contractualist libertarians yet, so Narveson has to mean that we <em>ought rationally</em> to adopt this disposition, given certain brute facts about our actual values and concerns. Note an ambiguity here between normative and descriptive claims. In excluding the “totally imprudent” from the binding power of moral rules, Narveson seems to be sticking to the descriptive: Some people will be disposed to adopt such constraints, some won’t. But Narveson clearly regards the radically imprudent as irrational. The imprudent person is irrational because he fails to follow the general strategy—to adopt the disposition—that will maximize his satisfaction over the long haul. Let’s pry open the hood a bit and see how this works. It is not enough to make the descriptive claim that what we <em>now want</em> is to follow the general policy that will maximize the satisfaction of our aims over the long term—if that were already the case, there would be no work for practical rationality to do. The claim, rather, has to be that when one’s immediate inclinations or short-term desires conflict with a long-term maximizing strategy, what one <em>ought rationally</em> to do is act on the optimal long-term strategy. Why might this be? Not, again, because we <em>already</em> have an <em>actual</em> standing desire to do this: What we want to know is why having and acting on that desire, which amounts to the same thing as adopting such a disposition, is more rational than radical imprudence. One approach might be to suggest that a principle of temporal neutrality is rationally required, and that if we are neutral in this way, the weight of reasons provided by the satisfaction of our aims and desires over the course of a lifetime is greater than whatever reasons might be provided by any short-term benefit to be derived from acting on our immediate inclinations. 
This is a substantive (and undefended) claim about what rationality requires: Whatever our values and aims might be, rationality consists in responding to the reasons provided by the optimal <em>overall</em> satisfaction of our aims over time. If I am unsure whether I have a reason <em>now</em> to act to enable the achievement of some aims next week, or the avoidance of suffering next month, sufficient to override any desire I might have now to act in an incompatible way that would provide lesser immediate gratification, a temporal neutralist affirms: “Yes, you do! That’s pretty much what it is to be a rational agent!” I’ll suppose instead that Narveson sides with David Gauthier, upon whose work in <em>Morals By Agreement</em> the central argument in <em>The Libertarian Idea</em> is explicitly based. Narveson will say, then, that only our <em>current</em> preferences and aims count, but that if we engage in “clear-headed” deliberation, we will find that what Gauthier calls our “considered preferences” include a concern with our future well-being that trumps our contrary near-term inclinations. “Considered preferences” are supposed to be the ones we’d acknowledge having (now) after some process of ideal deliberation, including suitably vivid reflection on what all the different consequences of adopting different dispositions would be. As Gauthier writes, “rational choice must be directed to the maximal fulfilment of our present considered preferences, where consideration extends to all future effects in so far as we may now foresee them.” In other words, my preference for a little pleasure today in exchange for greater suffering tomorrow may be rational, but only if I have really thought about what all the consequences of my choice would be. This move, however, only dresses a substantive commitment in procedural drag. Why is what we have reason to do determined by these deliberatively purified preferences, and not our actual unreflective desires? 
Presumably because facts about our future experience (such as whether we will regret or endorse our present choice of dispositions) are rationally relevant. But the contents of preferences are either rationally evaluable or they aren’t. If they aren’t, there’s no basis for thinking preferences subject to informed deliberation generate more rational decisions than unreflective preferences. (This claim probably requires a closer argument than I want to give it here; the curious should consult Derek Parfit’s <em>On What Matters, Vol. I</em>, §13.) In any event, it seems like it requires a great deal of contortion to advance a view that makes policies and dispositions adopted over extended periods of time the central object of rational choice, while denying that there’s anything to be said about the substantive rationality of arbitrarily steep discount rates. The entire motivation for this analytic shift is the understanding that the agent has to be conceived as a being extended in time, and characterized by some kind of unity in dispositions and interests across time-slices. Otherwise the obvious question is: Why would you ever have reason to act according to the disposition you previously rationally adopted, rather than adopting, on the spot, the new disposition that produces the maximizing act in this individual choice situation? (This would, of course, collapse the act/disposition distinction entirely.) Even uncontroversial restrictions on rationality, like transitivity of preferences, tacitly imply <em>some</em> degree of coherence across time. The model of the agent embedded in any adequately thick concept of rationality is going to invoke the same kind of “intuition” Narveson says he wants to banish from ethical discourse. (Again, for a much, much more extensive treatment of Gauthier’s model of rationality, see Appendix B to Volume I of Parfit’s <em>On What Matters</em>.) This is where I think Narveson’s approach falls apart. 
While for the most part communitarian complaints about the excessive “atomism” of liberal or libertarian views are misguided, that sort of objection seems on point with respect to Narveson’s argument. The forward-looking rational agent who reflectively identifies outcomes accruing to future selves as <em>his</em> outcomes doesn’t just spring fully formed into being. That kind of agent is constructed through a process of socialization that—<em>whatever</em> the specific content of the culture or community in which it occurs—involves development of some fairly sophisticated neural machinery dedicated to entering the perspectives of others, being distressed at the sight of others enduring harm, and so on. A theory of practical rationality thick enough to get Narveson’s project off the ground (which I’m tentatively taking to be Gauthier’s) is implicitly a way of systematizing the reasons that will be recognized <em>as</em> reasons by an agent socialized to conceptualize its self in a certain way. (Contrast this with, say, the standards you would use to assess the rationality of a decision-generating computer program loaded with arbitrary goals to optimize each time it runs.) But that kind of agent is—with the possible exception of sociopaths—always already an agent socialized to recognize at least some other people as sources of reasons. The burgeoning rational agent is always already a moral agent. Now we’re better situated to return to this question of arguments “within” and “outside” morality. If a theory of rationality is supposed to be normative, it must in one sense stand “outside” the particular views we might currently hold about what is rational or irrational. That is, in assessing what actions and dispositions are rational given our aims and beliefs about the world, a <em>normative</em> theory of practical rationality cannot take as a fixed point our <em>current</em> belief that action A and disposition D are the most rational. 
The point of systematizing our reasons and deliberating in conformity with some model of rationality is, after all, to <em>correct</em> our beliefs about what it is rational to do. But that deliberation also has to begin <em>inside</em> a particular (though still very abstract) concept of rational agency. You couldn’t coherently <em>argue</em> a child who didn’t already have the appropriate self-conception <em>into</em> being that kind of agent from the outside. All you can do is order and render more coherent sets of reasons the agent is constitutionally disposed to recognize. The situation in ethics is closely analogous. Insofar as moral deliberation is supposed to <em>correct</em> our views about what is right or wrong, we want to exclude one type of “claim within morality” from our theory building: Our <em>current</em> second-order judgments about which particular ways of acting are right or wrong. An adequately normative theory does need foundations “outside morality” in <em>that</em> sense. But that’s not at all the same as needing to be able to argue people <em>into</em> the moral perspective—into the core disposition to regard other people as independent sources of reasons—from a position entirely outside it. That’s just lecturing to the infant again. At best you’ll get what Narveson achieves if we assume his argument is otherwise successful: Heartening reason to believe that even sociopaths can be productive members of society by adhering to a sort of cargo-cult simulation of morality. All moral theory can or <em>has</em> to do is revise and rationalize the disposition to recognize other-provided reasons if it’s there in some basic form. Which it will be, to some extent, in any minimally socialized member of any human community, in exactly the same way that the form of rational agency and range of substantive aims will be prevalent—a premise Narveson needs to make his argument work. 
We do not all, already, accept that we’re rationally required to be disposed to follow libertarian morality, given our underlying aims as four-​dimensional cooperative agents. Narveson needs to claim that the existence of <em>this</em> type of moral disagreement is not a fatal problem for his theory: We <em>would</em> see that we have decisive reasons to adopt this disposition if we deliberated appropriately (and would reflectively endorse keeping this disposition if we already have it). It does not become a more fundamental problem after we help ourselves to the additional premise that most people, along with being selves extended in time whose goals can be affected by rule governed interaction with others, are also not sociopaths.