
Since <a href="http://www.libertarianism.org/blog/libertarian-idea-setting-scene">Miles has launched an ongoing discussion of Jan Narveson’s excellent <em>The Libertarian Idea</em></a>—which, along with David Gauthier’s <em>Morals by Agreement</em>, I’d encourage everyone to read—I thought I’d interject to voice agreement with one of his points about something that’s always seemed off to me about Narveson’s (and Gauthier’s) approach. As Miles notes, in contrast with contractarians like John Rawls or T.M. Scanlon, Narveson is not trying to develop a construction procedure that embeds even a thin foundational intuition about the equal moral importance of other people. Rawls assumes that actual citizens are “rational”—they have a conception of the good and of their own interests that they pursue—but also “reasonable,” in that they recognize themselves as having independent reason to respect the equal dignity of their fellow human beings, to treat them fairly, and so on. The contracting parties in his Original Position are held to be strictly rational and self-interested as a simplifying assumption, with the “reasonableness” aspect built into the conditions of the model (in that case, ignorance of the specific social position one will occupy).

That’s emphatically not what Narveson is doing. In effect, he wants to reduce morality to prudence, showing that people would have strictly self-interested reasons to constrain their own behavior even if they are not “reasonable” or concerned with the welfare and dignity of others except insofar as those others are able to aid or hinder their self-interested pursuits. If successful, this would be an interesting result: It would show, in effect, that there are pseudo-moral principles even sociopaths would have reason to respect. But it is still, ultimately, an ethics for sociopaths.

An anecdote, to highlight just how odd this is. Many years ago I attended a philosophy colloquium with Narveson and posed the following hypothetical to him. Philosophers of mind sometimes ask us to imagine “zombies”: entities who behave just as humans do, pursuing goals and cooperating or fighting with others and so forth, but who have no internal lives. There is nothing it’s like, on the inside, to <em>be</em> a zombie. You can imagine, if you like, that they’re a sort of sophisticated android programmed to simulate human behavior. They will pursue human-like goals, according to their programming, but won’t actually <em>feel</em> satisfaction or happiness when those goals are achieved. They will resist and fight back if attacked, and in other ways respond just as humans in pain do, but they won’t actually <em>feel</em> pain—they won’t suffer.

Now, on Narveson’s approach, all that really matters to the self-interested contractor is how other people behave in response to our actions. If humans and androids react in the same way, in other words, then when we’re considering the adoption of a “moral” principle prohibiting torture, it would not factor into the deliberative process at all whether being tortured involves a subjective experience of horrible agony for the person on the wrong end of it. All that matters is how they will characteristically behave in response.
By the same token, if there were some life form that was sentient but unable to interact with us—say, a highly intelligent plant that we (somehow) knew had a rich inner life—Narveson’s account gives us no reason to refrain from inflicting horrible suffering on it. Now, this was many years ago, and I don’t know what Narveson would say about this hypothetical today, but at the time he agreed that on his view we’d have no greater reason to restrain our actions toward humans than toward zombie androids, and no reason to refrain from inflicting terrible suffering on the sentient trees.

This might seem like a bit of irrelevant philosophical fancy, since we don’t, in fact, have to deal with androids and intelligent plants. But I think it highlights something deeply defective in Narveson’s approach. If your account of why it is wrong to torture people does not include the fact that <em>it feels horrible to the person who is tortured</em>, then your account is not one I’m prepared to describe as a “moral” one.

More generally, attempts to reduce morality to prudence tend to assume that there’s something metaphysically unproblematic about the idea, not just that people <em>do</em> care about their long-term self-interest (as opposed to just their immediate short-term desires), but that they have <em>reason</em> to, whereas the claim that they have similar reasons to care about or respect the interests of others is some kind of “queer” claim standing in need of special explanation. I recommend to those who share this view Derek Parfit’s seminal <em>Reasons and Persons</em>, as well as the more recent <em>On What Matters</em> (whose two thick volumes I’m still slogging through with great interest).

Theoretical, moral, and practical reasoning all ultimately depend on foundational axioms that can’t be established without circularity. In logic, it’s the familiar list of axioms and inference rules; in ethics, it’s the basic idea that <em>other people are real</em>, and that their happiness and suffering fundamentally matter in some way, just as much as your own. That all these forms of reasoning “hit bottom” at some point is, admittedly, intellectually unsatisfying. But it’s also a fact we’re stuck with, and trying to dismiss those foundational domain-specific axioms as mere intuition seems less like a road to progress than an attempt to change the subject.